Replace TransformerEncoder with Mamba #199
Comments
Hi @sunxin010205, what is Mamba?
Mamba ("Mamba: Linear-Time Sequence Modeling with Selective State Spaces") is a recently proposed architecture that replaces attention with a selective state-space model, giving linear complexity in sequence length instead of the Transformer's quadratic cost.
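For context, a minimal usage sketch of the official `mamba_ssm` package; the shapes and `d_model=512` below are illustrative assumptions, not taken from the attached files. A single Mamba block maps a (batch, length, channels) sequence to the same shape, much like a Transformer encoder layer, but with cost linear in sequence length:

```python
import torch
from mamba_ssm import Mamba  # official package from state-spaces/mamba (requires CUDA)

# One Mamba block: (batch, seq_len, d_model) -> (batch, seq_len, d_model),
# analogous in interface to a single nn.TransformerEncoderLayer.
block = Mamba(d_model=512, d_state=16, d_conv=4, expand=2).cuda()

x = torch.randn(2, 196, 512, device="cuda")  # e.g. 2 motions, 196 frames, 512 features
y = block(x)
print(y.shape)  # torch.Size([2, 196, 512])
```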
I'm not familiar with this one.
I tried to replace the transformer layer with other architectures and encountered the same situation as yours, where the loss is around 2.4 and cannot be reduced. May I ask if you have found a solution later on?
Sorry, I have not continued this part of the work for now, but my overall loss after the replacement was about 0.2. If you find the problem, could you please share the solution? I have some ideas: perhaps the original encoder configuration is no longer suitable once the Transformer is replaced, so you could try other encoder setups.
Thank you for your reply. I am also trying to use Mamba or a combination of Mamba with the transformer to replace the transformer layer. May I ask if you have conducted any experiments combining Mamba with the transformer?
I have used 4 Mamba layers + 4 Transformer layers; the loss dropped normally and the results were decent. I used a structure that alternates one Mamba layer with one Transformer layer; it may be somewhat faster, and the parameter count is roughly the same.
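A sketch of what such an alternating stack might look like; this is a reconstruction of the idea described above, not the commenter's actual code, and the widths (`d_model=512`, `ff_size=1024`), four layer pairs, and the `mamba_ssm` import are assumptions:

```python
import torch.nn as nn
from mamba_ssm import Mamba  # assumes the official mamba-ssm package is installed


class MambaTransformerEncoder(nn.Module):
    """Alternates one Mamba block with one TransformerEncoderLayer (4 + 4 = 8 layers).
    Operates on (seq_len, batch, d_model) tensors, like MDM's seqTransEncoder."""

    def __init__(self, d_model=512, nhead=4, ff_size=1024, dropout=0.1, num_pairs=4):
        super().__init__()
        self.mamba_norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(num_pairs)])
        self.mamba_layers = nn.ModuleList([
            Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
            for _ in range(num_pairs)])
        self.attn_layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                       dim_feedforward=ff_size, dropout=dropout,
                                       activation="gelu")
            for _ in range(num_pairs)])

    def forward(self, x):
        # x: (seq_len, batch, d_model)
        for norm, mamba, attn in zip(self.mamba_norms, self.mamba_layers, self.attn_layers):
            # Mamba expects (batch, seq_len, d_model); wrap it in a pre-norm residual.
            h = mamba(norm(x).transpose(0, 1)).transpose(0, 1)
            x = x + h
            x = attn(x)  # TransformerEncoderLayer handles its own norms/residuals
        return x
```

With four pairs this matches the original eight layers in depth: the Mamba halves scale roughly linearly with sequence length while the Transformer halves keep bidirectional attention.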
Great, thank you for your reply; it will be helpful for my subsequent experiments!
The logger output during training looks like this, but I can't find the definition of `loss_q3` in the code:
| grad_norm | 1.84 |
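If it helps: `loss_q3` is most likely not a separate loss term defined in MDM itself. In the guided-diffusion-style training loop that MDM adapts, `log_loss_dict` additionally logs each metric's mean over each quartile of the sampled diffusion timesteps, producing keys `loss_q0` through `loss_q3`. Below is a sketch of that logging logic, reconstructed from the guided-diffusion code this repo builds on; the exact file and import path are assumptions:

```python
from diffusion import logger  # assumption: the repo's logger utility; path may differ


def log_loss_dict(diffusion, ts, losses):
    for key, values in losses.items():
        # Log the overall mean, e.g. "loss".
        logger.logkv_mean(key, values.mean().item())
        # Also log per-quartile means: quartile 0 covers the smallest sampled
        # timesteps (least noise), quartile 3 the largest (most noise).
        for sub_t, sub_loss in zip(ts.cpu().numpy(), values.detach().cpu().numpy()):
            quartile = int(4 * sub_t / diffusion.num_timesteps)
            logger.logkv_mean(f"{key}_q{quartile}", sub_loss)
```

So `loss_q3` is simply the training loss averaged over the noisiest quarter of sampled timesteps, not an extra loss term to tune.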
Hi!
After replacing the eight-layer Transformer encoder with Mamba, the training loss fails to decrease. Could it be that Mamba doesn't perform as well as the Transformer inside the diffusion model? Looking forward to your response.
Here is my code:
mamba.txt
mdm.txt
minimamba.txt
loss_log.txt
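For anyone attempting the same swap, here is a minimal, hedged sketch of the kind of change involved. It is not the attached code; the attribute names (`seqTransEncoder`, `latent_dim`, `num_layers`) follow MDM's `model/mdm.py` ('trans_enc' branch), while the Mamba stack itself and the `mamba_ssm` import are assumptions:

```python
# In model/mdm.py, the eight-layer Transformer encoder is built roughly like this:
#   seqTransEncoderLayer = nn.TransformerEncoderLayer(d_model=self.latent_dim, ...)
#   self.seqTransEncoder = nn.TransformerEncoder(seqTransEncoderLayer, num_layers=self.num_layers)
#
# One way to swap in Mamba while keeping the same (seq_len, batch, d_model) interface:

import torch.nn as nn
from mamba_ssm import Mamba  # assumes the official mamba-ssm package is installed


class MambaEncoder(nn.Module):
    """Stand-in for nn.TransformerEncoder with the same input/output layout."""

    def __init__(self, d_model, num_layers=8):
        super().__init__()
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(num_layers)])
        self.blocks = nn.ModuleList([
            Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
            for _ in range(num_layers)])
        self.final_norm = nn.LayerNorm(d_model)

    def forward(self, xseq):
        # xseq: (seq_len, batch, d_model), as MDM passes to seqTransEncoder
        x = xseq.transpose(0, 1)  # Mamba wants (batch, seq_len, d_model)
        for norm, block in zip(self.norms, self.blocks):
            x = x + block(norm(x))  # pre-norm residual around each Mamba block
        return self.final_norm(x).transpose(0, 1)


# ...and in MDM.__init__, instead of nn.TransformerEncoder(...):
# self.seqTransEncoder = MambaEncoder(self.latent_dim, num_layers=self.num_layers)
```

One relevant difference: Mamba scans the sequence causally by default, whereas MDM's Transformer encoder attends bidirectionally with no causal mask; that mismatch alone could plausibly contribute to a loss that plateaus.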