
Optim fused linear grad add #55927

Merged: 2 commits merged into PaddlePaddle:develop from optim_fused_linear_grad_add on Aug 3, 2023

Conversation

FeixLiu (Contributor) commented on Aug 2, 2023

PR types

Others

PR changes

Others

Description

Optim fused linear grad add

PCard-70444

paddle-bot (bot) commented on Aug 2, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

FeixLiu force-pushed the optim_fused_linear_grad_add branch from 1fd0700 to 17b0fc2 on August 3, 2023 01:51
FeixLiu force-pushed the optim_fused_linear_grad_add branch from 13bf600 to 39fddc6 on August 3, 2023 02:15
qingqing01 (Contributor) left a comment:

LGTM

Xreki (Contributor) left a comment:

Almost LGTM

@@ -65,7 +66,7 @@ void FusedLinearParamGradAddImpl(const Context &ctx,
                                  use_addto);
   }
 
-  if (dbias_out == nullptr) return;
+  if (!has_bias) return;
Contributor comment on this change:

In practice, dbias_out will never actually be nullptr.
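To make that point concrete, here is a minimal, self-contained C++ sketch of the pattern this hunk moves to (the struct and function names are simplified placeholders, not the actual Paddle types): because the framework always hands the kernel an allocated dbias_out, the early return has to be driven by the has_bias attribute rather than by a nullptr check.

```cpp
#include <iostream>

// Simplified stand-in for the kernel's output tensor slot. In the real kernel
// this is an output tensor the framework always provides, so a
// `dbias_out == nullptr` check never fires.
struct TensorStub {
  bool written = false;
};

// Hypothetical analogue of the tail of FusedLinearParamGradAddImpl: the
// bias-gradient work is skipped based on the has_bias attribute.
void AddBiasGrad(bool has_bias, TensorStub* dbias_out) {
  // Old guard: if (dbias_out == nullptr) return;  // never triggers in practice
  if (!has_bias) return;  // new guard: skip the bias-grad work entirely
  dbias_out->written = true;  // placeholder for the actual bias-grad reduction
}

int main() {
  TensorStub dbias_out;  // always allocated, mirroring the real kernel
  AddBiasGrad(/*has_bias=*/false, &dbias_out);
  std::cout << std::boolalpha << dbias_out.written << "\n";  // false
  AddBiasGrad(/*has_bias=*/true, &dbias_out);
  std::cout << std::boolalpha << dbias_out.written << "\n";  // true
}
```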

@@ -159,7 +161,7 @@ void FusedLinearParamGradAdd(const Context &ctx,
     multi_precision = false;
   }
 
-  if (dbias_out) {
+  if (has_bias && dbias_out) {
Contributor comment on this change:

Should the handling of dbias_out here be kept consistent with how dweight_out is handled at L136 above?
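For illustration only, a small sketch (hypothetical names, not the real Paddle code) of the guarded branch this second hunk introduces: the bias-gradient output is only touched when both the has_bias attribute is set and the output slot exists.

```cpp
#include <iostream>

// Simplified stand-in for the bias-gradient output slot.
struct TensorStub {
  bool touched = false;
};

// Hypothetical analogue of the guarded branch in FusedLinearParamGradAdd:
// both the attribute and the optional output pointer are checked.
void MaybeComputeBiasGrad(bool has_bias, TensorStub* dbias_out) {
  if (has_bias && dbias_out) {
    dbias_out->touched = true;  // placeholder for allocating and filling dbias
  }
}

int main() {
  TensorStub dbias;
  MaybeComputeBiasGrad(/*has_bias=*/true, &dbias);    // branch runs
  MaybeComputeBiasGrad(/*has_bias=*/false, nullptr);  // branch safely skipped
  std::cout << std::boolalpha << dbias.touched << "\n";  // true
}
```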

FeixLiu merged commit 9187346 into PaddlePaddle:develop on Aug 3, 2023
FeixLiu deleted the optim_fused_linear_grad_add branch on August 3, 2023 13:09
FeixLiu added a commit to FeixLiu/Paddle that referenced this pull request Aug 9, 2023
FeixLiu added a commit that referenced this pull request Aug 9, 2023
* skip CopyOrAdd when tmp grad is None (#55679)

* Optim fused linear grad add (#55927)
FeixLiu added a commit to FeixLiu/Paddle that referenced this pull request Aug 29, 2023
hitywt pushed commits to hitywt/Paddle that referenced this pull request on Nov 25, 2023, Nov 28, 2023 (three times), and Dec 4, 2023 (twice), each with the message:

… optim (PaddlePaddle#56094)

* skip CopyOrAdd when tmp grad is None (PaddlePaddle#55679)

* Optim fused linear grad add (PaddlePaddle#55927)