
Fix bug in cpu_bfloat16_pass for MKLDNN #48481

Closed · wants to merge 8 commits

Conversation

yeliang2258 (Contributor)

PR types

Bug fixes

PR changes

Others

Describe

Fix bug in cpu_bfloat16_pass for MKLDNN

paddle-bot bot commented Nov 28, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@yaomichael yaomichael requested a review from jakpiase December 5, 2022 08:07
```diff
@@ -148,8 +148,8 @@ ProgramDesc BuildProgramDescDoubleInput(bool use_mkldnn) {

 TEST(CpuBfloat16Pass, double_input_ops) {
   bool use_mkldnn = true;
-  int quant_op = 4;
+  int quant_op = 3;
   int dequant_op = 3;
```
Contributor:
Previously, we had a model where one operator's output was linked to both the Matmul X and Y inputs, which is why we have this unit test in paddle/fluid/framework/ir/mkldnn/cpu_bfloat16_pass_tester.cc.
What error made you want to disable bfloat16 for this case? Perhaps it doesn't need to be turned off; it could instead be adjusted so that it is supported, just like the example you added to the unit test.

yeliang2258 (Contributor Author) commented Dec 14, 2022:
We found such a concat node. I think a node like this can be skipped directly; there is no need to add a complicated algorithm to support it.
[image: screenshot of the concat node]

yeliang2258 (Contributor Author):
Another solution is to delete the following code at line 44 in cpu_bfloat16_pass.cc:
`if (IsAlreadyLinked(linked_xputs, physical_xput_name)) continue;`

yeliang2258 (Contributor Author):
@wozna Which plan do you think is more reasonable?

wozna (Contributor) commented Dec 14, 2022:

@yeliang2258 Ah yes, I also noticed such a connection in Picodet int8 and just added support for quantization in PR #48872. I think the solution involving `if (IsAlreadyLinked(linked_xputs, physical_xput_name)) continue;` (i.e., removing it) can be the better one.

yeliang2258 (Contributor Author):
OK, thank you for the suggestion. I will modify the pass and delete the logic around `if (IsAlreadyLinked(linked_xputs, physical_xput_name)) continue;`.

@yeliang2258 yeliang2258 requested a review from wozna December 14, 2022 09:52
@paddle-bot paddle-bot bot closed this Dec 19, 2023

paddle-bot bot commented Dec 19, 2023

Since you haven't replied for more than a year, we have closed this issue/PR.
If the problem is not solved or there is a follow-up, please reopen it at any time and we will continue to follow up.
