
[MLU] Fix Llama attention_mask in npu and mlu #9075

Merged

merged 2 commits into PaddlePaddle:develop from dev_20240903_fix_llama_mlu on Sep 3, 2024

Conversation

DrownFish19
Collaborator

PR types

Bug fixes

PR changes

Others

Description

Fix Llama attention_mask in npu and mlu.


paddle-bot bot commented Sep 3, 2024

Thanks for your contribution!


codecov bot commented Sep 3, 2024

Codecov Report

Attention: Patch coverage is 0% with 1 line in your changes missing coverage. Please review.

Project coverage is 53.51%. Comparing base (4e7fb49) to head (519b4c7).
Report is 216 commits behind head on develop.

Files with missing lines Patch % Lines
paddlenlp/transformers/llama/modeling.py 0.00% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9075      +/-   ##
===========================================
- Coverage    53.56%   53.51%   -0.05%     
===========================================
  Files          652      652              
  Lines       106397   105187    -1210     
===========================================
- Hits         56987    56293     -694     
+ Misses       49410    48894     -516     

☔ View full report in Codecov by Sentry.

@@ -1653,7 +1653,7 @@ def forward(
             is_casual = True
         else:
             is_casual = is_casual_mask(attention_mask)
-        if get_env_device() != "npu" or get_env_device() != "mlu":
+        if get_env_device() != "npu" and get_env_device() != "mlu":
Collaborator

Suggested change
-        if get_env_device() != "npu" and get_env_device() != "mlu":
+        if get_env_device() not in ["npu", "mlu"]:

Collaborator

Would this be a bit better?

Collaborator Author

Yes, that's more explicit.

Collaborator

@ZHUI ZHUI left a comment

LGTM

@ZHUI ZHUI merged commit 9939f84 into PaddlePaddle:develop Sep 3, 2024
9 of 12 checks passed
@DrownFish19 DrownFish19 deleted the dev_20240903_fix_llama_mlu branch September 3, 2024 12:47
Mangodadada pushed a commit to Mangodadada/PaddleNLP that referenced this pull request Sep 10, 2024