
[npu model bug]fix_global_bug #8399

Merged · 2 commits merged into PaddlePaddle:develop on May 9, 2024
Conversation

@Galaxy1458 (Contributor) commented May 9, 2024

PR types

Bug fixes

PR changes

Models

Description

Fixes the bug reported in #8389.
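
The diff itself is not shown in the conversation. Purely as a hedged illustration of what a "global bug" in device-specific model code usually looks like (hypothetical names, not the actual PaddleNLP patch), the classic mistake is mutating a module-level flag without declaring it global:

```python
# Purely illustrative sketch -- hypothetical names, not the actual patch.
_npu_fusion_enabled = False  # module-level flag consulted by the model code


def enable_npu_fusion(value: bool) -> None:
    # Without the `global` statement, the assignment below would create a
    # local variable and the module-level flag would silently stay False.
    global _npu_fusion_enabled
    _npu_fusion_enabled = value
```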


paddle-bot bot commented May 9, 2024

Thanks for your contribution!


codecov bot commented May 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 55.42%. Comparing base (d6ac1bd) to head (8ebdcfa).

Additional details and impacted files
@@           Coverage Diff            @@
##           develop    #8399   +/-   ##
========================================
  Coverage    55.42%   55.42%           
========================================
  Files          615      615           
  Lines        96235    96235           
========================================
  Hits         53335    53335           
  Misses       42900    42900           

☔ View full report in Codecov by Sentry.

@Galaxy1458 changed the title from "update" to "[npu model bug]fix_global_bug" on May 9, 2024

@wawltor (Collaborator) left a comment

LGTM

@wawltor merged commit 6b3875c into PaddlePaddle:develop on May 9, 2024
9 of 11 checks passed
wawltor pushed a commit that referenced this pull request on May 24, 2024
* [XPU] llama add xpu support (#8282)

* [XPU] llama add xpu support

* fix

* use try import

* fix

* refine

* refine

* refine

* refine

* update (#8399)

* [LLM] Support fuse attention q, k, v weights  (#8202)

1. add use-interface & fuse action

1.1. modify 1., code order

2. switch to name_mapping

3. solve tp branch

3.2 follow hui, handle qkv separately

3.3 handle pdparams

3.4 from torch

3.5 abandon low_cpu_mem_usage

3.6 solve shard branch

* 3.6.1 solve shard branch after rebase develop

* code clean

* remove debug comment

* Redefine fuse and split functions

* Redefine fuse and split functions

* comment and fix

* update method

* update QKV fuse and split

* support fuse weights in multi-files

* add precision compare

* simplify function call

* support use_fast_ffn

* clean modeling and configuration

* add test for gpt and opt

* fix tp_actions get

* add fast_ffn test

* add Qwen2Moe

* Revert "add Qwen2Moe"

This reverts commit 113b883.

* add test for split

* update doc

* update filter_dict_keys

---------

Co-authored-by: Zii <[email protected]>

* [LLM] Fix fuse or split with same key (#8378)

* fix fuse or split with same key

* fix

* fix eps

* update format

* [LLM] add decay steps option for finetuning (#8251)

* [LLM] add memory stats to logger of trainer (#8269)

* [Distributed] fix lora (#8325)

* [LLM] fix lora target modules on llama (#8372)

* [Distributed] metric calculation supports tp logits (#8370)

* Update model_utils.py

* Update model_utils.py

* Update model_utils.py

---------

Co-authored-by: Jianbang Yang <[email protected]>
Co-authored-by: DrownFish19 <[email protected]>
Co-authored-by: Zii <[email protected]>
Co-authored-by: Tian <[email protected]>
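
The QKV fuse/split work quoted in the commit message above concatenates the separate q, k and v projection weights into one matrix so the attention input projection becomes a single matmul, and splits the fused matrix back when converting checkpoints. A minimal NumPy sketch of that round trip, with hypothetical names (the real PaddleNLP helpers additionally handle tensor-parallel actions, multi-file checkpoints and name mapping):

```python
import numpy as np


def fuse_qkv(q, k, v):
    # Concatenate along the output (column) dimension so one matmul
    # produces the q, k and v activations in a single fused projection.
    return np.concatenate([q, k, v], axis=-1)


def split_qkv(qkv, q_cols, kv_cols):
    # Inverse of fuse_qkv: recover the individual projection weights.
    q = qkv[..., :q_cols]
    k = qkv[..., q_cols:q_cols + kv_cols]
    v = qkv[..., q_cols + kv_cols:]
    return q, k, v


# Round-trip check: splitting a fused matrix returns the original weights.
hidden, heads, head_dim = 16, 4, 4
q = np.random.rand(hidden, heads * head_dim)
k = np.random.rand(hidden, heads * head_dim)
v = np.random.rand(hidden, heads * head_dim)
q2, k2, v2 = split_qkv(fuse_qkv(q, k, v), heads * head_dim, heads * head_dim)
assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```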

Labels: none yet
Projects: none yet
2 participants