
[Inference] Qwen2 support fp8 inference #8954

Merged
merged 8 commits into from
Sep 2, 2024

Conversation

ckl117
Contributor

@ckl117 ckl117 commented Aug 16, 2024

PR types

New features

PR changes

Others

Description

Qwen2 supports a8w8_fp8 and a8w8c8_fp8 inference.
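As a rough illustration of what a8w8-style fp8 inference involves (this is a NumPy sketch of per-tensor float8 E4M3 scaling for intuition only, not PaddleNLP's actual CUDA/cutlass kernels added in this PR):

```python
import numpy as np

# float8 E4M3 has a maximum representable magnitude of 448.
E4M3_MAX = 448.0

def quantize_fp8_per_tensor(x: np.ndarray):
    """Scale a float32 tensor into the E4M3 range; return scaled values and the scale."""
    scale = float(np.max(np.abs(x))) / E4M3_MAX
    scale = max(scale, 1e-12)  # guard against an all-zero tensor
    x_q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
    # A real kernel would cast x_q to a float8 dtype here; we stay in float32
    # to keep the sketch dependency-free.
    return x_q.astype(np.float32), scale

def dequantize(x_q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original tensor."""
    return x_q * scale

w = np.random.randn(4, 4).astype(np.float32)
w_q, s = quantize_fp8_per_tensor(w)
w_back = dequantize(w_q, s)
```

In "a8w8" both activations and weights are quantized to 8-bit floats before the matmul; the "c8" variant additionally keeps the KV cache in 8-bit form.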

paddle-bot bot commented Aug 16, 2024

Thanks for your contribution!

codecov bot commented Aug 16, 2024

Codecov Report

Attention: Patch coverage is 0% with 176 lines in your changes missing coverage. Please review.

Project coverage is 53.44%. Comparing base (a275ab7) to head (e4c7690).
Report is 214 commits behind head on develop.

Files with missing lines                                 | Patch % | Lines
...dlenlp/experimental/transformers/qwen2/modeling.py    | 0.00%   | 176 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8954      +/-   ##
===========================================
+ Coverage    53.34%   53.44%   +0.09%     
===========================================
  Files          652      652              
  Lines       105484   105188     -296     
===========================================
- Hits         56270    56214      -56     
+ Misses       49214    48974     -240     


@ckl117 ckl117 changed the title Qwen2 fp8 [LLM Inference] Qwen2 support a8w8c8 and fp8 inference Aug 16, 2024
@ckl117 ckl117 changed the title [LLM Inference] Qwen2 support a8w8c8 and fp8 inference [LLM Inference] Qwen2 support fp8 inference Aug 20, 2024
@ckl117 ckl117 changed the title [LLM Inference] Qwen2 support fp8 inference [Inference] Qwen2 support fp8 inference Sep 2, 2024
Collaborator

@wawltor wawltor left a comment

LGTM

@wawltor wawltor merged commit 84469d6 into PaddlePaddle:develop Sep 2, 2024
10 of 14 checks passed
@ckl117 ckl117 deleted the qwen2-fp8 branch September 9, 2024 05:51
Mangodadada pushed a commit to Mangodadada/PaddleNLP that referenced this pull request Sep 10, 2024
* qwen2 fp8

* fp8 check

* fp8 cutlass

* int8 cachekv

* a8w8c8_fp8