[LLM] Add sequence_parallel support for qwen #8558
Conversation
Thanks for your contribution!
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files

@@           Coverage Diff            @@
##           develop    #8558      +/-   ##
===========================================
- Coverage    55.81%   55.80%   -0.02%
===========================================
  Files          620      620
  Lines        96599    96642     +43
===========================================
+ Hits         53917    53928     +11
- Misses       42682    42714     +32

☔ View full report in Codecov by Sentry.
@@ -476,15 +525,22 @@ def get_tensor_parallel_split_mappings(num_hidden_layers):
        base_actions = {
            # Column Linear
            "lm_head.weight": partial(fn, is_column=True),
            "qwen.h.0.mlp.w2.weight": partial(fn, is_column=True),
            "qwen.h.0.mlp.w1.weight": partial(fn, is_column=True),
            "qwen.h.0.attn.c_attn.weight": partial(fn, is_column=True, is_naive_3fuse=True),
            "qwen.h.0.attn.c_attn.bias": partial(fn, is_column=True, is_naive_3fuse=True),
            # Row Linear
            "qwen.wte.weight": partial(fn, is_column=False),
            "qwen.h.0.mlp.c_proj.weight": partial(fn, is_column=False),
            "qwen.h.0.attn.c_proj.weight": partial(fn, is_column=False),
c_proj->o_proj
By the way, why was c_proj renamed to o_proj? Would that break loading of existing model weights? Please double-check this.
It was just a naming-convention preference; I'll change it back then...
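For readers following the mapping diff above: a minimal numpy sketch (an illustration, not PaddleNLP's actual `fn`) of what the `is_column` / `is_naive_3fuse` flags imply. Column-parallel entries shard the output dimension, row-parallel entries the input dimension, and the fused `c_attn` tensor must be split per q/k/v chunk before sharding:

```python
import numpy as np

def split_tensor_parallel(weight, tp_degree, tp_rank, is_column, is_naive_3fuse=False):
    # Column-parallel shards the output dim (last axis of a 2-D weight,
    # axis 0 of a 1-D bias); row-parallel shards the input dim (axis 0).
    axis = weight.ndim - 1 if is_column else 0
    if is_naive_3fuse:
        # c_attn fuses [q | k | v] along the output dim, so each of the three
        # sub-tensors is sharded independently and re-fused; a plain split
        # would instead hand each rank whole q/k/v blocks.
        parts = np.split(weight, 3, axis=axis)
        return np.concatenate(
            [np.split(p, tp_degree, axis=axis)[tp_rank] for p in parts], axis=axis
        )
    return np.split(weight, tp_degree, axis=axis)[tp_rank]
```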
                input_is_parallel=True,
            )
        else:
            self.c_attn = nn.Linear(config.hidden_size, 3 * self.projection_size, bias_attr=True)
            self.c_proj = nn.Linear(config.hidden_size, self.projection_size, bias_attr=not config.no_bias)
            self.o_proj = nn.Linear(
Has this change been tested?
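For context while reviewing: a hedged sketch of the two construction paths this diff toggles between. The class names follow `paddlenlp.transformers.sequence_parallel_utils`, but treat the exact signatures and kwargs as assumptions rather than the PR's literal code:

```python
import paddle.nn as nn
from paddlenlp.transformers.sequence_parallel_utils import (
    ColumnSequenceParallelLinear,
    RowSequenceParallelLinear,
)

def build_attn_projections(config, projection_size, sequence_parallel):
    if sequence_parallel:
        # Column-parallel QKV: each rank computes a slice of the fused projection.
        c_attn = ColumnSequenceParallelLinear(
            config.hidden_size, 3 * projection_size,
            has_bias=True, gather_output=False,
        )
        # Row-parallel output projection consumes the already-sharded input.
        o_proj = RowSequenceParallelLinear(
            projection_size, config.hidden_size,
            has_bias=not config.no_bias, input_is_parallel=True,
        )
    else:
        c_attn = nn.Linear(config.hidden_size, 3 * projection_size, bias_attr=True)
        o_proj = nn.Linear(config.hidden_size, projection_size, bias_attr=not config.no_bias)
    return c_attn, o_proj
```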
@DesmonDay hi, could you review this again?
@@ -252,18 +284,26 @@ def forward(
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        output_attentions=False,
        alibi=None,
Why was alibi added?
Fixed.
            )
        else:
            attn_output, attn_weight = self._attn(query, key, value, attention_mask)
            context_layer = self._merge_heads(attn_output, self.num_heads, self.head_dim)
Why was this deleted? Has it been tested?
This step has been moved into _attn; self-test logs have been added to the Description.
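For reference, a minimal sketch of the head-merging step under discussion, assuming the usual `[batch, num_heads, seq_len, head_dim]` layout (whether fused into `_attn` or kept separate, the math is the same):

```python
import paddle

def _merge_heads(attn_output, num_heads, head_dim):
    # [bs, num_heads, seq_len, head_dim] -> [bs, seq_len, num_heads, head_dim]
    attn_output = paddle.transpose(attn_output, perm=[0, 2, 1, 3])
    bs, seq_len = attn_output.shape[0], attn_output.shape[1]
    # [bs, seq_len, num_heads, head_dim] -> [bs, seq_len, num_heads * head_dim]
    return paddle.reshape(attn_output, [bs, seq_len, num_heads * head_dim])
```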
From the self-test logs, the loss does not match between runs with sequence_parallel enabled and disabled; the precision needs to be aligned.
LGTM
LGTM
        tensor_parallel_output=True,
        sequence_parallel=False,
        fuse_sequence_parallel_allreduce=False,
Remove all of these; they are already built in.
done
LGTM
PR types
Performance optimization
PR changes
Models
Description
Add sequence_parallel support for the Qwen model.
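For reviewers unfamiliar with the pattern: sequence parallel shards activations along the flattened sequence dimension across the tensor-parallel group, so norms and dropout also run on 1/mp of the tokens. A hedged sketch of the typical entry point, modeled on PaddleNLP's llama implementation (`ScatterOp` from `paddlenlp.transformers.sequence_parallel_utils`; details here are assumptions, not this PR's exact code):

```python
import paddle
from paddlenlp.transformers.sequence_parallel_utils import ScatterOp

def shard_embeddings_for_sequence_parallel(inputs_embeds):
    # [bs, seq_len, hidden] -> [bs * seq_len, hidden]: sequence-parallel
    # layers expect the sequence dim flattened into the leading axis.
    bs, seq_len, hidden_size = inputs_embeds.shape
    inputs_embeds = paddle.reshape(inputs_embeds, [bs * seq_len, hidden_size])
    # Split rows across the mp group; each rank keeps bs * seq_len / mp_degree.
    return ScatterOp.apply(inputs_embeds)
```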
Self-test records:
With sequence_parallel disabled:
workerlog_disable_sp.0.log
With sequence_parallel enabled:
workerlog_sp.0.log