[Unified Checkpoint] update merge tensor parallel #8856
Conversation
Thanks for your contribution!
ret = distributed_allgather(tensor, group=tp_group, offload=False)
# Get tensor size
tensor_bytes = tensor.numel().item() * dtype_byte_size(tensor.dtype) * tp_group.nranks
if tensor_bytes >= 5368709120:  # temporarily set 5GB as threshold
No magic number: write the threshold as tensor_bytes > 5 * 1024 * 1024 * 1024.
Also, consider setting the threshold a bit smaller.
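A minimal sketch of the reviewer's suggestion, reusing dtype_byte_size and tp_group.nranks from the snippet above; the constant name MAX_ALLGATHER_BYTES and the helper exceeds_allgather_threshold are illustrative, not part of the merged code.

MAX_ALLGATHER_BYTES = 5 * 1024 * 1024 * 1024  # 5 GB; could be lowered per the review


def exceeds_allgather_threshold(tensor, tp_group):
    # Size of the fully gathered tensor across all tensor-parallel ranks.
    tensor_bytes = tensor.numel().item() * dtype_byte_size(tensor.dtype) * tp_group.nranks
    return tensor_bytes >= MAX_ALLGATHER_BYTES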
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##           develop    #8856      +/-   ##
===========================================
- Coverage    55.43%   55.42%    -0.02%
===========================================
  Files          631      631
  Lines        98544    98577       +33
===========================================
+ Hits         54632    54633        +1
- Misses       43912    43944       +32

☔ View full report in Codecov by Sentry.
LGTM
PR types
Others
PR changes
Others
Description
When merging large tensors, there may not be enough GPU memory to allocate due to VRAM fragmentation. Therefore, we split the large tensors into smaller parts and then merge them on the CPU.
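A minimal sketch of this CPU-side merge, assuming the per-rank shards are already available; the helper name merge_shards_on_cpu and the NumPy-based concatenation are illustrative assumptions, not the PR's actual implementation.

import numpy as np
import paddle


def merge_shards_on_cpu(shards, axis=-1):
    # Hypothetical illustration: copy each tensor-parallel shard to host memory
    # first, then concatenate there, so the merge does not need one large
    # contiguous GPU allocation that could fail under VRAM fragmentation.
    cpu_shards = [
        s.cpu().numpy() if isinstance(s, paddle.Tensor) else np.asarray(s)
        for s in shards
    ]
    return np.concatenate(cpu_shards, axis=axis)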