Issues: Dao-AILab/flash-attention
#1515  Installing flash-attn and torch together with requirements.txt raises a build error (opened Feb 27, 2025 by erenirmak)
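A likely cause of this class of build failure is that flash-attn's build step needs torch to already be importable, so a single `pip install -r requirements.txt` that lists both packages can fail before torch is present. The sketch below shows the usual two-step workaround (install torch first, then build flash-attn with pip's build isolation disabled), written as a Python script purely for illustration; the unpinned package names are placeholders and you would normally pin versions and CUDA builds.

```python
# Sketch of a two-step install, assuming the usual cause of this error:
# flash-attn's build imports torch, so installing both from one
# requirements.txt inside pip's isolated build environment can fail.
import subprocess
import sys

def pip_install(*args: str) -> None:
    """Run `python -m pip install` with the given arguments."""
    subprocess.run([sys.executable, "-m", "pip", "install", *args], check=True)

# 1. Install torch (and any other build-time dependencies) first.
pip_install("torch")  # pin a specific version/CUDA build as needed
# 2. Then build flash-attn against the torch that is now present,
#    skipping pip's isolated build environment.
pip_install("flash-attn", "--no-build-isolation")
```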
#1502  Is anyone publishing pre-compiled Windows wheels for Python 3.10 and 3.11? (opened Feb 22, 2025 by FurkanGozukara)
#1485  "RuntimeError: FlashAttention only supports Ampere GPUs or newer" on a DGX A800 station (opened Feb 10, 2025 by Alicia320)
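For context on this report: the A800 is an Ampere-class GPU (compute capability 8.0), so this error on such a machine usually suggests the code is not actually running on that device, for example because of a wrong device index or a CUDA_VISIBLE_DEVICES mask. A minimal diagnostic sketch that prints what torch reports for each visible device:

```python
# Minimal diagnostic sketch: print what torch sees for each visible GPU.
# FlashAttention requires compute capability >= 8.0 (Ampere); an A800
# should report (8, 0), so anything lower points at the wrong device
# (or a restrictive CUDA_VISIBLE_DEVICES setting) being used.
import torch

if not torch.cuda.is_available():
    print("CUDA is not available to this torch build.")
else:
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        name = torch.cuda.get_device_name(i)
        verdict = "OK for FlashAttention" if (major, minor) >= (8, 0) else "too old"
        print(f"cuda:{i} {name}: sm_{major}{minor} ({verdict})")
```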
#1483  How to Extend FlashAttention to Nearly Infinite HeadDim and Achieve Fully Fused MLA? (opened Feb 9, 2025 by DefTruth; 1 of 3 tasks)
#1477  FlashAttention-3 is only supported on CUDA 12.3 and above but torch.__version__ = 2.5.0+cu124 (opened Feb 6, 2025 by focusunsink)
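One plausible reading of this report, assuming the failing version check inspects the locally installed CUDA toolkit rather than the runtime torch was built against: the "+cu124" suffix only describes torch's bundled CUDA runtime, while the system nvcc used to compile the extension may still be older than 12.3. A short sketch for printing both versions side by side, so the two can be compared directly:

```python
# Sketch for comparing the two CUDA versions that can be in play here:
# the runtime torch was built against (the "+cu124" suffix) and the
# local CUDA toolkit (nvcc) that compiles the extension.
import subprocess
import torch

print("torch:", torch.__version__)
print("torch built against CUDA:", torch.version.cuda)

try:
    out = subprocess.run(["nvcc", "--version"],
                         capture_output=True, text=True, check=True)
    # nvcc's output usually ends with a line like
    # "Cuda compilation tools, release 12.1, V12.1.105".
    release_lines = [l for l in out.stdout.splitlines() if "release" in l]
    print("local nvcc:", release_lines[-1].strip() if release_lines else out.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvcc not found on PATH; no local CUDA toolkit visible.")
```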