Issues: Dao-AILab/flash-attention

Issues list

Does it support 2080Ti?
#1514 opened Feb 27, 2025 by Lsnxiaoxiong
Does it support RTX 8000?
#1513 opened Feb 27, 2025 by youde2000
Flash_attn 1.x whl ?
#1508 opened Feb 25, 2025 by darkon12
Where is the flash_attn_2_cuda
#1506 opened Feb 24, 2025 by VirgoAsumita
How to adapt code from FA2 to FA3
#1501 opened Feb 21, 2025 by wangchuan
No module named 'flash_attn_2_cuda'
#1499 opened Feb 18, 2025 by TheAyes
About Triton version of FA
#1496 opened Feb 15, 2025 by YufangMo
Missing Requirements
#1494 opened Feb 14, 2025 by domklement