Issues: triton-lang/triton
- #6119: [Feature Request] Add tl.all and tl.any like torch.all and torch.any (opened Mar 6, 2025 by l1351868270)
- #6096: [AMD] Determine Non-Negative behavior across a two-option block argument [performance] (opened Mar 4, 2025 by njriasan)
- #6093: [AMD] Improve LLVM debugging messages for BufferOps Non-Negative calculations (opened Mar 4, 2025 by njriasan)
- #6064: Runtime JIT execution of the same kernel incurs high overhead (binder function) [performance] (opened Feb 28, 2025 by Learnmore666)
- #6043: Triton incorrectly interprets uint8 indices as int8 in pointer arithmetic operations [bug] (opened Feb 27, 2025 by Ivan1248)
- #6005: Fused attention (flash attention v2) doesn't support bfloat16? (opened Feb 24, 2025 by guanzhchen)
- #5999: tl.sort and torch.sort give inconsistent results when input contains inf or nan [bug] (opened Feb 24, 2025 by chenmiao1919)
- #5971: Bug in tutorials/06-fused-attention.py: test_op assertion fails for specific input [bug] (opened Feb 20, 2025 by p81sunshine)
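The signedness problem reported in #6043 can be illustrated outside Triton. The sketch below is a hypothetical NumPy analogy, not the actual Triton repro: a uint8 index at or above 128, if its bits are reinterpreted as int8, becomes negative, so any pointer offset derived from it would point backwards.

```python
import numpy as np

# Hypothetical illustration (NumPy, not Triton) of the signedness issue:
# the same 8-bit pattern read as uint8 vs. int8 gives different values.
indices = np.array([100, 200], dtype=np.uint8)   # intended unsigned indices
as_signed = indices.view(np.int8)                # same bits, signed view

print(indices.tolist())    # [100, 200]
print(as_signed.tolist())  # [100, -56]  (200 wraps to a negative offset)
```

Any index below 128 is unaffected, which is why such bugs tend to surface only with larger tensors or offsets.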
Filter: updated:>2025-03-03 (updated in the last three days).