Issues: huggingface/transformers
Error when changing vocab size when fine tuning llama-vision · bug · #36590 · opened Mar 6, 2025 by Ssukriti · 4 tasks done
Inconsistent Outputs When Using Flash Attention 2 and SDPA Attention with Attention Mask (see the second sketch after this list) · bug · #36585 · opened Mar 6, 2025 by tartarleft · 2 of 4 tasks
Significant Increase in Computation Time When Using Attention Mask in SDPA Attention · bug · #36584 · opened Mar 6, 2025 by tartarleft · 3 of 4 tasks
TypeError: LlavaProcessor: got multiple values for keyword argument 'images' · bug · #36578 · opened Mar 6, 2025 by albertvillanova · 4 tasks
paligemma2-3B-mix in version 4.49.0 does not use the GPU, and 4.50.0.dev is broken · bug, Cache · #36575 · opened Mar 6, 2025 by hanggun · 4 tasks
After tokenizers upgrade, the length of the tokenizer does not correspond to the length of the model · bug · #36574 · opened Mar 6, 2025 by CurtainRight · 2 of 4 tasks
In the latest version of transformers (4.49.0), a matrix transformation error is encountered · bug · #36571 · opened Mar 6, 2025 by idebroy · 2 of 4 tasks
Add support for StableAdamW optimizer in Trainer · Feature request · #36564 · opened Mar 5, 2025 by capemox
Allow video objects (np array etc.) in apply_chat_template (not just paths or URLs) · Chat Template, Feature request, VLM · #36560 · opened Mar 5, 2025 by FredrikNoren
Error during processing: MllamaForCausalLM does not support Flash Attention 2.0 yet. · Feature request · #36557 · opened Mar 5, 2025 by sangramddreg
disable_compile not honored as a kwarg in generate · bug · #36544 · opened Mar 4, 2025 by pcuenca · 1 of 4 tasks
Bug when computing positional IDs from embeddings · bug · #36537 · opened Mar 4, 2025 by SabrinaRichter · 4 tasks
Model.generate use_cache=True generates different results than use_cache=False (see the first sketch after this list) · bug · #36536 · opened Mar 4, 2025 by edenlum · 2 of 4 tasks
Warning related to torch.tensor() usage in transformers.models.encodec.modeling_encodec.py (version 4.47.0) · bug · #36533 · opened Mar 4, 2025 by GiorgiaAuroraAdorni · 4 tasks
Bug in LlavaNextProcessor when using do_pad=False · bug · #36531 · opened Mar 4, 2025 by aarbelle · 2 of 4 tasks
Request: Add Flash Attention 2.0 Support for ViTMAEForPreTraining · Feature request, Flash Attention, Good Second Issue, Vision · #36527 · opened Mar 4, 2025 by noelEOS
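
For the cache discrepancy reported in #36536, the sketch below shows the kind of comparison involved. It is not the reporter's actual repro: the model id ("gpt2"), prompt, and generation settings are placeholders chosen only to illustrate checking greedy outputs with use_cache=True against use_cache=False.

```python
# Minimal sketch (placeholder model and prompt, not the reporter's repro):
# compare greedy generate() outputs with and without the KV cache.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # hypothetical stand-in for the model used in the report
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    out_cached = model.generate(**inputs, max_new_tokens=20, do_sample=False, use_cache=True)
    out_uncached = model.generate(**inputs, max_new_tokens=20, do_sample=False, use_cache=False)

# With greedy decoding the two sequences should match exactly; a mismatch
# is the behavior described in #36536.
print(torch.equal(out_cached, out_uncached))
```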
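For the Flash Attention 2 vs. SDPA mismatch reported in #36585, the following sketch contrasts the two attention backends on a padded, masked batch. It assumes a CUDA GPU, the flash-attn package, and a placeholder model id; only small low-precision drift between backends would normally be expected.

```python
# Minimal sketch (assumes CUDA, flash-attn installed, placeholder model id):
# compare logits from the SDPA and Flash Attention 2 backends on a padded batch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # placeholder; any decoder-only model works
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

texts = ["a short prompt", "a noticeably longer prompt that forces padding"]
batch = tokenizer(texts, return_tensors="pt", padding=True).to("cuda")

logits = {}
for impl in ("sdpa", "flash_attention_2"):
    model = AutoModelForCausalLM.from_pretrained(
        model_id, attn_implementation=impl, torch_dtype=torch.bfloat16
    ).to("cuda").eval()
    with torch.no_grad():
        logits[impl] = model(**batch).logits

# Small numerical drift is expected in bf16; large divergence on the padded
# (masked) sequence is the kind of inconsistency #36585 describes.
print((logits["sdpa"] - logits["flash_attention_2"]).abs().max())
```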