
[Bug]: V1 engine ignores logits processors and min-p sampling #12678

Open
FrederickVu opened this issue Feb 3, 2025 · 1 comment
Labels: bug (Something isn't working), v1

Comments

@FrederickVu

Your current environment

vLLM Version: 0.7.0

Model Input Dumps

No response

🐛 Describe the bug

Issue: V1 engine ignores custom logits processors and does not implement min-p sampling

Problem

  1. Custom logits processors: In the new V1 engine, specifying logits_processors in SamplingParams for LLM.generate() has no effect. The code in gpu_model_runner.py never passes any sampling metadata into self.model.compute_logits(...), so the processors are silently ignored.

  2. Min-p: Similarly, min_p (a sampling parameter supported in V0, akin to top_k and top_p) is not applied at all in V1. The new engine's sampler.py appears to skip it entirely, so it never factors into the final token selection.

Both failures are silent: the parameters are accepted without error and simply ignored (a minimal reproduction follows). If these features are not yet supported in V1, consider at least raising a warning or an error.
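A minimal reproduction; the model choice and the toy processor are purely illustrative:

```python
# Run with VLLM_USE_V1=1 so that the V1 engine is selected.
from vllm import LLM, SamplingParams

def ban_token_zero(token_ids, logits):
    # Toy processor: mask out token id 0. Under V1 this is never invoked.
    logits[0] = float("-inf")
    return logits

llm = LLM(model="facebook/opt-125m")  # any model works; this one is small
params = SamplingParams(
    temperature=1.0,
    min_p=0.1,                           # silently ignored by the V1 sampler
    logits_processors=[ban_token_zero],  # silently ignored by the V1 runner
)
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```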

Possible Fix for Logits Processor Issue

  1. Create a new data class to hold the relevant metadata for self.model.compute_logits(...) (a sketch follows this list).
    • It could simply hold request ids and request states (CachedRequestState).
  2. Collate the metadata inside GPUModelRunner.execute_model(...).
  3. Patch LogitsProcessor.forward() in logits_processor.py to handle the new V1 metadata class alongside the old V0 SamplingMetadata class.
  4. Define LogitsProcessor._apply_logits_processor_v1(...) (or similar) to properly handle the preprocessed hidden_states tensor in the V1 model runner, rather than reusing the V0 version.
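A rough sketch of steps 1–2; the class name, its fields, and the exact runner attributes read off below are assumptions for illustration, not existing vLLM API:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class V1LogitsMetadata:  # hypothetical name
    # Request ids in batch order, so each row of the collated logits tensor
    # can be traced back to its originating request.
    req_ids: List[str]
    # Request id -> cached per-request state (sampling params, generated
    # token ids, ...), as kept by the V1 GPU model runner.
    req_states: Dict[str, "CachedRequestState"]

# Step 2, inside GPUModelRunner.execute_model(...), roughly:
#   metadata = V1LogitsMetadata(
#       req_ids=self.input_batch.req_ids[:num_reqs],
#       req_states=self.requests,
#   )
#   logits = self.model.compute_logits(hidden_states, metadata)
```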

Possible Fix for Min-p Issue

  1. Add a min_p attribute to InputBatch in gpu_input_batch.py.
  2. Add a min_p field to the SamplingMetadata data class in metadata.py.
  3. Modify the forward function of Sampler in sampler.py to apply min-p filtering (a sketch follows this list).
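For step 3, a sketch of the filtering itself, following the standard min-p rule (keep only tokens whose probability is at least min_p times the most likely token's probability); the function name and tensor shapes are assumptions:

```python
import torch

def apply_min_p(logits: torch.Tensor, min_p: torch.Tensor) -> torch.Tensor:
    # logits: [num_reqs, vocab_size]; min_p: [num_reqs], where 0.0 disables
    # filtering for that request (the threshold becomes 0, so nothing is masked).
    probs = torch.softmax(logits, dim=-1)
    top_probs = probs.max(dim=-1, keepdim=True).values
    # Per-request cutoff, scaled by the top token's probability.
    threshold = min_p.unsqueeze(-1) * top_probs
    return logits.masked_fill(probs < threshold, float("-inf"))
```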

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@FrederickVu FrederickVu added the bug Something isn't working label Feb 3, 2025
@njhill njhill added the v1 label Feb 4, 2025
@alejopaullier96

@FrederickVu has there been any solution to this issue? I've been encountering the same problem with 0.7.2.
