
[V1] TPU - Remove self.kv_caches #14309

Open · wants to merge 1 commit into base: main
Conversation

alexm-redhat (Collaborator) commented Mar 5, 2025

This PR removes self.kv_caches from tpu_model_runner.py in V1 so that @heheda12345's #14098 can land cleanly.
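At a high level, a minimal sketch of the resulting pattern, with illustrative names that are not the exact diff: the runner stops holding a self.kv_caches list, and each attention layer instead finds its cache in the per-layer forward context.

    import torch

    # Illustrative sketch (bind_kv_caches is a hypothetical helper): register
    # each layer's cache in the static forward context so model.forward() no
    # longer needs the runner to thread self.kv_caches through every call.
    def bind_kv_caches(forward_context: dict,
                       kv_caches: dict[str, torch.Tensor]) -> None:
        for layer_name, kv_cache in kv_caches.items():
            # A list is used because of the v0 PP virtual engine convention.
            forward_context[layer_name].kv_cache = [kv_cache]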

@mgoin @NickLucche feel free to make a pass.

github-actions (bot) commented Mar 5, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which starts with a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

alexm-redhat requested a review from mgoin on Mar 5, 2025 at 20:44

mergify (bot) added the documentation and v1 labels on Mar 5, 2025
Member commented:

Please revert this; we should make an examples/offline_inference/tpu/ folder to keep this.

heheda12345 (Collaborator) left a comment:

If we add is_profile_run to forward_context, we need to change the other backends to pass the is_profile_run arg. Can we achieve this instead by passing the attributes to ModelWrapperV1, as in the pseudocode below?

    .static_forward_context
    for layer_name, kv_cache in kv_caches.items():
        # NOTE: Use list because of v0 PP virtual engine.
        forward_context[layer_name].kv_cache = [kv_cache]


heheda12345 (Collaborator) commented on class ModelWrapperV1(nn.Module):

Is it possible to implement ModelWrapperV1 like this?

    from typing import Optional

    import torch
    import torch.nn as nn


    class ModelWrapperV1(nn.Module):

        def __init__(self, model: nn.Module, num_kv_heads: int,
                     num_blocks: int, block_size: int):
            super().__init__()
            self.model = model
            self.num_kv_heads = num_kv_heads
            ...

        def forward(
            self,
            input_ids: torch.Tensor,
            positions: torch.Tensor,
            inputs_embeds: Optional[torch.Tensor] = None,
            is_profile_run: bool = False,
        ) -> torch.Tensor:
            if not is_profile_run:
                num_kv_heads = self.num_kv_heads
                ...


    class TPUModelRunner:

        def _dummy_run(
            self,
            num_tokens: int,
            is_profile_run: bool,
        ) -> None:
            self.model.forward(..., is_profile_run=is_profile_run)

alexm-redhat (Collaborator, Author) replied:

@heheda12345 this is not possible because num_blocks is not known until determine_num_available_blocks has completed and initialize_kv_cache has executed.
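A rough sketch of that ordering constraint; the stubs stand in for the real worker steps, and only determine_num_available_blocks and initialize_kv_cache are names from the discussion above:

    # Sketch of the startup ordering with stubbed steps (illustrative only).
    def determine_num_available_blocks() -> int:
        # The real code sizes the KV cache from a profiling run; stubbed here.
        return 512

    def initialize_kv_cache(num_blocks: int) -> None:
        ...

    # The model wrapper is constructed during model loading, *before* either
    # step above runs, so num_blocks cannot be a constructor argument:
    # wrapper = ModelWrapperV1(model, num_kv_heads, num_blocks, block_size)
    num_blocks = determine_num_available_blocks()  # profiling run happens here
    initialize_kv_cache(num_blocks)                # block count fixed only now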

alexm-redhat (Collaborator, Author) commented:

@heheda12345 the is_profile_run arg is set to False by default, so it should not be necessary to pass this parameter explicitly to the set_forward_context(..) function in other backends. Is there a specific code example where you would need to specify it explicitly?
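A minimal sketch of the defaulted-argument point, assuming set_forward_context is a context manager that simply gains the keyword; the exact signature in the PR may differ:

    from contextlib import contextmanager

    @contextmanager
    def set_forward_context(attn_metadata, vllm_config,
                            is_profile_run: bool = False):
        # Sketch: with a False default, existing call sites in other backends
        # need no change; only the TPU profiling path passes is_profile_run=True.
        yield

Under this assumption, callers that do not care about profiling keep calling set_forward_context(attn_metadata, vllm_config) unchanged.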

Labels: documentation (Improvements or additions to documentation), v1

3 participants