Append ray head label selector in PodAutoscaler #789
Merged: Jeffwan merged 1 commit into vllm-project:main from Jeffwan:jiaxin/autoscaling-for-multi-node on Mar 4, 2025
Conversation
This change considers only the engine (Ray head) pod for multi-node inference. Ray workers do not run an HTTP server, so they can expose only resource metrics, not application metrics. For resource metrics, since we use Tensor Parallelism, GPU utilization is expected to be the same across GPUs, so the head pod's metrics are representative. Signed-off-by: Jiaxin Shan <[email protected]>
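A generic Kubernetes label selector of the kind this change appends might look like the sketch below. The `ray.io/node-type: head` label is the KubeRay convention for marking the head pod; the exact label key aibrix's PodAutoscaler matches on may differ, so treat this as an illustration rather than the actual CRD schema:

```yaml
# Sketch (assumed label key): restrict metric collection to the Ray head pod.
selector:
  matchLabels:
    ray.io/node-type: head   # engine (head) pod only; workers are excluded
```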
The branch was updated from commit 7381415 to 17b528c.
varungup90 approved these changes on Mar 4, 2025.
xieus pushed a commit referencing this pull request on Mar 5, 2025, carrying the same commit message with an additional Signed-off-by: Liguang Xie <[email protected]>.
Jeffwan added a commit to Jeffwan/aibrix referencing this pull request on Mar 6, 2025, with the same commit message.
This was referenced Mar 6, 2025
Jeffwan added another commit to Jeffwan/aibrix referencing this pull request on Mar 6, 2025, with the same commit message.
Jeffwan added a commit referencing this pull request on Mar 6, 2025, a release cherry-pick with the following message:

* Ignore worker pods for gateway routing (#776): also ignore worker pods in UpdatePod; use node worker as a const. Signed-off-by: Varun Gupta <[email protected]>
* Disable ENABLE_PROBES_INJECTION in correct way (#779): fix env error caused by the helm set command; use `--set-string` for boolean env values. Signed-off-by: Jiaxin Shan <[email protected]>
* Make stream include usage as optional (#788). Signed-off-by: Varun Gupta <[email protected]>
* Append ray head label selector in PodAutoscaler (#789): consider only the engine pod for multi-node inference, since Ray workers run no HTTP server and expose only resource metrics, and with Tensor Parallelism GPU utilization is expected to be the same across GPUs. Signed-off-by: Jiaxin Shan <[email protected]>
* Update request message processing for /v1/completion input (#794): update request message processing for prompt input. Signed-off-by: Varun Gupta <[email protected]>
* Fix a cherry-pick package name issue. Signed-off-by: Jiaxin Shan <[email protected]>
* Add missing release branch pull requests trigger. Signed-off-by: Jiaxin Shan <[email protected]>

Signed-off-by: Varun Gupta <[email protected]>, Jiaxin Shan <[email protected]>. Co-authored-by: Varun Gupta <[email protected]>
Pull Request Description
This change considers only the engine (Ray head) pod for multi-node inference. Ray workers do not run an HTTP server, so they can expose only resource metrics, not application metrics. For resource metrics, since we use Tensor Parallelism, GPU utilization is expected to be the same across GPUs, so the head pod's metrics are representative.
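In essence, the autoscaler filters its candidate pods down to the one matching the head label before sampling metrics. A minimal sketch of that filtering, in Python rather than the project's Go, with an assumed `ray.io/node-type` label key (the KubeRay convention; aibrix's actual selector may differ):

```python
# Sketch (assumed label key): keep only pods whose labels satisfy the
# head-pod selector, so worker pods never feed the autoscaler's metrics.

def filter_head_pods(pods, selector=None):
    """Return only pods whose labels match every key/value in selector."""
    selector = selector or {"ray.io/node-type": "head"}
    return [
        pod for pod in pods
        if all(pod.get("labels", {}).get(k) == v for k, v in selector.items())
    ]

pods = [
    {"name": "engine-head-0", "labels": {"ray.io/node-type": "head"}},
    {"name": "engine-worker-0", "labels": {"ray.io/node-type": "worker"}},
    {"name": "engine-worker-1", "labels": {"ray.io/node-type": "worker"}},
]

print([p["name"] for p in filter_head_pods(pods)])  # → ['engine-head-0']
```

Because the head pod's GPU utilization mirrors the workers' under Tensor Parallelism, scaling on this one pod's metrics is sufficient.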
Related Issues
Resolves: part of #758
Testing is good.

Contribution Guidelines (Expand for Details)
We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:
Pull Request Title Format
Your PR title should start with one of these prefixes to indicate the nature of the change:
[Bug]: Corrections to existing functionality
[CI]: Changes to build process or CI pipeline
[Docs]: Updates or additions to documentation
[API]: Modifications to aibrix's API or interface
[CLI]: Changes or additions to the Command Line Interface
[Misc]: For changes not covered above (use sparingly)
Note: For changes spanning multiple categories, use multiple prefixes in order of importance.
Submission Checklist
By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.