
Append ray head label selector in PodAutoscaler #789

Merged

Conversation

Collaborator

@Jeffwan Jeffwan commented Mar 4, 2025

Pull Request Description

This change appends a Ray head label selector in the PodAutoscaler so that only the engine (head) pod is considered for multi-node inference. Ray worker pods do not run an HTTP server, so they cannot expose application metrics, only resource metrics. And since we use tensor parallelism, we expect GPU utilization to be roughly the same across pods, so the head pod's resource metrics are representative.
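To illustrate the idea, here is a minimal Go sketch of appending a head-only label selector and matching pods against it. It assumes KubeRay's labeling convention (`ray.io/node-type: head` / `worker`); the actual label key and helper names used by aibrix may differ, so treat this as a hypothetical sketch rather than the PR's implementation.

```go
package main

import "fmt"

// appendRayHeadSelector adds the Ray head label to an existing selector so
// that metric collection only matches head pods. The label key/value follow
// KubeRay's convention; the real key used by aibrix may differ.
func appendRayHeadSelector(selector map[string]string) map[string]string {
	if selector == nil {
		selector = map[string]string{}
	}
	selector["ray.io/node-type"] = "head"
	return selector
}

// matchesSelector reports whether a pod's labels satisfy every selector entry.
func matchesSelector(podLabels, selector map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := appendRayHeadSelector(map[string]string{"app": "llm"})
	pods := []map[string]string{
		{"app": "llm", "ray.io/node-type": "head"},
		{"app": "llm", "ray.io/node-type": "worker"},
	}
	for i, labels := range pods {
		// pod 0 matches: true; pod 1 matches: false
		fmt.Printf("pod %d matches: %v\n", i, matchesSelector(labels, selector))
	}
}
```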

Related Issues

Resolves: part of #758

Important: Before submitting, please complete the description above and review the checklist below.

Testing is good.
[screenshot: test results]


Contribution Guidelines (Expand for Details)

We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:

Pull Request Title Format

Your PR title should start with one of these prefixes to indicate the nature of the change:

  • [Bug]: Corrections to existing functionality
  • [CI]: Changes to build process or CI pipeline
  • [Docs]: Updates or additions to documentation
  • [API]: Modifications to aibrix's API or interface
  • [CLI]: Changes or additions to the Command Line Interface
  • [Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.

Submission Checklist

  • PR title includes appropriate prefix(es)
  • Changes are clearly explained in the PR description
  • New and existing tests pass successfully
  • Code adheres to project style and best practices
  • Documentation updated to reflect changes (if applicable)
  • Thorough testing completed, no regressions introduced

By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.


Signed-off-by: Jiaxin Shan <[email protected]>
@Jeffwan Jeffwan force-pushed the jiaxin/autoscaling-for-multi-node branch from 7381415 to 17b528c Compare March 4, 2025 01:50
@Jeffwan Jeffwan merged commit c73e063 into vllm-project:main Mar 4, 2025
11 checks passed
xieus pushed a commit that referenced this pull request Mar 5, 2025

Signed-off-by: Jiaxin Shan <[email protected]>
Signed-off-by: Liguang Xie <[email protected]>
Jeffwan added a commit to Jeffwan/aibrix that referenced this pull request Mar 6, 2025

Signed-off-by: Jiaxin Shan <[email protected]>
Jeffwan added a commit to Jeffwan/aibrix that referenced this pull request Mar 6, 2025

Signed-off-by: Jiaxin Shan <[email protected]>
Jeffwan added a commit that referenced this pull request Mar 6, 2025
* Ignore worker pods for gateway routing (#776)
* Ignore worker pods for gateway routing
* ignore worker pods in UpdatePod as well
* use node worker as const

Signed-off-by: Varun Gupta <[email protected]>

* Disable ENABLE_PROBES_INJECTION in correct way (#779)

fix env error caused by helm set command; use --set-string for env bool

Signed-off-by: Jiaxin Shan <[email protected]>

* Make stream include usage as optional (#788)

* Make stream include usage as optional

---------

Signed-off-by: Varun Gupta <[email protected]>

* Append ray head label selector in PodAutoscaler (#789)


Signed-off-by: Jiaxin Shan <[email protected]>

* Update request message processing for /v1/completion input (#794)

Update request message processing for prompt input

Signed-off-by: Varun Gupta <[email protected]>

* Fix a cherry-pick package name issue

Signed-off-by: Jiaxin Shan <[email protected]>

* Add missing release branch pull requests trigger

Signed-off-by: Jiaxin Shan <[email protected]>

---------

Signed-off-by: Varun Gupta <[email protected]>
Signed-off-by: Jiaxin Shan <[email protected]>
Co-authored-by: Varun Gupta <[email protected]>
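Several commits in the squash above share one theme: Ray worker pods run no HTTP server, so they should be excluded both from gateway routing (#776) and from autoscaler metric collection (#789). A minimal Go sketch of that filtering, under the assumption that worker pods carry KubeRay's `ray.io/node-type: worker` label (the constant names below are illustrative, not aibrix's actual identifiers):

```go
package main

import "fmt"

// Labels mirroring KubeRay's pod-labeling convention; aibrix's actual
// constants may differ.
const (
	rayNodeTypeLabel = "ray.io/node-type"
	rayWorkerNode    = "worker"
)

type pod struct {
	Name   string
	Labels map[string]string
}

// routablePods drops Ray worker pods before building routing targets,
// since workers expose no HTTP endpoint to route requests to.
func routablePods(pods []pod) []pod {
	var out []pod
	for _, p := range pods {
		if p.Labels[rayNodeTypeLabel] == rayWorkerNode {
			continue // worker: no HTTP server, skip
		}
		out = append(out, p)
	}
	return out
}

func main() {
	pods := []pod{
		{Name: "llm-head-0", Labels: map[string]string{rayNodeTypeLabel: "head"}},
		{Name: "llm-worker-0", Labels: map[string]string{rayNodeTypeLabel: rayWorkerNode}},
	}
	// Only the head pod remains routable.
	fmt.Println(len(routablePods(pods)))
}
```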