
Use vllm metrics for routing #274

Merged
varungup90 merged 8 commits from use-vllm-metric into main on Oct 7, 2024

Conversation

varungup90 (Collaborator)

No description provided.
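The change routes requests using metrics scraped from vLLM pods. As a hedged illustration of the general idea only (not this PR's actual code), a least-loaded policy over a cached vLLM gauge such as `vllm:num_requests_running` might look like the following; the `getMetric` callback and the metric choice are assumptions:

```go
package routing

import "math"

// selectTargetPod sketches metric-based routing: pick the pod with the
// fewest running requests, as reported by the vLLM gauge
// "vllm:num_requests_running". The getMetric callback (e.g. backed by a
// metrics cache) is an assumed abstraction, not this PR's code.
func selectTargetPod(pods []string, getMetric func(pod, metric string) (float64, bool)) string {
	best, bestVal := "", math.Inf(1)
	for _, pod := range pods {
		v, ok := getMetric(pod, "vllm:num_requests_running")
		if !ok {
			continue // no fresh sample for this pod; skip it
		}
		if v < bestVal {
			best, bestVal = pod, v
		}
	}
	return best // empty string if no pod had a usable sample
}
```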

Review thread on `parseMetricFromBody`:

```go
	return metricValue, nil
}

func parseMetricFromBody(body []byte, metricName string) (float64, error) {
```

Collaborator: The autoscaler has similar features. Let's refactor this part later and make sure the cache and the autoscaler fetcher can use the same library.

varungup90 (Collaborator, Author): Agreed. Let me discuss with him.
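For context, here is a minimal sketch of how `parseMetricFromBody` could be implemented, assuming the pod's `/metrics` endpoint serves Prometheus text format and using the `prometheus/common` expfmt parser; the actual implementation in this PR may differ:

```go
package cache

import (
	"bytes"
	"fmt"

	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// parseMetricFromBody parses a Prometheus text-format metrics body and
// returns the value of the first sample of metricName. A sketch only:
// it ignores labels, so multi-series metrics collapse to one sample.
func parseMetricFromBody(body []byte, metricName string) (float64, error) {
	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(bytes.NewReader(body))
	if err != nil {
		return 0, fmt.Errorf("parse metrics body: %w", err)
	}
	family, ok := families[metricName]
	if !ok || len(family.Metric) == 0 {
		return 0, fmt.Errorf("metric %q not found", metricName)
	}
	m := family.Metric[0]
	switch family.GetType() {
	case dto.MetricType_GAUGE:
		return m.GetGauge().GetValue(), nil
	case dto.MetricType_COUNTER:
		return m.GetCounter().GetValue(), nil
	default:
		return 0, fmt.Errorf("unsupported type for metric %q", metricName)
	}
}
```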

Jeffwan (Collaborator) commented Oct 5, 2024:

PR looks good to me.

varungup90 (Collaborator, Author) replied:

> PR looks good to me.

Updated the PR with a cache to pull metrics once for each pod.
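A hedged sketch of that "fetch once per pod" idea: a single background loop scrapes each pod's `/metrics` endpoint once per interval and stores the parsed values, so routing reads cached numbers instead of scraping on every request. The pod-listing helper and the vLLM port 8000 are assumptions; `parseMetricFromBody` is the function from the diff above.

```go
package cache

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

// podMetricsCache stores the latest parsed metrics per pod so that each
// pod is scraped once per refresh interval, not once per request.
type podMetricsCache struct {
	mu      sync.RWMutex
	metrics map[string]map[string]float64 // pod IP -> metric name -> value
}

func newPodMetricsCache() *podMetricsCache {
	return &podMetricsCache{metrics: make(map[string]map[string]float64)}
}

// refreshLoop scrapes every pod once per tick. listPods is an assumed
// helper (e.g. backed by a pod informer).
func (c *podMetricsCache) refreshLoop(interval time.Duration, listPods func() []string, metricNames []string) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		for _, pod := range listPods() {
			body, err := scrape(pod)
			if err != nil {
				continue // keep the last known values on scrape failure
			}
			values := make(map[string]float64, len(metricNames))
			for _, name := range metricNames {
				if v, err := parseMetricFromBody(body, name); err == nil {
					values[name] = v
				}
			}
			c.mu.Lock()
			c.metrics[pod] = values
			c.mu.Unlock()
		}
	}
}

// Get returns the cached value for one pod/metric pair.
func (c *podMetricsCache) Get(pod, metric string) (float64, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.metrics[pod][metric]
	return v, ok
}

// scrape fetches the raw /metrics body; port 8000 is an assumed default
// for a vLLM server.
func scrape(podIP string) ([]byte, error) {
	resp, err := http.Get(fmt.Sprintf("http://%s:8000/metrics", podIP))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```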

varungup90 merged commit c6e1c2b into main on Oct 7, 2024
10 checks passed
varungup90 deleted the use-vllm-metric branch on October 7, 2024 at 20:58
gangmuk pushed a commit that referenced this pull request Jan 25, 2025
* Cache bug fix in update pod and model mapping (#259)

* test

* Use vllm metrics for routing

* nit reverts

* update log level

* refactor cache to fetch metrics once

* remove port from random routing
2 participants