[Usage]: vllm OpenAI API Offline Batch Inference #8567
Comments
You are trying to conduct batch inference using the OpenAI client, which connects to the online server. For offline batch inference via the OpenAI API, you should instead use vLLM's offline batch entrypoint.
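A minimal sketch of that invocation, based on the documented `run_batch` entrypoint; the model name and file paths here are placeholders:

```bash
# Run offline batch inference over an OpenAI-style batch file
# (input/output paths and model name are placeholders).
python -m vllm.entrypoints.openai.run_batch \
    -i test.jsonl \
    -o results.jsonl \
    --model meta-llama/Meta-Llama-3-8B-Instruct
```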
Okay, but how can I integrate that with my Docker Compose setup? The Docker image always uses api_server.py, so do I have to create a new Docker image for that, or does vLLM provide one?
A simpler way would be to run the vLLM Docker container as-is, then open a new interactive shell inside it and run any commands you want.
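For example (the container name is a placeholder, and the JSONL files are assumed to be reachable inside the container, e.g. via a mounted volume):

```bash
# Open a shell in the already-running vLLM container (name is a placeholder)...
docker exec -it vllm-container bash

# ...then, inside that shell, run the offline batch entrypoint.
python -m vllm.entrypoints.openai.run_batch -i test.jsonl -o results.jsonl --model meta-llama/Meta-Llama-3-8B-Instruct
```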
Well, I have tried to run it inside the Docker container, and it is not working properly.
Yes, this is why it is called offline inference. Feel free to open an issue to request online support.
Sure, |
If you intend to run only batch inference inside Docker, what you can do is modify the image to run something like the following:
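One way to do that with Docker Compose, sketched under the assumption that the official vllm/vllm-openai image is used (the service name, mounted paths, and model are placeholders), is to override the entrypoint so the container runs the batch script instead of the API server:

```yaml
# Hypothetical compose service that runs the offline batch entrypoint
# instead of the default OpenAI API server.
services:
  vllm-batch:
    image: vllm/vllm-openai:latest
    entrypoint: ["python3", "-m", "vllm.entrypoints.openai.run_batch"]
    command: ["-i", "/data/test.jsonl", "-o", "/data/results.jsonl",
              "--model", "meta-llama/Meta-Llama-3-8B-Instruct"]
    volumes:
      - ./data:/data
```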
Your current environment
How would you like to use vllm
I want to use the OpenAI library to do offline inference on my local vLLM model.
I use this compose.yml to create an API server using vLLM.
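A minimal sketch of such a compose.yml, assuming the official vllm/vllm-openai image; the model name, port, and cache path are illustrative, not the actual file:

```yaml
# Illustrative compose.yml for the online OpenAI-compatible server
# (model, port, and cache path are placeholders).
services:
  vllm:
    image: vllm/vllm-openai:latest
    command: ["--model", "meta-llama/Meta-Llama-3-8B-Instruct"]
    ports:
      - "8000:8000"
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface
```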
When I try to use the batch API endpoint (roughly as in the sketch below), I get `NotFoundError: Error code: 404 - {'detail': 'Not Found'}` for both create calls. The test.jsonl file has the same format as in this tutorial: https://platform.openai.com/docs/guides/batch/getting-started?lang=curl, except that the model name is aligned with the correct model name. I would assume that there is a problem with the endpoints when using vLLM as the backend. Is it possible to use them or initialize them?
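A reconstruction of what the two create calls presumably look like, following the linked OpenAI batch tutorial; the base_url, api_key, and file name are assumptions. These are the calls hitting /v1/files and /v1/batches, which is where the 404 is raised:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (URL and key are placeholders).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# 1) Upload the batch input file -> POST /v1/files
batch_input_file = client.files.create(
    file=open("test.jsonl", "rb"),
    purpose="batch",
)

# 2) Create the batch job -> POST /v1/batches
batch = client.batches.create(
    input_file_id=batch_input_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
```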