[batch] E2E works with driver and request proxy #272
Conversation
Please rebase onto the latest main and resolve the testing issues; then I will start the review.
Can you add some docs and examples?
@@ -77,6 +99,7 @@ def schedule_get_job(self):
    else:
        print("Unsupported scheduling policy!")

    self._job_manager.start_execute_job(job_id)
What's the role of the scheduler? If it's only used to select the job id, should the main workflow be responsible for launching the job, or do you want the scheduler to include the execution as well?
Yes, the main workflow launches the job. It is the scheduler's responsibility to select which job, and the job manager handles the jobs' state transitions. Regarding the `start_execute_job` call you marked here, maybe I need to rename that function; it is only used to change the job's state.
It is the scheduler's responsibility to select which job. The job manager handles the jobs' state transitions.
My point is that you embed the job manager inside the scheduler. Should it be removed from the scheduler and moved into the driver instead?
Inside the scheduler, I have to check the job's status, which requires exposing the job manager to the scheduler.
Hmm, the scheduler could be static, and you could pass the job status into the scheduler. Right now the coupling makes testing, etc., a little bit harder; you would need to mock lots of things in the future.
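For illustration, here is a minimal sketch of the decoupling suggested above: the scheduler only picks a job from statuses passed in, while the driver owns the job manager. All class and method names below (including `list_job_statuses`) are hypothetical and not the actual AIBrix batch API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class JobState(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"


@dataclass
class JobStatus:
    job_id: str
    state: JobState


class FifoScheduler:
    """Stateless scheduler: it only decides which pending job runs next."""

    @staticmethod
    def select_next(statuses: List[JobStatus]) -> Optional[str]:
        # Statuses are assumed to arrive in submission order.
        for status in statuses:
            if status.state == JobState.PENDING:
                return status.job_id
        return None


class Driver:
    """The driver owns the job manager and launches whatever the scheduler picks."""

    def __init__(self, job_manager):
        self._job_manager = job_manager
        self._scheduler = FifoScheduler()

    def step(self) -> None:
        statuses = self._job_manager.list_job_statuses()  # hypothetical accessor
        job_id = self._scheduler.select_next(statuses)
        if job_id is not None:
            # The state transition stays inside the job manager.
            self._job_manager.start_execute_job(job_id)
```

Shaped this way, the scheduler can be unit-tested with plain lists of statuses and no mocks.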
I am OK with getting this merged now. Let's revisit it and make the necessary refactors later.
@xinchen384 it seems some of the comments and TODOs are not addressed yet. Let's continue the work and try to merge it tomorrow.
Now all comments are addressed. Documentation with examples has been added as well. @Jeffwan
python/aibrix/aibrix/batch/README.md
Outdated
The request's format is JSON.
The JSON should cover multiple attributes as specified here, https://platform.openai.com/docs/guides/batch/getting-started, such as the endpoint and completion window.

## Submit job input data
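As a concrete illustration of those attributes, one request entry in that format might look like the sketch below. The model name and prompt are placeholders, and the exact set of fields is defined by the OpenAI batch guide linked above.

```python
import json

# One batch request entry: which endpoint to hit and the body to send to it.
request = {
    "custom_id": "request-1",
    "method": "POST",
    "url": "/v1/chat/completions",  # the target endpoint
    "body": {
        "model": "my-model",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello world!"}],
        "max_tokens": 128,
    },
}

print(json.dumps(request))
```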
This is not what I expected. As we discussed, we want to support an OpenAI-style batch API interface. If we do not support that, it increases the user's learning curve and usage burden a lot.
For example, where does the running instance of the user's Python script live, and can it be interrupted? It is much harder to use compared to submitting through a remote API.
Are you saying these interfaces are different from the OpenAI batch API? I think they are the same. The only missing part is a proxy that exposes these interfaces to the client side.
Let's leave that to the next PR.
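For context, once such a proxy exists, a client-side submission in the OpenAI style would look roughly like the sketch below, using the standard openai Python client. The base_url pointing at an AIBrix-fronted endpoint is hypothetical, since the request proxy is deferred to a later PR.

```python
from openai import OpenAI

# Hypothetical AIBrix-fronted endpoint; the proxy exposing it is not part of this PR.
client = OpenAI(base_url="http://localhost:8888/v1", api_key="not-used")

# Upload the prepared JSONL input file, then create a batch that references it.
batch_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)
```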
python/aibrix/aibrix/batch/README.md
Outdated
# Batch API Tutorial

## Prepare dataset
Before submitting a batch job, you need to prepare input data as a file.
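To make that step concrete, here is a small sketch that prepares such an input file with two requests, one JSON object per line. The file name, model name, and prompts are all placeholders.

```python
import json

# Two placeholder requests; each line of the dataset file is one JSON object.
requests = [
    {
        "custom_id": f"request-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "my-model",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Hello world!", "Summarize the batch API."], start=1)
]

with open("batch_input.jsonl", "w") as f:
    for request in requests:
        f.write(json.dumps(request) + "\n")
```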
Can you give an example dataset here to reduce the effort needed to prepare data?
I suggest referencing the file in the README so people can easily find it. Otherwise, how would a user know there's one in the testing folder?
There are some minor issues we can address in future PRs. This one looks good to me.
* e2e driver and test
* comment functions
* check job status in test
* format update
* update copyright
* add examples with instructions and interfaces
* move batch tutorial

Co-authored-by: xin.chen <[email protected]>
Pull Request Description
Related Issues
Resolves: part of #182
Important: Before submitting, please complete the description above and review the checklist below.
Contribution Guidelines (Expand for Details)
We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:
Pull Request Title Format
Your PR title should start with one of these prefixes to indicate the nature of the change:
[Bug]: Corrections to existing functionality
[CI]: Changes to build process or CI pipeline
[Docs]: Updates or additions to documentation
[API]: Modifications to aibrix's API or interface
[CLI]: Changes or additions to the Command Line Interface
[Misc]: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.
Submission Checklist
By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.