Update to use token based pagination for forward compatability #936
Comments
Food for thought: the gRPC docs advise the use of page tokens. Sage advice, since this can allow different storage backends to alter the pagination mechanism. While a relative offset might be useful for one backend, an ID offset might be more appropriate for another.
Love it. Great idea/advice. Much better to follow Google's best practices than trying to come up with / use my own scheme. Will change the branch I'm working on to use tokens instead of limit/offset.
I'm changing this title to simply be 'update to use token based pagination instead of limit/offset', as I realized we can't enforce a default limit (and likely even a max) without it being a breaking API change, requiring a new v2 of the public API. Since we haven't fully planned out all of the breaking changes in the v2 API, I'm going to defer adding a max/default for pagination. I still think it's a good idea to do so; however, it also has the unintended consequence of breaking pagination in the UI, as currently all of the UI pagination is done client side.

tl;dr: I'm going to re-create the default/max limit work as a separate issue. I will proceed with token based pagination but will not remove the existing limit/offset params.

For #935, I think we can go with an opt-in approach, meaning that if you want caching of list responses you HAVE to include a limit as part of your query to protect you from blowing up the cache size. This limit could be as large as you like, but it's still a conscious choice that the user would be making when enabling list caching. What do you think @GeorgeMac ?
That sounds great. Equally, since the client in the UI is controlled within this project, it is something that can be taken advantage of by the UI immediately.
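To illustrate the opaque-token idea discussed above: the token can wrap whatever cursor the backend uses (here a plain offset, but it could just as well be an ID) in an encoded string, so the wire format stays stable even if the storage layer changes its pagination mechanism later. This is only a sketch; `encodePageToken`/`decodePageToken` are hypothetical names, not Flipt's actual implementation.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
)

// encodePageToken wraps a backend-specific cursor (here a plain offset)
// in an opaque base64 string, so clients cannot depend on its contents.
func encodePageToken(offset int) string {
	return base64.StdEncoding.EncodeToString([]byte(strconv.Itoa(offset)))
}

// decodePageToken recovers the cursor from a token; an empty token
// means "start from the beginning".
func decodePageToken(token string) (int, error) {
	if token == "" {
		return 0, nil
	}
	b, err := base64.StdEncoding.DecodeString(token)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(string(b))
}

func main() {
	tok := encodePageToken(25)
	off, _ := decodePageToken(tok)
	fmt.Println(tok, off)
}
```

Because the client only ever echoes the token back, an offset-based SQL store and an ID-cursor NoSQL store can sit behind the same API without a breaking change.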
Is your feature request related to a problem? Please describe.
Currently there are no default or enforced limits for requests that return multiple objects in their response, such as ListFlags, ListSegments or BatchEvaluate calls. This creates a risk of crashing the server if a user has more resources than will fit in memory when they call one of these endpoints. It also prevents us from being able to cache these objects (see #935).

This issue has been changed to simply enable token based pagination, to allow us to support different datastores in the future that do not support limit/offset (i.e. NoSQL-type datastores).
See this comment
Will create a separate issue tracking the original thoughts of adding default/max limits, but it would be a breaking API change. See this comment
Describe the solution you'd like
1. Add default and max `limit` values for list/batch requests on the server
2. Change the UI pagination to use the `limit` + `offset` params instead of the client side pagination it's using today

Attach an export
N/A
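Although the default/max limit work is being deferred to a separate issue, the server-side enforcement in item 1 above could look like the following sketch. The concrete bounds (`defaultLimit`, `maxLimit`) are illustrative assumptions, not values chosen by this issue:

```go
package main

import "fmt"

// Hypothetical bounds; the issue defers choosing real values to a
// separate discussion, so these numbers are illustrative only.
const (
	defaultLimit = 20
	maxLimit     = 100
)

// clampLimit applies a default when the client sends no limit (0 or
// negative) and caps requests that exceed the server's maximum.
func clampLimit(requested int) int {
	switch {
	case requested <= 0:
		return defaultLimit
	case requested > maxLimit:
		return maxLimit
	default:
		return requested
	}
}

func main() {
	fmt.Println(clampLimit(0), clampLimit(50), clampLimit(500))
}
```

As noted in the comments, applying such a clamp to the existing API would be a breaking change, which is why it belongs in a v2 discussion rather than this issue.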
Additional context
Add any other context or screenshots about the feature request here.