What are you trying to accomplish with this PR?
Try to make fetching slightly faster by reducing the number of requests we need to make to get the full result set. Take a look at the tests below--WDYT, does this change make sense?
Also removes the `-a` option, which is ignored for the json format anyway.

Tests
tl;dr The variance in the numbers across runs is pretty big, but the unlimited chunk size seems to consistently be a bit faster relative to the other requests in the same run.
The first two tests below invoke `kubectl get pods -o json` twice per chunk size against a production cluster during a deploy. The average sync cycle time for that cluster is 40s (so faster than I get locally with any chunk size, but still brutal).

On this run I did the calls in reverse order in case the stage of the deploy was interfering (reordered for ease of comparison):
This run is against a cluster that for some reason has a much faster sync cycle in production (~20s), so I did 4 calls for each chunk size instead of 2:
And finally here's a test against a smaller cluster using a larger number of runs (15):
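The exact timing harness isn't reproduced above, but the comparison was roughly of this shape (a sketch, not the real script; the non-zero chunk sizes are placeholders, and `--chunk-size=0` is kubectl's "no chunking" setting):

```sh
# Time the same query at a few chunk sizes, repeating each one a couple of
# times to smooth out the run-to-run variance mentioned in the tl;dr.
for chunk in 500 100 0; do
  for run in 1 2; do
    echo "chunk-size=$chunk run=$run"
    /usr/bin/time -p kubectl get pods -o json --chunk-size="$chunk" > /dev/null
  done
done
```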
How is this accomplished?
Changing an option. The chunking was added to improve perceived latency:
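The diff itself isn't shown here, but in kubectl terms the change amounts to something like the following (illustrative command lines only; the "before" chunk size of 500 is just an example, while `--chunk-size=0` really does disable chunking on `kubectl get`):

```sh
# Before: page through the result set in chunks, and pass the -a flag,
# which -o json ignores anyway.
kubectl get pods -a -o json --chunk-size=500

# After: drop -a and fetch the full result set in a single request.
kubectl get pods -o json --chunk-size=0
```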
What could go wrong?
Things get slightly slower instead of slightly faster.