Add concurrency limit to QueryRunner and DataSourceSettings #2513
Conversation
In `startExperimentResultQueries`, we currently start all queries at once and do a `Promise.all`. This is causing the race condition you mention in your PR description. We can fix this fairly easily by awaiting each promise before starting the next query. It's really only the experiment query that has this issue, since all of the other ones fire 1 (or maybe 2) queries total.

One thing I can see happening is someone's experiment being stuck in a queued state because all of the slots are being hogged by a few super long-running queries. What would be really cool is to pair this with #2323 so you can easily see a list of all of your running queries and where they are coming from, and maybe even kill them directly from that list.
This is one thing I was thinking about; I didn't know we had a page for that now! Do you think showing a link to that page while data is updating would do a good enough job of explaining what's happening?
Yeah. The page isn't super useful right now (it doesn't link to the experiment that spawned the query and there's no way to cancel), but it's still helpful and we can improve it over time.
Makes sense. I tried rewriting the promise logic in …
```diff
@@ -279,7 +279,7 @@ export const startExperimentResultQueries = async (
     organization
   );

   const singlePromises = singles.map(async (m) => {
```
The ES6 `for...of` loop format allows awaiting within each iteration before moving on to the next, so we can drop the concurrency granularity to 1.
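A minimal sketch of this suggestion, assuming a `singles` array like the one in the diff above (the `runSingleQuery` callback name is hypothetical, not the PR's actual helper):

```typescript
// Before (roughly): all queries start at once, racing past any concurrency check.
//   const singlePromises = singles.map(async (m) => runSingleQuery(m));
//   await Promise.all(singlePromises);

// After: a for...of loop awaits each query before starting the next,
// effectively running with a concurrency of 1.
async function runSequentially<T>(
  items: T[],
  runSingleQuery: (item: T) => Promise<void>
): Promise<void> {
  for (const item of items) {
    // Each iteration blocks until the previous query has finished.
    await runSingleQuery(item);
  }
}
```

With `.map` + `Promise.all`, every callback starts before any of them awaits; the `for...of` form is the smallest change that serializes them.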
```jsx
<div className="col-auto mb-2">
  <em>Time queued</em>:{" "}
  <strong>
    {formatDistanceStrict(
      getValidDate(query.createdAt),
      getValidDate(query.startedAt)
    )}
  </strong>
</div>
```
This is definitely an optional addition, but I found myself wanting to check the queued time after re-running the queries, so I thought I'd toss it in here. Let me know whether this seems useful to keep or if I should get rid of it.
@jdorn I fixed the issue I had before, and the queries now go one at a time. I updated the GIFs with the new functionality and dropped the wait time by half, because it's much more impactful now with a low concurrency limit. Should be ready for a review again once you're done with the 3.0 stuff.
Features and Changes

Creates a new field in `DataSourceSettings`: `maxConcurrentQueries`. This field is respected by `QueryRunner`'s `startQuery` and `startReadyQueries` entrypoints, which now check the number of running queries before committing to starting a new query.

Added a helper to `QueryModel` to grab the count of running queries, and a new section to the `ConnectionSettings` modal to allow editing of this setting (and create a place for other future shared settings to live).

Note: the limit is only respected across multiple experiments (i.e. one experiment which runs N queries, where N is over the concurrency limit, will not trigger the protections) due to race conditions with queries starting simultaneously.

Testing
a. (If the delay isn't noticeable, you can change the default `currentTimeout: number = 500` to 8000.)

Screenshots
Here's a screen recording of the UX with a concurrency limit of 1.

And here's a corresponding recording of what's happening on the backend, with a few extra `console.log`s added for clarity.
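As a footnote, the check described in "Features and Changes" might look roughly like this. This is a hedged sketch, not the PR's actual code: only `maxConcurrentQueries` and the `DataSourceSettings`/`QueryModel` names come from the description; `canStartQuery` and `countRunningQueries` are hypothetical stand-ins:

```typescript
interface DataSourceSettings {
  maxConcurrentQueries?: number; // the new field; undefined means no limit
}

// Hypothetical stand-in for the QueryModel helper that counts running queries.
type CountRunningQueries = () => Promise<number>;

// Before starting a query, compare the running count against the limit;
// if at capacity, the caller leaves the query in its queued state.
async function canStartQuery(
  settings: DataSourceSettings,
  countRunningQueries: CountRunningQueries
): Promise<boolean> {
  const limit = settings.maxConcurrentQueries;
  if (!limit || limit <= 0) return true; // no limit configured
  const running = await countRunningQueries();
  return running < limit;
}
```

As the description notes, a check like this is racy when several queries start simultaneously, which is why the limit only holds across separate experiments.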