To simplify our scheduling logic and minimize edge cases and decision branches, I would like us to move to a state where compute nodes reject bids only for terminal reasons, not for transient ones such as a lack of available capacity. Compute nodes already queue jobs locally when they don't have immediate capacity; they can report both their available and queued capacity to the orchestrator, and the orchestrator can rank nodes based on those capacities, filter out compute nodes with a large queue, and retry them later. This also lets higher-priority jobs be scheduled on compute nodes with a large backlog of pending jobs and still get the chance to run sooner, provided we use priority queues in the executor_buffer.
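To make the priority-queue idea concrete, here is a minimal sketch of what a priority-ordered executor_buffer could look like. The type and method names (`executorBuffer`, `Enqueue`, `Dequeue`) are illustrative, not Bacalhau's actual API; the point is only that a late-arriving high-priority job jumps ahead of an existing backlog.

```go
package main

import (
	"container/heap"
	"fmt"
)

// job is a hypothetical queued execution with a scheduling priority.
type job struct {
	ID       string
	Priority int // higher runs first
	seq      int // arrival order, used as a FIFO tie-breaker
}

// jobQueue implements heap.Interface as a max-heap on Priority.
type jobQueue []*job

func (q jobQueue) Len() int { return len(q) }
func (q jobQueue) Less(i, j int) bool {
	if q[i].Priority != q[j].Priority {
		return q[i].Priority > q[j].Priority
	}
	return q[i].seq < q[j].seq // earlier arrivals first on ties
}
func (q jobQueue) Swap(i, j int) { q[i], q[j] = q[j], q[i] }
func (q *jobQueue) Push(x any)   { *q = append(*q, x.(*job)) }
func (q *jobQueue) Pop() any {
	old := *q
	n := len(old)
	j := old[n-1]
	*q = old[:n-1]
	return j
}

// executorBuffer holds jobs that can't run immediately and releases
// the highest-priority one whenever capacity frees up.
type executorBuffer struct {
	queue jobQueue
	seq   int
}

func (b *executorBuffer) Enqueue(id string, priority int) {
	b.seq++
	heap.Push(&b.queue, &job{ID: id, Priority: priority, seq: b.seq})
}

func (b *executorBuffer) Dequeue() *job {
	if b.queue.Len() == 0 {
		return nil
	}
	return heap.Pop(&b.queue).(*job)
}

func main() {
	buf := &executorBuffer{}
	buf.Enqueue("batch-1", 0)
	buf.Enqueue("batch-2", 0)
	buf.Enqueue("urgent", 10) // arrives last, but dequeues first
	for j := buf.Dequeue(); j != nil; j = buf.Dequeue() {
		fmt.Println(j.ID) // urgent, batch-1, batch-2
	}
}
```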
Even if the Orchestrator's view of the network's utilization is stale, we have solutions that can mitigate that, including:
Use Sparrow-style scheduling, where the orchestrator selects multiple compute nodes and a node only accepts when the job reaches the head of its queue and is ready to be executed (Remove bidding related semantics from compute APIs #3856)
Add a housekeeping task that tracks executions stuck too long in the negotiation phase and enqueues an evaluation. The scheduler might then pick another node if a better-ranked one is available, or keep the same node otherwise
This PR not only closes #3992, but also adds a node ranker based on
available and queued capacity, which we didn't have before. I had to
introduce the ranker alongside this PR to keep some tests passing
that relied on bid rejection due to exceeded resource limits,
specifically `TestParallelGPU`
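A capacity-based ranker could look roughly like the sketch below. The `nodeInfo` fields, the single-resource (CPU-only) view, and the `maxQueued` cutoff are simplifying assumptions; the actual ranker may weigh multiple resource dimensions. The idea is to order nodes by effective headroom and to filter out (and later retry) nodes whose queue is already deep:

```go
package main

import (
	"fmt"
	"sort"
)

// nodeInfo is an illustrative view of what a compute node reports.
type nodeInfo struct {
	ID           string
	AvailableCPU float64 // cores free right now
	QueuedCPU    float64 // cores already committed to locally queued jobs
}

// rankNodes orders candidates by effective headroom (available minus
// queued) and drops nodes whose queued capacity exceeds maxQueued;
// those are skipped now and can be retried on a later evaluation.
func rankNodes(nodes []nodeInfo, maxQueued float64) []nodeInfo {
	ranked := make([]nodeInfo, 0, len(nodes))
	for _, n := range nodes {
		if n.QueuedCPU > maxQueued {
			continue // queue too deep: transient, not a terminal rejection
		}
		ranked = append(ranked, n)
	}
	sort.Slice(ranked, func(i, j int) bool {
		return ranked[i].AvailableCPU-ranked[i].QueuedCPU >
			ranked[j].AvailableCPU-ranked[j].QueuedCPU
	})
	return ranked
}

func main() {
	nodes := []nodeInfo{
		{ID: "n1", AvailableCPU: 4, QueuedCPU: 8}, // filtered: queue exceeds limit
		{ID: "n2", AvailableCPU: 2, QueuedCPU: 0}, // headroom 2
		{ID: "n3", AvailableCPU: 4, QueuedCPU: 3}, // headroom 1
	}
	for _, n := range rankNodes(nodes, 4) {
		fmt.Println(n.ID) // n2, then n3
	}
}
```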