We typically see new nodes come up in a ready state in about 30-40s. A lot of this depends on which instance type you're launching, what your image does at startup, and what's in your userData. We don't publish public benchmarks on this for exactly that reason -- there's a lot of variation across images and instance types.
It's also worth noting that you can scale pretty damn quickly just by taking bigger steps: if you bump the replica count of a deployment whose pods request 1 CPU / 1Gi RAM up to 100 replicas, you'll get some large nodes deployed. It may take ~60s to get them up and the pods scheduled, but that one step adds a lot of serving capacity.
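To make that step-up concrete, here's a quick back-of-the-envelope sketch (the deployment name and 32-vCPU node size are illustrative; Karpenter chooses the actual instance types):

```shell
# One-step scale-up, e.g.:
#   kubectl scale deployment my-app --replicas=100
# Rough math: how many 32-vCPU nodes cover 100 pending 1-CPU pods?
replicas=100
cpu_per_pod=1
vcpus_per_node=32   # e.g. an 8xlarge-class instance; illustrative only
# Ceiling division: round up so partial nodes still count
nodes=$(( (replicas * cpu_per_pod + vcpus_per_node - 1) / vcpus_per_node ))
echo "nodes needed: $nodes"
```

A handful of large nodes, launched in one batch, is why the ~60s wait buys so much capacity at once.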
We run services that can handle thousands of requests per second per pod, so it's a completely different scaling model from Lambda's one request = one execution.
Hi, any benchmarks on how fast Karpenter scales out compared to AWS Lambda? This will depend on the number of nodes and the node types. Per https://aws.amazon.com/blogs/containers/eliminate-kubernetes-node-scaling-lag-with-pod-priority-and-over-provisioning/, it takes 1-2 minutes to add new nodes. In my own testing, the EC2 API can provision a new running instance in just seconds (the status checks still take a couple of minutes to pass, but Karpenter doesn't wait for those).
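The linked post's approach for hiding that 1-2 minute lag is to reserve headroom with low-priority "pause" pods that real workloads preempt instantly, so a node is already warm when traffic arrives. A minimal sketch (names, replica count, and pod sizes are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -1            # below the default priority (0), so real pods preempt these
globalDefault: false
description: "Placeholder pods that reserve warm node capacity"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2        # how much headroom to keep warm
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "1"       # sized to match your real workload's requests
            memory: 1Gi
```

When a real pod arrives, the scheduler evicts a pause pod immediately; Karpenter then replaces the displaced headroom in the background.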