Switch to C3 machine types for prow build cluster #6525
base: main
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: upodroid. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
We really shouldn't need 2 cores of daemonsets...? We should be able to tighten this up; discarding 20% of the node as overhead is excessive.
Maybe we can import from the existing metrics collection?
According to the calculations here:
That sounds good to me (I have not reviewed the machine types yet). I don't recall a discussion about enabling network policy, and I don't think we need it; 400m CPU is quite a bit to get back for fitting other daemonsets. cc @ameukam
Just checked: k8s-prow-builds in google.com (
One more question: are we OK with nodes having 32 GB of memory instead of 64 GB?
EDIT: jinx. I'm not sure if that's a problem for any of our workloads or not. I think we might have a few that are currently a bit lopsided toward memory over CPU in their requests.
Apply plan:
After lgtm is applied on the PR.
I think we should probably query the jobs to see if any request 30+ GB or a larger memory:core ratio than C3 offers; if so, we can do N2?
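A rough sketch of that check, using made-up job names and resource requests rather than the real prow job configs (C3 standard shapes provide 4 GB of memory per vCPU):

```python
# Sketch: flag prow jobs whose requests won't fit a C3 standard shape.
# The job data below is illustrative; real values come from the job configs.
C3_GB_PER_VCPU = 4  # c3-standard shapes: 4 GB memory per vCPU

jobs = [
    {"name": "pull-example-e2e", "cpu": 7, "memory_gb": 40},
    {"name": "pull-example-unit", "cpu": 4, "memory_gb": 12},
]

def needs_bigger_shape(job):
    """True if the job wants 30+ GB or a higher memory:core ratio than C3."""
    return job["memory_gb"] >= 30 or job["memory_gb"] / job["cpu"] > C3_GB_PER_VCPU

for job in jobs:
    if needs_bigger_shape(job):
        print(job["name"])  # jobs that would push us toward N2 instead
```

Here the first job trips the 30+ GB threshold, while the second fits within the C3 ratio.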
Network Policy was enabled by default at the time the cluster was created. We never took the opportunity to disable it. +1 for disabling it.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Instead of tweaking the prowjobs, let's go with N2 instances (Ice Lake platform), keep the same machine type, and merge that change this weekend.
/cc @ameukam @BenTheElder @xmudrii
We should get better performance out of these instances. I also used a custom type to insert 2 extra cores for all the extra daemonsets we need to run.
C3 instance types are a bit tricky. They don't support custom sizing, so our options are either c3-standard-8-lssd (same problem as today) or c3-standard-22-lssd (no guarantee that a job gets a node to itself).
https://cloud.google.com/compute/docs/general-purpose-machines#disk_and_capacity_limits_2
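As a rough sketch of the custom-shape arithmetic behind "2 extra cores" (the per-job request and daemonset headroom below are assumed numbers, not values taken from the actual prow configuration):

```python
# Sketch of the node-sizing arithmetic: one build job per node plus
# headroom for per-node daemonsets. Numbers are illustrative assumptions.
JOB_VCPUS = 8        # assumed CPU request of a single build job
DAEMONSET_VCPUS = 2  # extra cores reserved for daemonsets
GB_PER_VCPU = 4      # the "standard" memory:vCPU ratio

vcpus = JOB_VCPUS + DAEMONSET_VCPUS
memory_mb = vcpus * GB_PER_VCPU * 1024

# GCE custom machine type names follow <family>-custom-<vCPUs>-<memoryMB>,
# which is the naming scheme a custom N2 shape would use.
machine_type = f"n2-custom-{vcpus}-{memory_mb}"
print(machine_type)
```

With these assumed numbers the daemonset reservation is 2 of 10 cores, i.e. the 20% overhead figure discussed above; C3 cannot express such a shape because it has no custom sizing.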