
[Question] Is there a way to cordon control-plane nodes by default? #18497

Closed
anthony-S93 opened this issue Mar 25, 2024 · 9 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@anthony-S93

If we start a multi-node cluster like so:

$ minikube start -n 5 -p some-cluster

The control-plane node in the cluster will not have SchedulingDisabled status by default. As a result, we have to run kubectl cordon manually to set that status.

Is there a way to enable this by default?
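For reference, the manual workaround I mean is just a cordon on the primary node (a rough sketch; minikube normally names the primary control-plane node after the profile, e.g. some-cluster here, with workers named some-cluster-m02 and so on):

$ kubectl cordon some-cluster      # mark the control-plane node unschedulable
$ kubectl get nodes                # it now reports Ready,SchedulingDisabled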

@Ritikaa96

It is not recommended to schedule pods on the control-plane node. The control-plane node runs pods with critical services, so by default Kubernetes clusters don't schedule workloads on the control-plane node for security reasons.

@Ritikaa96

/kind support

@k8s-ci-robot added the kind/support label on Apr 8, 2024
@anthony-S93
Author

It is not recommended to schedule pods on the control-plane node. The control-plane node runs pods with critical services, so by default Kubernetes clusters don't schedule workloads on the control-plane node for security reasons.

Indeed, which is why I'm asking this question in the first place: I'm asking for a way to have SchedulingDisabled set automatically on control-plane nodes so that workloads will not get scheduled to them.

@Ritikaa96

Oh, I understand. I agree, and the value of the "node-role.kubernetes.io/control-plane=" label is indeed empty.
This is the default behavior of minikube: since minikube is not meant for production-level clusters, it is aimed at learning and dev/test environments.
We can taint the node manually with "NoSchedule", though, if we want to make the control-plane node unschedulable.
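For example (a rough sketch, assuming the some-cluster profile from the original post, whose primary node carries the profile name):

$ kubectl taint nodes some-cluster node-role.kubernetes.io/control-plane=:NoSchedule    # add the taint
$ kubectl taint nodes some-cluster node-role.kubernetes.io/control-plane=:NoSchedule-   # remove it again

This keeps regular workloads off the node, while the control-plane pods themselves keep running.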

@anthony-S93
Author

Oh, I understand. I agree, and the value of the "node-role.kubernetes.io/control-plane=" label is indeed empty. This is the default behavior of minikube: since minikube is not meant for production-level clusters, it is aimed at learning and dev/test environments. We can taint the node manually with "NoSchedule", though, if we want to make the control-plane node unschedulable.

Understood. And thank you for your response.

@afbjorklund
Collaborator

afbjorklund commented Apr 16, 2024

It is still a good feature for multi-node: avoid removing the taint when there is more than one node.
Having to add it back is a workaround, and as mentioned it would be nice if it was automatic...

EDIT: this is supposed to already be the case. Maybe the confusion is "NoSchedule" vs "SchedulingDisabled"?

        // primary control-plane and worker nodes should be untainted by default
        if n.ControlPlane && !config.IsPrimaryControlPlane(cfg, n) {

There are some pods that are scheduled on the control-plane node, though, i.e. the control-plane pods themselves.
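A quick way to see them (again assuming the some-cluster profile):

$ kubectl get pods -n kube-system -o wide    # kube-apiserver, etcd, scheduler, controller-manager pinned to the control-plane node

Those are static pods managed directly by the kubelet, so a cordon or a NoSchedule taint does not remove them.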

@Ritikaa96

@anthony-S93 How about a flag at startup, e.g.:
minikube start -n 4 -p multinode --control-plane-unSchedulable

@Ritikaa96

Even though I understand the reasoning behind the untainted control plane, having the NoSchedule taint present by default would help new developers trying minikube understand how production-level clusters work.

@Rdxv

Rdxv commented May 17, 2024

Yeah, I had this issue in minikube as well. I was trying to learn how hard antiAffinity works with replicated Redis, and after making several other config changes I ended up with one of the replicas scheduled on the control plane, so I had to taint that node.
Good learning material, since I was told it's not supposed to happen in prod clusters.
