add e2e test for restart kubelet #124445
base: master
Conversation
Hi @chengjoey. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
force-pushed fbdbb0c to a63cc95
/ok-to-test
test/e2e/common/node/kubelet.go
Outdated
@@ -260,3 +262,87 @@ var _ = SIGDescribe("Kubelet with pods in a privileged namespace", func() {
	})
	})
})
var _ = SIGDescribe("Kubelet rejects pods that do not satisfy affinity after restart", func() {
Can you move the test here actually? https://github.com/kubernetes/kubernetes/blob/a63cc95cbec32e94054921877f08a75e215cc74d/test/e2e_node/restart_test.go
We have examples of restarting kubelet so I don't think you need changes to storage/utils.
done, thanks @kannon92
force-pushed a63cc95 to daa96e7
force-pushed fe740ab to 0128c2c
/test
/test pull-kubernetes-node-kubelet-serial-crio-cgroupv2
/test pull-kubernetes-node-kubelet-serial-containerd
/test pull-kubernetes-node-kubelet-serial-containerd
1 similar comment
/test pull-kubernetes-node-kubelet-serial-containerd
/test pull-kubernetes-node-kubelet-serial-containerd
Fixed the eviction issues so this should look better now.
passed first run.
/test pull-kubernetes-node-kubelet-serial-containerd
second run passed.
/cc @gjkim42 I see that you are also increasing coverage around restarts.
/test pull-kubernetes-node-kubelet-serial-containerd
2 similar comments
/test pull-kubernetes-node-kubelet-serial-containerd
/test pull-kubernetes-node-kubelet-serial-containerd
It looks good to have an e2e test for this scenario.
/lgtm
/assign @SergeyKanzhelev
I think we can add this e2e test for now and fix it later if we want to fix the behavior. What do you think?
xref: #124586
test/e2e_node/restart_test.go
Outdated
gomega.Eventually(ctx, func() bool {
	pod, err = e2epod.NewPodClient(f).Get(ctx, podName, metav1.GetOptions{})
	framework.ExpectNoError(err)
	// pod it is a final state
nit: pod is in a final state?
done
test/e2e_node/restart_test.go
Outdated
@@ -474,6 +474,73 @@ var _ = SIGDescribe("Restart", framework.WithSerial(), framework.WithSlow(), fra
	return checkMirrorPodDisappear(ctx, f.ClientSet, pod.Name, pod.Namespace)
}, f.Timeouts.PodDelete, f.Timeouts.Poll).Should(gomega.BeNil())
})
// Regression test for an extended scenario for https://issues.k8s.io/123980
ginkgo.It("should kill running pods that were satisfied the affinity before kubelet restart", func(ctx context.Context) {
nit: should evict running pods that do not meet the affinity after the kubelet restart
done
LGTM label has been added. Git tree hash: ae126d8e7a828e63cd0c9fe02c818fdc88e04a20
force-pushed def47cc to 94c98bb
New changes are detected. LGTM label has been removed.
when a node removes the label that satisfies the pod's affinity, the running pods are not affected, but restarting the kubelet will kill these pods. Signed-off-by: joey <zchengjoey@gmail.com>
force-pushed 94c98bb to abf25c0
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: chengjoey. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment
Could you add another e2e test for pod.Spec.NodeSelector, please?
@chengjoey: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test pull-kubernetes-unit
hi @Homura222, the effect of using nodeAffinity and nodeSelector should be the same here. The purpose of this e2e test is just to schedule the pod onto a specific node and then restart the kubelet to exercise the eviction logic. The node-assignment behavior of the two mechanisms themselves does not belong in this test; it only verifies this characteristic of the kubelet.
What type of PR is this?
e2e test for kubelet
/kind flake
What this PR does / why we need it:
when a node removes the label that satisfies the pod's affinity, the running pods are not affected, but restarting the kubelet will kill these pods.
As discussed in the related issue (#123980), no decision has been made on whether to change the kubelet's behavior, but an e2e test can be added now for tracking, to make it easy to verify that any future change meets the expected behavior.
Special notes for your reviewer:
Does this PR introduce a user-facing change?