test/k8s: for services test, wait for all applied manifests to delete #25341
Conversation
/test

Force-pushed 5d799d2 to a269d79
/test

Force-pushed a269d79 to caf857e
/test

Force-pushed caf857e to 8f506a8
/test

Force-pushed 8f506a8 to a06c785
/test
/test
A bunch of failures came from GH outages; I will merge once I'm convinced everything relevant is passing.
/test
In the K8sDatapathConfig tests, "echo-svc" Pods are failing to terminate while the suite waits for all Pods to terminate. These come from a Deployment that shouldn't be applied in this suite. Presumably the failure to delete is caused by the Deployment controller restarting the Pods as they are deleted. To try to fix this, wait for the YAMLs applied in K8sServices to be fully deleted, including finalizers. Addresses: cilium#25255 Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
Force-pushed a06c785 to 3cfe858
/test

Job 'Cilium-PR-K8s-1.26-kernel-net-next' failed.
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/2214/
If it is a flake and a GitHub issue doesn't already exist to track it, comment. Then please upload the Jenkins artifacts to that issue.
Go unit tests hit #25353; there's a PR open to fix it.
/test-1.16-4.19

/test-1.26-net-next
In the K8sDatapathConfig tests, "echo-svc" Pods are failing to terminate while the suite waits for all Pods to terminate. These come from a deployment that doesn't appear to be used in this suite. Presumably the failure to delete is caused by the Deployment controller restarting the Pods as they are deleted.
To try to fix this, wait for the YAMLs applied in K8sServices to be fully deleted, including finalizers.
Addresses: #25255