v1.13 backports 2023-04-28 #25184
Conversation
[ upstream commit 71af0a2 ] There are conditions in which a CEP can change its CES. This leads to the CEP being present in multiple CESs for some time. In such cases the standard logic may not work, as it always expects a single representation of each CEP. This commit changes the logic to handle multiple CEPs properly.

Signed-off-by: Alan Kutniewski <kutniewski@google.com>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
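For illustration, here is a minimal Go sketch of the underlying idea: tracking each CEP against a set of CESs rather than a single one, so that a transient move between slices doesn't drop a reference. All names here (`cepToCESMap`, `insert`, `remove`) are hypothetical and not the backported code.

```go
package main

import "fmt"

// cepToCESMap tracks which CiliumEndpointSlices (CES) currently contain a
// given CiliumEndpoint (CEP). While a CEP moves between slices, both the old
// and the new CES may list it for a short time, so a single-value map would
// silently drop one of the two references. (Hypothetical sketch only.)
type cepToCESMap struct {
	// cepMap maps a CEP name to the set of CES names that list it.
	cepMap map[string]map[string]struct{}
}

func newCEPToCESMap() *cepToCESMap {
	return &cepToCESMap{cepMap: make(map[string]map[string]struct{})}
}

// insert records that cesName currently contains cepName.
func (m *cepToCESMap) insert(cepName, cesName string) {
	if m.cepMap[cepName] == nil {
		m.cepMap[cepName] = make(map[string]struct{})
	}
	m.cepMap[cepName][cesName] = struct{}{}
}

// remove drops the cepName -> cesName association; the CEP itself is only
// forgotten once no CES references it anymore.
func (m *cepToCESMap) remove(cepName, cesName string) {
	if set, ok := m.cepMap[cepName]; ok {
		delete(set, cesName)
		if len(set) == 0 {
			delete(m.cepMap, cepName)
		}
	}
}

func main() {
	m := newCEPToCESMap()
	m.insert("pod-a", "ces-1")
	m.insert("pod-a", "ces-2") // transient: pod-a moved, ces-1 not yet updated
	m.remove("pod-a", "ces-1") // pod-a remains known via ces-2
	fmt.Println(m.cepMap)      // map[pod-a:map[ces-2:{}]]
}
```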
[ upstream commit 0691784 ] The 'cilium cleanup' command wasn't actually removing XDP programs attached to the generic hook point. This was causing issues when an XDP test in ginkgo was immediately followed by the upgrade test. The upgrade test would run 'cilium cleanup' and remove everything except for the XDP programs. That would isolate the Cilium-managed nodes from the outside network (both SSH and intra-cluster connectivity). This isn't an issue when we change the Cilium configuration to disable XDP, because there we have some logic to remove any leftover XDP programs on agent startup. Comparing that logic with the 'cilium cleanup' logic, we can see that the difference is that the agent removes XDP programs explicitly for each hook point (generic and driver). Let's do the same. This fix was tested manually on a setup with Cilium's generic XDP programs installed.

Fixes: 6ed1fe5 ("cilium: Remove attached bpf_xdp upon "cilium cleanup"")

Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
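A minimal sketch of the idea, assuming the `github.com/vishvananda/netlink` package and an illustrative device name; this is not the actual 'cilium cleanup' implementation. The point is that detaching must be attempted once per attach mode (driver and generic), since removing only one mode's program leaves the other in place.

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netlink/nl"
)

// removeXDPPrograms detaches any XDP program from the device, once per
// attach mode. Passing fd == -1 to LinkSetXdpFdWithFlags detaches whatever
// is attached under the given flags; trying only one mode would leave a
// program attached in the other mode, which is the bug described above.
// (Illustrative sketch, not the backported code.)
func removeXDPPrograms(ifName string) error {
	link, err := netlink.LinkByName(ifName)
	if err != nil {
		return fmt.Errorf("lookup %s: %w", ifName, err)
	}
	for _, flag := range []int{nl.XDP_FLAGS_DRV_MODE, nl.XDP_FLAGS_SKB_MODE} {
		if err := netlink.LinkSetXdpFdWithFlags(link, -1, flag); err != nil {
			return fmt.Errorf("detach xdp (flags %#x) from %s: %w", flag, ifName, err)
		}
	}
	return nil
}

func main() {
	// "eth0" is an illustrative device name, not taken from the PR.
	if err := removeXDPPrograms("eth0"); err != nil {
		fmt.Println(err)
	}
}
```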
[ upstream commit f171378 ] We have one service test running with a host policy to ensure the host firewall doesn't break any service handling. The host policy we use for that test, however, doesn't account for health and probe traffic. As a result, some pods (cilium-monitor and kube-dns) fail to become Ready. That isn't really causing issues for this specific test, because we don't rely on those pods, but it's still better to fix it to avoid any red herrings in later troubleshooting.

Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
[ upstream commit c484692 ] The main bug affecting this test case was an IPv6 host firewall bug, which was fixed in previous commits. Other issues also affected the overall test health and were fixed in previous commits. Time to try and unquarantine!

Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
/test-backport-1.13

Job 'Cilium-PR-K8s-1.24-kernel-4.19' failed:
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.19/25/
If it is a flake and a GitHub issue doesn't already exist to track it, comment … then please upload the Jenkins artifacts to that issue.

Job 'Cilium-PR-K8s-1.23-kernel-4.19' failed:
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19/751/
If it is a flake and a GitHub issue doesn't already exist to track it, comment … then please upload the Jenkins artifacts to that issue.
My PRs look good. Thanks!
/test-1.24-5.4

/test-1.23-4.19

/test-1.24-4.19
@sayboras Lots of CI job restarts. Did you try and triage the failures?
Yes, the below is in my notes; I forgot to copy and paste it here.

Test Summary:
Once this PR is merged, you can update the PR labels via: