v1.13 Backports 2023-05-22 #25588
Conversation
Force-pushed from a74a66d to d035452
/test-backport-1.13
I've checked the first three backports and they look good. Thanks a lot Tobias!
[ upstream commit 439a0a0 ] First see the code comments for the full explanation. This issue with the faulty conntrack entries when enforcing host policies is suspected to cause the flakes that have been polluting host firewall tests. We've seen this faulty conntrack issue happen mostly to health and kube-apiserver connections. And it turns out that the host firewall flakes look like they are caused by connectivity blips on kube-apiserver's side, with error messages such as:

error: unable to upgrade connection: Authorization error (user=kube-apiserver-kubelet-client, verb=create, resource=nodes, subresource=proxy)

This commit therefore tries to work around the issue of faulty conntrack entries in host firewall tests. If the flakes are indeed caused by those faulty entries, we shouldn't see them happen anymore.

Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
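As a rough illustration of the shape of such a workaround (this is not the actual Cilium test helper; the kubectl invocation and the policy file name are assumptions), the idea is to briefly enforce a harmless host policy before the real tests run, so conntrack entries for long-lived connections are re-established while enforcement is active:

```go
// Hypothetical sketch only: apply and remove a dummy host policy so
// that conntrack entries for long-lived connections (health checks,
// kube-apiserver) are recreated under enforcement, instead of the
// tests tripping over stale, faulty entries created beforehand.
package hosttest

import (
	"fmt"
	"os/exec"
)

// prepareHostPolicyEnforcement applies and then deletes a dummy
// policy. File name and kubectl usage are illustrative assumptions.
func prepareHostPolicyEnforcement() error {
	apply := exec.Command("kubectl", "apply", "-f", "dummy-host-policy.yaml")
	if out, err := apply.CombinedOutput(); err != nil {
		return fmt.Errorf("applying dummy host policy: %v: %s", err, out)
	}
	// While the policy is enforced, host-traffic conntrack entries are
	// recreated by the datapath with the correct state.
	del := exec.Command("kubectl", "delete", "-f", "dummy-host-policy.yaml")
	if out, err := del.CombinedOutput(); err != nil {
		return fmt.Errorf("deleting dummy host policy: %v: %s", err, out)
	}
	return nil
}
```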
[ upstream commit 0cfce97 ] Change 439a0a0 introduced a workaround for a common flake we've been seeing related to issue #15455. Any test enabling the host firewall / host policies may suffer from the same issue. Addresses: #25411

Signed-off-by: Tom Hadlaw <tom.hadlaw@isovalent.com>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
[ upstream commit b2de07a ] This is largely possible because a macro-defined func is reused in several places, and send_drop_notify is assumed to be deferred up the chain - but not all parents would actually invoke it, and by definition it seems clearer/less error-prone to always explicitly issue the drop notification at the source, where the drop is decided. This is a smidge more verbose, but it avoids the problem of bad assumptions or hard-to-catch mistakes causing missing drop notifications.

Signed-off-by: Benjamin Leggett <benjamin.leggett@solo.io>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
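The change itself is in the BPF C datapath (around send_drop_notify), but the pattern is language-neutral. Here is a before/after sketch in Go, with every name invented for illustration, of what "issue the drop notification at the source" means:

```go
// Illustrative analogue only; the real code is BPF C macros.
package datapath

type Verdict int

const (
	VerdictForward Verdict = iota
	VerdictDrop
)

type Packet struct{ forbidden bool }

// Before: the callee returns a drop verdict and assumes some caller up
// the chain will emit the drop notification. If one caller forgets,
// the drop happens silently and is invisible to monitoring.
func validateBefore(pkt Packet) Verdict {
	if pkt.forbidden {
		return VerdictDrop // notification deferred to the caller, maybe
	}
	return VerdictForward
}

// After: the notification is issued explicitly at the point where the
// drop is decided, so no caller can accidentally skip it.
func validateAfter(pkt Packet, notifyDrop func(Packet)) Verdict {
	if pkt.forbidden {
		notifyDrop(pkt) // explicit, at the source of the decision
		return VerdictDrop
	}
	return VerdictForward
}
```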
[ upstream commit 76eca78 ] Currently we discard CiliumNode updates based on DeepEqual and labels. However, DeepEqual is set to ignore annotations, and the wg-pub-key annotation is used to exchange rotated WireGuard keys. As a result, the WireGuard tunnel breaks whenever a node restarts: the restarted node's public key is refreshed but not propagated to other nodes, because Cilium drops CiliumNode update events when the spec, status, and labels of the old and new node are the same, while WireGuard public key updates are transmitted through annotations.

Signed-off-by: Lin Dong <lindongchn@gmail.com>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
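A minimal sketch of the filtering fix, assuming the annotation key and all type/function names below (the real logic lives in the agent's node handling code and operates on the actual CiliumNode type):

```go
// Sketch: don't discard a CiliumNode update purely on
// spec/status/labels equality; a rotated WireGuard key only shows up
// in an annotation, so annotations must be considered too.
package nodemanager

// Assumed annotation key, matching the wg-pub-key annotation the
// commit message refers to.
const wgPubKeyAnnotation = "network.cilium.io/wg-pub-key"

type ciliumNode struct {
	Spec, Status string // stand-ins for the real deep-compared fields
	Labels       map[string]string
	Annotations  map[string]string
}

// nodeUpdateRelevant reports whether an update event should be
// processed rather than dropped.
func nodeUpdateRelevant(oldNode, newNode *ciliumNode) bool {
	if oldNode.Spec != newNode.Spec || oldNode.Status != newNode.Status {
		return true
	}
	if !mapsEqual(oldNode.Labels, newNode.Labels) {
		return true
	}
	// The fix: a rotated WireGuard public key is only visible here.
	return oldNode.Annotations[wgPubKeyAnnotation] !=
		newNode.Annotations[wgPubKeyAnnotation]
}

func mapsEqual(a, b map[string]string) bool {
	if len(a) != len(b) {
		return false
	}
	for k, v := range a {
		if b[k] != v {
			return false
		}
	}
	return true
}
```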
Force-pushed from d035452 to 3f20a57
/test-backport-1.13

Job 'Cilium-PR-K8s-1.26-kernel-net-next' failed three times:

- https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/141/
- https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/170/
- https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/213/

If it is a flake and a GitHub issue doesn't already exist to track it, comment, then please upload the Jenkins artifacts to that issue.
/test-1.26-net-next
prepareHostPolicyEnforcement(kubectl) got changed in main vs. v1.13. Please check whether I've resolved it correctly.

void *map is not present in the v1.13 branch. Please check whether I've resolved it correctly.

Once this PR is merged, you can update the PR labels via:

or with