
v1.13 backports 2023-04-28 #25184

Merged: @pchaigno merged 4 commits into cilium:v1.13 from pr/v1.13-backport-2023-04-28 on May 1, 2023

Conversation

@sayboras (Member) commented Apr 28, 2023

Once this PR is merged, you can update the PR labels via:

$ for pr in 24838 25117 25025; do contrib/backporting/set-labels.py $pr done 1.13; done

alan-kut and others added 4 commits April 28, 2023 17:19
[ upstream commit 71af0a2 ]

There are conditions in which a CEP changes its CES. This leads to the
CEP being present in multiple CESes for some time. In such cases, the
standard logic may not work, as it always expects a single CEP
representation.

This commit changes the logic to handle multiple CEPs properly.

Signed-off-by: Alan Kutniewski <kutniewski@google.com>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
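As an illustration of the condition above, a CEP held by more than one CES can be spotted on a live cluster with a query along these lines. This is a sketch that assumes the v2alpha1 CiliumEndpointSlice schema, where each slice carries a top-level endpoints list with one name per CEP:

$ kubectl get ciliumendpointslices -o json \
    | jq -r '.items[].endpoints[].name' | sort | uniq -d
# Any name printed appears in more than one CES, i.e. the transient
# state this commit teaches the logic to handle.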
[ upstream commit 0691784 ]

The 'cilium cleanup' command wasn't actually removing XDP programs
attached to the generic hook point. This was causing issues when an XDP
test in ginkgo was immediately followed by the upgrade test. The upgrade
test would run 'cilium cleanup' and remove everything except the XDP
programs. That would isolate the Cilium-managed nodes from the outside
network (both SSH and intra-cluster connectivity).

Conversely, this works when we change the Cilium configuration to
disable XDP: there, we have logic to remove any leftover XDP programs on
agent startup. Comparing that logic with the 'cilium cleanup' logic, we
can see that the difference is that the agent removes XDP programs
explicitly for each hook point (generic and driver). Let's do the same.

This fix was tested manually on a setup with Cilium's generic XDP
programs installed.

Fixes: 6ed1fe5 ("cilium: Remove attached bpf_xdp upon "cilium cleanup"")
Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
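For reference, the explicit per-hook removal described above can be mimicked from a shell with iproute2; eth0 here is a placeholder for the actual native device:

# Show whether an XDP program is attached, and at which hook:
$ ip -d link show dev eth0 | grep -i xdp

# Detach explicitly at each hook point, generic and driver, which is
# what the agent (and now 'cilium cleanup') does:
$ ip link set dev eth0 xdpgeneric off
$ ip link set dev eth0 xdpdrv off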
[ upstream commit f171378 ]

We have one service test running with a host policy to ensure the host
firewall doesn't break any service handling. The host policy we use for
that test, however, doesn't account for health and probe traffic.

As a result, some pods (cilium-monitor and kube-dns) fail to get Ready.
That isn't really causing issues for this specific test, because we
don't rely on those pods, but it's still better to fix it to avoid red
herrings in later troubleshooting.

Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
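After such a policy fix, the readiness of the affected pods can be verified along these lines; the kube-dns label selector and cilium-health command are standard, but the agent pod name is a placeholder:

# kube-dns readiness:
$ kubectl -n kube-system get pods -l k8s-app=kube-dns

# Health/probe connectivity as seen from a Cilium agent pod:
$ kubectl -n kube-system exec <cilium-agent-pod> -- cilium-health status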
[ upstream commit c484692 ]

The main bug affecting this test case was an IPv6 host firewall bug
which was fixed in previous commits. Other issues also affected the
overall test healthiness and were fixed in previous commits. Time to try
and unquarantine!

Signed-off-by: Paul Chaignon <paul@cilium.io>
Signed-off-by: Tam Mach <tam.mach@cilium.io>
@sayboras sayboras requested a review from a team as a code owner April 28, 2023 07:29
@sayboras sayboras added kind/backports This PR provides functionality previously merged into master. backport/1.13 This PR represents a backport for Cilium 1.13.x of a PR that was merged to main. labels Apr 28, 2023
@sayboras sayboras requested a review from pchaigno April 28, 2023 07:29
@sayboras (Member, Author) commented Apr 28, 2023

/test-backport-1.13

Job 'Cilium-PR-K8s-1.24-kernel-4.19' failed:

Test Name: K8sDatapathConfig Host firewall With native routing

Failure Output: FAIL: Failed to reach 192.168.56.12:80 from testclient-8t7rp

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.19/25/

If it is a flake and a GitHub issue doesn't already exist to track it, comment /mlh new-flake Cilium-PR-K8s-1.24-kernel-4.19 so I can create one.

Then please upload the Jenkins artifacts to that issue.

Job 'Cilium-PR-K8s-1.23-kernel-4.19' failed:

Test Name: K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) with L7 policy Tests NodePort with L7 Policy

Failure Output: FAIL: Request from testclient-5nm84 pod to service tftp://[fd04::12]:31865/hello failed

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.23-kernel-4.19/751/

If it is a flake and a GitHub issue doesn't already exist to track it, comment /mlh new-flake Cilium-PR-K8s-1.23-kernel-4.19 so I can create one.

Then please upload the Jenkins artifacts to that issue.

@pchaigno (Member) left a comment:

My PRs look good. Thanks!

@sayboras (Member, Author) commented:

/test-1.24-5.4

@sayboras (Member, Author) commented:

/test-1.23-4.19

@sayboras (Member, Author) commented:

/test-1.24-4.19

@maintainer-s-little-helper maintainer-s-little-helper bot added the ready-to-merge This PR has passed all tests and received consensus from code owners to merge. label Apr 28, 2023
@pchaigno (Member) commented May 1, 2023

@sayboras Lots of CI job restarts. Did you try and triage the failures?

@sayboras (Member, Author) commented May 1, 2023

> Lots of CI job restarts. Did you try and triage the failures?

Yes, the summary below is in my notes; I forgot to copy and paste it here.

Test Summary

@pchaigno pchaigno merged commit e206835 into cilium:v1.13 May 1, 2023
62 checks passed
@sayboras sayboras deleted the pr/v1.13-backport-2023-04-28 branch May 1, 2023 09:54