bpf: l3: limit kube-proxy workaround in l3_local_delivery() to bpf_overlay #27908
Merged: youngnick merged 1 commit into cilium:main from julianwiedmann:1.15-bpf-local-delivery-overlay on Sep 5, 2023.
Conversation
bpf: l3: limit kube-proxy workaround in l3_local_delivery() to bpf_overlay

cilium#22333 fixed a bug for configs with tunnel-routing and per-EP routes. Here ingress policy was applied twice: first via tail-call, and then a second time by the to-container program as the packet traverses the veth pair.

The fix was to avoid the tail-call and only apply policy with the to-container program. But the tail-call also contains a kube-proxy workaround (potential service replies need to pass through kube-proxy for RevDNAT, so the tail-call punts them to the stack instead of calling redirect_ep() to forward them straight to the endpoint). So we copied that workaround into the l3_local_delivery() path.

The tail-call is compiled as part of bpf_lxc, and thus couldn't easily tell whether a packet was received from the tunnel. But as l3_local_delivery() is inlined into bpf_overlay, we can now limit the workaround to IS_BPF_OVERLAY. This ensures that the workaround is not applied to e.g. plain pod-to-pod traffic, where bpf_lxc also calls l3_local_delivery().

Fixes: 3d2ceaf ("bpf: Preserve overlay->lxc path with kube-proxy")
Signed-off-by: Julian Wiedmann <jwi@isovalent.com>
julianwiedmann added the kind/bug (This is a bug in the Cilium logic.), sig/datapath (Impacts bpf/ or low-level forwarding details, including map management and monitor messages.), release-note/bug (This PR fixes an issue in a previous release of Cilium.), area/kube-proxy (Issues related to kube-proxy (not the kube-proxy-free mode).), and needs-backport/1.14 (This PR / issue needs backporting to the v1.14 branch) labels on Sep 4, 2023.
julianwiedmann changed the title from "bpf: l3: limit kube-proxy workaround in l3_local_delivery() to bpf_ov…" to "bpf: l3: limit kube-proxy workaround in l3_local_delivery() to bpf_overlay" on Sep 4, 2023.
/test
markpash approved these changes on Sep 4, 2023.
maintainer-s-little-helper (bot) added the ready-to-merge label (This PR has passed all tests and received consensus from code owners to merge.) on Sep 4, 2023.
gandro added the backport-pending/1.14 label (The backport for Cilium 1.14.x for this PR is in progress.) and removed the needs-backport/1.14 label (This PR / issue needs backporting to the v1.14 branch) on Sep 12, 2023.
gandro added the backport-done/1.14 label (The backport for Cilium 1.14.x for this PR is done.) and removed the backport-pending/1.14 label (The backport for Cilium 1.14.x for this PR is in progress.) on Sep 25, 2023.
jrajahalme moved this from "Needs backport from main" to "Backport done to v1.14" in 1.14.3 on Oct 18, 2023.
julianwiedmann added the needs-backport/1.13 label (This PR / issue needs backporting to the v1.13 branch) on Jan 3, 2024.
jibi added the backport-pending/1.13 label (The backport for Cilium 1.13.x for this PR is in progress.) and removed the needs-backport/1.13 label (This PR / issue needs backporting to the v1.13 branch) on Jan 10, 2024.
github-actions (bot) removed the backport-pending/1.13 label (The backport for Cilium 1.13.x for this PR is in progress.) on Jan 15, 2024.
github-actions (bot) added the backport-done/1.13 label (The backport for Cilium 1.13.x for this PR is done.) on Jan 15, 2024.
Labels

- area/kube-proxy: Issues related to kube-proxy (not the kube-proxy-free mode).
- backport-done/1.13: The backport for Cilium 1.13.x for this PR is done.
- backport-done/1.14: The backport for Cilium 1.14.x for this PR is done.
- kind/bug: This is a bug in the Cilium logic.
- ready-to-merge: This PR has passed all tests and received consensus from code owners to merge.
- release-note/bug: This PR fixes an issue in a previous release of Cilium.
- sig/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.