v1.13 Backports 2023-07-24 #27036
Conversation
[ upstream commit a58cb6a ] The 'Skip conntrack for pod traffic' test currently downloads the conntrack package at runtime to be able to flush and list Linux's conntrack entries. This sometimes fails because of connectivity issues to the package repositories. Instead, we've now included the conntrack package in the log-gatherer image. We can use those pods to run conntrack commands instead of using the Cilium agent pods. Fixes: 496ce42 ("iptables: add support for NOTRACK rules for pod-to-pod traffic") Signed-off-by: Paul Chaignon <paul@cilium.io> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
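For context, a rough Go sketch of how a test can run conntrack through a log-gatherer pod instead of installing the package at runtime; the namespace, label selector, and helper signatures below are assumptions in the style of Cilium's test helpers, not the exact test code:

```go
// List the log-gatherer pods (label selector is illustrative).
pods, err := kubectl.GetPodNames("kube-system", "k8s-app=cilium-test-logs")
Expect(err).To(BeNil(), "unable to list log-gatherer pods")

// Flush all conntrack entries, then list what remains, from inside a
// log-gatherer pod that already ships the conntrack package.
res := kubectl.ExecPodCmd("kube-system", pods[0], "conntrack -F")
res.ExpectSuccess("failed to flush conntrack entries")
res = kubectl.ExecPodCmd("kube-system", pods[0], "conntrack -L")
res.ExpectSuccess("failed to list conntrack entries")
```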
[ upstream commit 67a3ab3 ] If we check res.WasSuccessful() instead of res, then ginkgo won't print the error message in case the command wasn't successful. Signed-off-by: Paul Chaignon <paul@cilium.io> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
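As a minimal before/after of that assertion pattern (assuming Cilium's `CmdRes` helper and its `CMDSuccess()` gomega matcher):

```go
res := kubectl.ExecPodCmd(namespace, pod, "conntrack -L")

// Before: only a boolean reaches the matcher, so a failure prints little
// more than "expected true, got false".
Expect(res.WasSuccessful()).To(BeTrue())

// After: passing the CmdRes itself lets the matcher include the command's
// stdout/stderr in the failure message.
Expect(res).To(helpers.CMDSuccess(), "conntrack -L failed")
```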
[ upstream commit 3ba76e5 ] Recent bugs with IPsec have highlighted a need to document several caveats of IPsec operations. This commit documents those caveats as well as common XFRM errors. Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
[ upstream commit 0297c6c ] Azure does not allow having multiple clusters with the same name in the same subscription, even if they are hosted in different locations. To avoid name conflicts, we previously added the location name to the cluster name; however, in some cases this led to cluster names exceeding the maximum length. As a quick fix, we replace the location name with a simple index. Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
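A toy sketch of the renaming scheme (function and constant names here are made up for illustration):

```go
package main

import "fmt"

// clusterName appends a small per-subscription index instead of the
// (potentially long) Azure location name, keeping names unique and short.
func clusterName(base string, index int) string {
	return fmt.Sprintf("%s-%d", base, index)
}

func main() {
	// e.g. "cilium-ci-2" rather than "cilium-ci-westeurope"
	fmt.Println(clusterName("cilium-ci", 2))
}
```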
[ upstream commit 4e9bbcd ] [ Backporter's note: minor conflicts due to 5da5882, introducing `UNENCRYPTED_TRAFFIC` to the flow proto definition, not being present on v1.13. Regenerated `flow.pb.go` with `make proto` after resolving. ] Make the TTL drops from ipv4_l3() more visible. Signed-off-by: Julian Wiedmann <jwi@isovalent.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
[ upstream commit d29f101 ] [ Backporter's notes: conflicts due to c49ef45 not having been backported to v1.13. ] The metric is named "cilium_services_events_total", yet the variable is called ServicesCount. The typical code pattern is to name the variable after a substring of the metric name for ease of grepping. This commit does so and is a non-functional change. Signed-off-by: Chris Tarazi <chris@isovalent.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
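To illustrate the convention (a sketch, not Cilium's actual metric definition):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// ServicesEventsCount is named after the metric's "services_events_total"
// suffix, so grepping for the metric name also finds the backing variable.
var ServicesEventsCount = prometheus.NewCounterVec(prometheus.CounterOpts{
	Namespace: "cilium",
	Name:      "services_events_total",
	Help:      "Number of services events labeled by action type",
}, []string{"action"})
```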
[ upstream commit b86dab8 ] When a metric variable is defined as a global variable (within the `var` scope at the package level), it will be instantiated as a NoOp metric. Once the metrics package is initialized, all the metric variables transition from NoOp metrics to a real metric type. This problem occurred because the global variables were instantiated before the metrics package was initialized. This commit fixes it by using the metrics variable after the metrics package has been initialized; we can assume it has been initialized by the time production ("live") code executes. Fixes: #26511 Fixes: 978b27c ("Metrics: Add services metrics") Signed-off-by: Chris Tarazi <chris@isovalent.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
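A self-contained toy that reproduces the ordering pitfall, with strings standing in for NoOp vs. real metrics:

```go
package main

import "fmt"

// servicesEventsCount stands in for a metric that begins life as a NoOp
// placeholder and is swapped for a real collector during initialization.
var servicesEventsCount = "noop"

// capturedTooEarly demonstrates the bug: package-level initializers run
// before initMetrics, so this copy keeps the placeholder forever.
var capturedTooEarly = servicesEventsCount

func initMetrics() { servicesEventsCount = "real" }

func main() {
	initMetrics()
	fmt.Println(capturedTooEarly)    // "noop": resolved too early
	fmt.Println(servicesEventsCount) // "real": resolved at call time, after init
}
```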
[ upstream commit 12fc68a ] - For the main branch latest docs, clone the Cilium GitHub repo and use "--chart-directory ./install/kubernetes/cilium" flag. - For stable branches, set "--version" flag to the version in the top-level VERSION file. Fixes: #26931 Signed-off-by: Michi Mutsuzaki <michi@isovalent.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
[ upstream commit c9983ef ] All IPsec traffic between two nodes is always sent over a single IPsec flow (defined by outer source and destination IP addresses). As a consequence, RSS on such traffic is ineffective and throughput will be limited to the decryption performance of a single core. Reported-by: Ryan Drew <ryan.drew@isovalent.com> Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
Force-pushed from cabc86d to d2e877a
💕
@julianwiedmann @giorio94 Please see backporter's notes for your commits, had conflicts to resolve.
[ upstream commit fe4dda7 ] [ Backporter's notes: small conflict due to de00caa not having been backported to v1.13, resulting in `svc.Cluster` not being available. ]

Cilium already implements a restore path to prevent dropping existing connections on agent restart. Yet, there's currently an issue which causes the removal of valid backends from a service when receiving incomplete service updates, either because backends are spread across multiple endpointslices or because some belong to remote clusters. Indeed, all previously known backends get replaced with the ones we just heard about (and present as part of the service cache event), possibly causing connectivity disruptions. The same issue can also occur in the case of dual-stack services, as they trigger the generation of two different endpointslices, one for each family.

More specifically, let's consider the case in which a given service *foo* is associated with the *foo-1* and *foo-2* epslices, each containing a set of backends. Upon restart, the Cilium agent restores services and backends from the BPF map. Then it starts the service and epslice informers, both of which propagate the received events to the service cache. Let's say that we first receive the event about the service: at this point the service is considered not ready (as we have not yet seen any epslice), and nothing gets propagated. Then, we receive the event for the *foo-1* epslice; the service cache processes it and, given that the service now has backends, propagates an event down to the service subsystem for the service *foo*, including all backends that are part of *foo-1* (i.e., the ones known at the moment). At this point, all previously known backends get replaced by the new ones in the BPF maps, breaking the connections targeting the backends that were part of the *foo-2* epslice. Once an event for that epslice is also seen, the backends will be merged and restored. The clustermesh case is similar, because it triggers the same behavior as if we had a different epslice for each remote cluster.

Let's prevent this behavior by keeping a list of restored backends for each service, and continuing to merge them with the ones we receive updates for, until the bootstrap phase completes. After synchronization, an update is triggered for each service still associated with stale backends, so that they can be removed.

Fixes: #23823 Fixes: #26944 Signed-off-by: Marco Iorio <marco.iorio@isovalent.com> Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
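A condensed Go sketch of the merge-until-synced idea (types and names are illustrative, not Cilium's actual service cache code):

```go
package servicecache

// Backend is a trimmed stand-in for a service backend.
type Backend struct{ Addr string }

type cache struct {
	synced   bool                 // set once the bootstrap phase completes
	restored map[string][]Backend // backends restored from the BPF maps, per service
}

// mergeBackends combines a possibly partial update (one endpointslice, or one
// remote cluster) with the restored backends, so that backends from slices we
// have not re-learned yet are not dropped before synchronization completes.
func (c *cache) mergeBackends(svc string, received []Backend) []Backend {
	if c.synced {
		// After sync, updates are authoritative; stale restored entries
		// are removed via the per-service update mentioned above.
		return received
	}
	seen := make(map[string]bool, len(received))
	for _, b := range received {
		seen[b.Addr] = true
	}
	merged := received
	for _, b := range c.restored[svc] {
		if !seen[b.Addr] {
			merged = append(merged, b)
		}
	}
	return merged
}
```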
Force-pushed from d2e877a to 77c5e92
/test-backport-1.13
My changes look good. Thanks Nicolas!
My commit looks good. Thanks!
/test-1.23-4.19
/test-runtime
All testing has passed and reviews are in, marking ready to merge.
PRs skipped due to conflicts:
- Skip conntrack test #25038 (@pchaigno)
Once this PR is merged, you can update the PR labels via: