
CI: K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully #17353

Closed
maintainer-s-little-helper bot opened this issue Sep 9, 2021 · 15 comments · Fixed by #23228
Labels
area/CI (Continuous Integration testing issue or flake) · ci/flake (This is a known failure that occurs in the tree. Please investigate me!) · feature/ipv6 (Relates to IPv6 protocol support) · sig/datapath (Impacts bpf/ or low-level forwarding details, including map management and monitor messages.)

Comments

@maintainer-s-little-helper

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-8rmwm" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Pod "testclient-8rmwm" can not connect to "http://google.com"
Expected command: kubectl exec -n 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-8rmwm -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.002990()', Connect: '0.000000',Transfer '0.000000', total '2.229656'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:276
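For triage purposes: curl exit code 7 is CURLE_COULDNT_CONNECT ("Failed to connect to host"). The timing line above shows DNS resolving in about 3 ms, but the empty %{remote_ip} and zero connect time mean the pod never established a TCP connection to google.com. A rough manual reproduction of the probe (pod and namespace names are taken from this report; this is a sketch, not the exact harness invocation) is:

	kubectl exec -n 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-8rmwm -- curl -v --connect-timeout 5 --max-time 20 http://google.com

An exit code of 7 from that command reproduces the failure mode seen here.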

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-c5t2t cilium-wk9qf]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
coredns-867bf6789f-ph95s               
test-k8s2-79ff876c9d-txkwv             
testclient-8rmwm                       
testclient-xmztn                       
testds-84x6d                           
testds-hfgmz                           
Cilium agent 'cilium-c5t2t': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0
Cilium agent 'cilium-wk9qf': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0


Standard Error

13:51:29 STEP: Installing Cilium
13:51:31 STEP: Waiting for Cilium to become ready
13:52:18 STEP: Validating if Kubernetes DNS is deployed
13:52:18 STEP: Checking if deployment is ready
13:52:18 STEP: Checking if pods have identity
13:52:18 STEP: Checking if kube-dns service is plumbed correctly
13:52:18 STEP: Checking if DNS can resolve
13:52:19 STEP: Kubernetes DNS is not ready: %!s(<nil>)
13:52:19 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:52:37 STEP: Waiting for Kubernetes DNS to become operational
13:52:37 STEP: Checking if deployment is ready
13:52:37 STEP: Checking if kube-dns service is plumbed correctly
13:52:37 STEP: Checking if DNS can resolve
13:52:37 STEP: Checking if pods have identity
13:52:40 STEP: Validating Cilium Installation
13:52:40 STEP: Performing Cilium controllers preflight check
13:52:40 STEP: Performing Cilium status preflight check
13:52:40 STEP: Performing Cilium health check
13:52:42 STEP: Performing Cilium service preflight check
13:52:42 STEP: Performing K8s service preflight check
13:52:43 STEP: Waiting for cilium-operator to be ready
13:52:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:52:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:52:43 STEP: Making sure all endpoints are in ready state
13:52:44 STEP: Creating namespace 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera
13:52:44 STEP: Deploying demo_ds.yaml in namespace 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera
13:52:45 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
13:52:49 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
13:52:49 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
13:52:58 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
13:52:58 STEP: Checking pod connectivity between nodes
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
13:52:58 STEP: WaitforPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
13:53:03 STEP: Test iptables masquerading
13:53:03 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
13:53:03 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
13:53:03 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
13:53:03 STEP: WaitforNPods(namespace="202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
13:53:03 STEP: Making ten curl requests from "testclient-8rmwm" to "http://google.com"
FAIL: Pod "testclient-8rmwm" can not connect to "http://google.com"
Expected command: kubectl exec -n 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-8rmwm -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.002990()', Connect: '0.000000',Transfer '0.000000', total '2.229656'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2021-09-09T13:53:09Z====
13:53:09 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
13:53:10 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-txkwv        2/2     Running   0          29s    10.0.1.35       k8s2   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-8rmwm                  1/1     Running   0          29s    10.0.1.26       k8s2   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-xmztn                  1/1     Running   0          29s    10.0.0.19       k8s1   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-84x6d                      2/2     Running   0          29s    10.0.1.131      k8s2   <none>           <none>
	 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-hfgmz                      2/2     Running   0          29s    10.0.0.2        k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-pg698           0/1     Running   0          19m    10.0.1.11       k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-nk54t       1/1     Running   0          19m    10.0.1.27       k8s1   <none>           <none>
	 kube-system                                                       cilium-c5t2t                      1/1     Running   0          102s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-fb784899f-498lk   1/1     Running   0          102s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-fb784899f-bgldg   1/1     Running   0          102s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-wk9qf                      1/1     Running   0          102s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-867bf6789f-ph95s          1/1     Running   0          53s    10.0.1.20       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-bnhzj                  1/1     Running   0          20m    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-hfbhr                  1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running   0          22m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-wjfvl                1/1     Running   0          20m    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-xk7p6                1/1     Running   0          20m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-nrhvk              1/1     Running   0          20m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-xwnlk              1/1     Running   0          20m    192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-c5t2t cilium-wk9qf]
cmd: kubectl exec -n kube-system cilium-c5t2t -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-c762736)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      37/37 healthy
	 Proxy Status:           OK, ip 10.0.1.229, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 854/4095 (20.85%), Flows/s: 7.80   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-09-09T13:52:41Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-c5t2t -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 253        Enabled            Disabled          28638      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::161   10.0.1.131   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 289        Disabled           Disabled          62050      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::11c   10.0.1.35    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 1484       Disabled           Disabled          4          reserved:health                                                                                   fd02::1a8   10.0.1.97    ready   
	 1995       Disabled           Disabled          26606      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::175   10.0.1.20    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 2559       Disabled           Disabled          34195      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1bd   10.0.1.26    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 2868       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wk9qf -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-c762736)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      29/29 healthy
	 Proxy Status:           OK, ip 10.0.0.174, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 773/4095 (18.88%), Flows/s: 6.91   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-09-09T13:52:43Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wk9qf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                           
	 285        Disabled           Disabled          34195      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::65   10.0.0.19   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                  
	                                                            k8s:zgroup=testDSClient                                                                                                          
	 631        Disabled           Disabled          4          reserved:health                                                                                   fd02::9b   10.0.0.97   ready   
	 666        Enabled            Disabled          28638      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::3b   10.0.0.2    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera                                  
	                                                            k8s:zgroup=testDS                                                                                                                
	 3998       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                               
	                                                            reserved:host                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:54:20 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
13:54:20 STEP: Deleting deployment demo_ds.yaml
13:54:20 STEP: Deleting namespace 202109091352k8sdatapathconfigencapsulationcheckiptablesmasquera
13:54:34 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|b2fccc59_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//960/artifact/b2fccc59_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//960/artifact/test_results_Cilium-PR-K8s-1.19-kernel-5.4_960_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/960/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Sep 9, 2021
@tklauser (Member) commented Sep 9, 2021

From a quick initial glance this looks similar to #13773

@pchaigno (Member)

I'm not convinced it's #13773, given that we fixed that flake and it didn't reappear for a long time. However, if you want to double-check, the full analysis of #13773's root cause is at #13773 (comment).

Also, according to our CI dashboard, this is the only occurrence of this flake in the past two weeks. Based on that, the most likely causes are: 1) it's a random, one-off google.com failure, or 2) the PR where it happened introduced it.
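For background on what this test exercises: with masquerading handled by iptables (the agent status in the reports shows Masquerading: IPTables), the random-fully variant enables fully randomized source-port allocation for SNAT, which avoids port-collision drops when many clients masquerade behind the same node IP. Conceptually, the resulting rule resembles the following minimal sketch; the actual chain names and match conditions Cilium installs differ, and the CIDR and interface here are placeholders:

	iptables -t nat -A POSTROUTING -s 10.0.1.0/24 ! -o cilium_host -j MASQUERADE --random-fully

The test then makes ten curl requests to http://google.com from a client pod on each node to verify that masqueraded egress connectivity still works.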

@maintainer-s-little-helper (Author)

PR #17368 hit this flake with 89.02% similarity:


Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-jlfvj" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Pod "testclient-jlfvj" can not connect to "http://google.com"
Expected command: kubectl exec -n 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-jlfvj -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.016704()', Connect: '0.000000',Transfer '0.000000', total '2.230505'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:276

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-j8tnr cilium-v5ksf]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
testds-wxxrd                            
grafana-d69c97b9b-767nb                 
prometheus-655fb888d7-csrdw             
coredns-867bf6789f-zj6mt                
test-k8s2-79ff876c9d-j5lgj              
testclient-74f4f                        
testclient-jlfvj                        
testds-bsvkq                            
Cilium agent 'cilium-j8tnr': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0
Cilium agent 'cilium-v5ksf': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0


Standard Error

22:53:41 STEP: Installing Cilium
22:53:43 STEP: Waiting for Cilium to become ready
22:54:39 STEP: Validating if Kubernetes DNS is deployed
22:54:39 STEP: Checking if deployment is ready
22:54:39 STEP: Checking if kube-dns service is plumbed correctly
22:54:39 STEP: Checking if DNS can resolve
22:54:39 STEP: Checking if pods have identity
22:54:40 STEP: Kubernetes DNS is not ready: %!s(<nil>)
22:54:40 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
22:54:47 STEP: Waiting for Kubernetes DNS to become operational
22:54:47 STEP: Checking if deployment is ready
22:54:47 STEP: Checking if kube-dns service is plumbed correctly
22:54:47 STEP: Checking if DNS can resolve
22:54:47 STEP: Checking if pods have identity
22:54:47 STEP: Validating Cilium Installation
22:54:47 STEP: Performing Cilium controllers preflight check
22:54:47 STEP: Performing Cilium health check
22:54:47 STEP: Performing Cilium status preflight check
22:54:49 STEP: Performing Cilium service preflight check
22:54:49 STEP: Performing K8s service preflight check
22:54:50 STEP: Waiting for cilium-operator to be ready
22:54:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
22:54:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
22:54:50 STEP: Making sure all endpoints are in ready state
22:54:51 STEP: Creating namespace 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera
22:54:51 STEP: Deploying demo_ds.yaml in namespace 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera
22:54:52 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
22:54:56 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
22:54:56 STEP: WaitforNPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
22:54:59 STEP: WaitforNPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
22:54:59 STEP: Checking pod connectivity between nodes
22:54:59 STEP: WaitforPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
22:54:59 STEP: WaitforPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
22:54:59 STEP: WaitforPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
22:54:59 STEP: WaitforPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
22:55:04 STEP: Test iptables masquerading
22:55:04 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
22:55:04 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
22:55:04 STEP: WaitforNPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
22:55:04 STEP: WaitforNPods(namespace="202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
22:55:04 STEP: Making ten curl requests from "testclient-74f4f" to "http://google.com"
22:55:06 STEP: Making ten curl requests from "testclient-jlfvj" to "http://google.com"
FAIL: Pod "testclient-jlfvj" can not connect to "http://google.com"
Expected command: kubectl exec -n 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-jlfvj -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.016704()', Connect: '0.000000',Transfer '0.000000', total '2.230505'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2021-10-02T22:55:10Z====
22:55:10 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
22:55:10 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-j5lgj         2/2     Running   0          21s   10.0.0.49       k8s2   <none>           <none>
	 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-74f4f                   1/1     Running   0          21s   10.0.1.38       k8s1   <none>           <none>
	 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-jlfvj                   1/1     Running   0          21s   10.0.0.1        k8s2   <none>           <none>
	 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-bsvkq                       2/2     Running   0          21s   10.0.0.241      k8s2   <none>           <none>
	 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-wxxrd                       2/2     Running   0          21s   10.0.1.236      k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-767nb            1/1     Running   0          14m   10.0.0.128      k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-csrdw        1/1     Running   0          14m   10.0.0.232      k8s2   <none>           <none>
	 kube-system                                                       cilium-j8tnr                       1/1     Running   0          90s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-6cfcfdc98d-bwvbh   1/1     Running   0          90s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6cfcfdc98d-fd52t   1/1     Running   0          90s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-v5ksf                       1/1     Running   0          90s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-867bf6789f-zj6mt           1/1     Running   0          33s   10.0.1.113      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          18m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          18m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          18m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-2ccrk                   1/1     Running   0          15m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-2xjvz                   1/1     Running   0          18m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          18m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-92xsg                 1/1     Running   0          14m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-bh7r6                 1/1     Running   0          14m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-672mc               1/1     Running   0          15m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-xz4t2               1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-j8tnr cilium-v5ksf]
cmd: kubectl exec -n kube-system cilium-j8tnr -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.20 (v1.20.9) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-ff7f8d9)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      33/33 healthy
	 Proxy Status:           OK, ip 10.0.1.142, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 868/4095 (21.20%), Flows/s: 11.49   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-02T22:54:49Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-j8tnr -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 984        Disabled           Disabled          6040       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::125   10.0.1.113   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 1574       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                          
	                                                            k8s:node-role.kubernetes.io/master                                                                                                 
	                                                            reserved:host                                                                                                                      
	 2676       Disabled           Disabled          20564      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::119   10.0.1.38    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 3567       Disabled           Disabled          4          reserved:health                                                                                   fd02::192   10.0.1.21    ready   
	 3689       Enabled            Disabled          10604      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::126   10.0.1.236   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-v5ksf -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.20 (v1.20.9) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-ff7f8d9)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      43/43 healthy
	 Proxy Status:           OK, ip 10.0.0.58, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 886/4095 (21.64%), Flows/s: 11.37   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-02T22:54:50Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-v5ksf -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 149        Enabled            Disabled          10604      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::4b   10.0.0.241   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 547        Disabled           Disabled          4          reserved:health                                                                                   fd02::53   10.0.0.14    ready   
	 637        Disabled           Disabled          11130      k8s:app=prometheus                                                                                fd02::21   10.0.0.232   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 717        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                ready   
	                                                            reserved:host                                                                                                                     
	 814        Disabled           Disabled          20208      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::2d   10.0.0.49    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                              
	 1017       Disabled           Disabled          20564      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::d6   10.0.0.1     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 4072       Disabled           Disabled          46538      k8s:app=grafana                                                                                   fd02::74   10.0.0.128   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
22:55:28 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
22:55:28 STEP: Deleting deployment demo_ds.yaml
22:55:29 STEP: Deleting namespace 202110022254k8sdatapathconfigencapsulationcheckiptablesmasquera
22:55:43 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|1726431d_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//1430/artifact/1726431d_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19//1430/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.19_1430_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/1430/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper (Author)

PR #17596 hit this flake with 90.54% similarity:


Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-6cvsp" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Pod "testclient-6cvsp" can not connect to "http://google.com"
Expected command: kubectl exec -n 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-6cvsp -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.002830()', Connect: '0.000000',Transfer '0.000000', total '2.205840'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:273

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-7krjx cilium-mx4qh]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
test-k8s2-79ff876c9d-2t8vj              
testclient-6cvsp                        
testclient-7xnhp                        
testds-6sv22                            
testds-jnhc4                            
grafana-d69c97b9b-gfltm                 
prometheus-655fb888d7-cbrg6             
coredns-867bf6789f-75zg8                
Cilium agent 'cilium-7krjx': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0
Cilium agent 'cilium-mx4qh': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0


Standard Error

14:53:20 STEP: Installing Cilium
14:53:22 STEP: Waiting for Cilium to become ready
14:54:00 STEP: Validating if Kubernetes DNS is deployed
14:54:00 STEP: Checking if deployment is ready
14:54:00 STEP: Checking if kube-dns service is plumbed correctly
14:54:00 STEP: Checking if DNS can resolve
14:54:00 STEP: Checking if pods have identity
14:54:01 STEP: Kubernetes DNS is not ready: %!s(<nil>)
14:54:01 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
14:54:09 STEP: Waiting for Kubernetes DNS to become operational
14:54:09 STEP: Checking if deployment is ready
14:54:09 STEP: Checking if kube-dns service is plumbed correctly
14:54:09 STEP: Checking if pods have identity
14:54:09 STEP: Checking if DNS can resolve
14:54:09 STEP: Validating Cilium Installation
14:54:09 STEP: Performing Cilium controllers preflight check
14:54:09 STEP: Performing Cilium health check
14:54:09 STEP: Performing Cilium status preflight check
14:54:11 STEP: Performing Cilium service preflight check
14:54:11 STEP: Performing K8s service preflight check
14:54:12 STEP: Waiting for cilium-operator to be ready
14:54:12 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:54:12 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:54:12 STEP: Making sure all endpoints are in ready state
14:54:13 STEP: Creating namespace 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera
14:54:13 STEP: Deploying demo_ds.yaml in namespace 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera
14:54:14 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:54:17 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:54:17 STEP: WaitforNPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
14:54:26 STEP: WaitforNPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
14:54:26 STEP: Checking pod connectivity between nodes
14:54:26 STEP: WaitforPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
14:54:26 STEP: WaitforPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
14:54:26 STEP: WaitforPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
14:54:26 STEP: WaitforPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
14:54:30 STEP: Test iptables masquerading
14:54:30 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:54:31 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:54:31 STEP: WaitforNPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
14:54:31 STEP: WaitforNPods(namespace="202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
14:54:31 STEP: Making ten curl requests from "testclient-6cvsp" to "http://google.com"
FAIL: Pod "testclient-6cvsp" can not connect to "http://google.com"
Expected command: kubectl exec -n 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-6cvsp -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.002830()', Connect: '0.000000',Transfer '0.000000', total '2.205840'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2021-10-18T14:54:34Z====
14:54:34 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:54:35 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-2t8vj         2/2     Running   0          22s   10.0.1.222      k8s2   <none>           <none>
	 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-6cvsp                   1/1     Running   0          22s   10.0.1.37       k8s2   <none>           <none>
	 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-7xnhp                   1/1     Running   0          22s   10.0.0.210      k8s1   <none>           <none>
	 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-6sv22                       2/2     Running   0          22s   10.0.1.170      k8s2   <none>           <none>
	 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-jnhc4                       2/2     Running   0          22s   10.0.0.31       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-gfltm            1/1     Running   0          12m   10.0.0.149      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-cbrg6        1/1     Running   0          12m   10.0.1.205      k8s2   <none>           <none>
	 kube-system                                                       cilium-7krjx                       1/1     Running   0          74s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-mx4qh                       1/1     Running   0          74s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-57d858dbbc-64g97   1/1     Running   0          74s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-57d858dbbc-qq9bv   1/1     Running   0          74s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-867bf6789f-75zg8           1/1     Running   0          35s   10.0.0.86       k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-4mtnx                   1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-n9zzc                   1/1     Running   0          13m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-gwm4d                 1/1     Running   0          12m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-ksdl4                 1/1     Running   0          12m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-4mwcs               1/1     Running   0          13m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-5lhx7               1/1     Running   0          13m   192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-7krjx cilium-mx4qh]
cmd: kubectl exec -n kube-system cilium-7krjx -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-5587b15)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      38/38 healthy
	 Proxy Status:           OK, ip 10.0.1.150, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 701/4095 (17.12%), Flows/s: 11.05   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-18T14:54:11Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-7krjx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 248        Disabled           Disabled          512        k8s:app=prometheus                                                                                fd02::147   10.0.1.205   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                  
	 370        Enabled            Disabled          4148       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1dc   10.0.1.170   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 465        Disabled           Disabled          30676      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::188   10.0.1.222   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 1114       Disabled           Disabled          28509      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1ae   10.0.1.37    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 1330       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 3491       Disabled           Disabled          4          reserved:health                                                                                   fd02::134   10.0.1.245   ready   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mx4qh -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.13) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-5587b15)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      38/38 healthy
	 Proxy Status:           OK, ip 10.0.0.84, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 835/4095 (20.39%), Flows/s: 10.80   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-18T14:54:12Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mx4qh -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 196        Disabled           Disabled          12915      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::a9   10.0.0.86    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                              
	 290        Disabled           Disabled          4          reserved:health                                                                                   fd02::2a   10.0.0.188   ready   
	 361        Disabled           Disabled          31671      k8s:app=grafana                                                                                   fd02::cd   10.0.0.149   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 541        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                                
	                                                            reserved:host                                                                                                                     
	 2572       Disabled           Disabled          28509      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::2c   10.0.0.210   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 3050       Enabled            Disabled          4148       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::17   10.0.0.31    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
14:54:51 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:54:51 STEP: Deleting deployment demo_ds.yaml
14:54:51 STEP: Deleting namespace 202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera
14:55:05 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|c58b47d9_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//1246/artifact/c58b47d9_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//1246/artifact/test_results_Cilium-PR-K8s-1.19-kernel-5.4_1246_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/1246/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
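
Triage note: the failing check can be replayed by hand on a live cluster. The sketch below is essentially the curl invocation the harness runs; the namespace and pod name are examples taken from this report and are generated fresh on every run, so substitute your own:

# Replay the masquerading check manually.
NS=202110181454k8sdatapathconfigencapsulationcheckiptablesmasquera
kubectl -n "$NS" exec testclient-6cvsp -- \
    curl -s --fail --connect-timeout 5 --max-time 20 -o /dev/null \
    -w 'remote_ip=%{remote_ip} http_code=%{http_code}\n' http://google.com
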

@maintainer-s-little-helper

PR #17704 hit this flake with 85.95% similarity:

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output


Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Test Panicked
/home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:95

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-2tjlt cilium-4xqfk]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                                      Ingress   Egress
testclient-44m5s                                   
testds-9l2zx                                       
event-exporter-gke-67986489c8-kk6xj                
kube-dns-b4f5c58c7-plqvg                           
kube-dns-b4f5c58c7-w8jnk                           
l7-default-backend-66579f5d7-2w8wf                 
metrics-server-v0.3.6-6c47ffd7d7-z87tj             
test-k8s2-79ff876c9d-ghpr8                         
testclient-t2wxc                                   
testds-4ps5b                                       
kube-dns-autoscaler-58cbd4f75c-vlnpd               
Cilium agent 'cilium-2tjlt': Status: Ok  Health: Ok Nodes "" Controllers: Total 0 Failed 0
Cilium agent 'cilium-4xqfk': Status: Ok  Health: Ok Nodes "" Controllers: Total 0 Failed 0


Standard Error

14:27:32 STEP: Installing Cilium
14:27:42 STEP: Waiting for Cilium to become ready
14:28:18 STEP: Validating if Kubernetes DNS is deployed
14:28:18 STEP: Checking if deployment is ready
14:28:18 STEP: Checking if kube-dns service is plumbed correctly
14:28:18 STEP: Checking if pods have identity
14:28:18 STEP: Checking if DNS can resolve
14:28:21 STEP: Kubernetes DNS is up and operational
14:28:21 STEP: Validating Cilium Installation
14:28:21 STEP: Performing Cilium health check
14:28:21 STEP: Performing Cilium status preflight check
14:28:21 STEP: Performing Cilium controllers preflight check
14:28:25 STEP: Performing Cilium service preflight check
14:28:25 STEP: Performing K8s service preflight check
14:28:25 STEP: Waiting for cilium-operator to be ready
14:28:25 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:28:26 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:28:26 STEP: WaitforPods(namespace="kube-system", filter="")
14:28:26 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
14:28:26 STEP: Making sure all endpoints are in ready state
14:28:28 STEP: Creating namespace 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera
14:28:29 STEP: Deploying demo_ds.yaml in namespace 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera
14:28:33 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:28:40 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:28:40 STEP: WaitforNPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
14:28:42 STEP: WaitforNPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
14:28:42 STEP: Checking pod connectivity between nodes
14:28:42 STEP: WaitforPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
14:28:42 STEP: WaitforPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
14:28:43 STEP: WaitforPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
14:28:43 STEP: WaitforPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
14:28:49 STEP: Test iptables masquerading
14:28:49 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:28:51 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:28:51 STEP: WaitforNPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
14:28:51 STEP: WaitforNPods(namespace="202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
14:28:52 STEP: Making ten curl requests from "testclient-44m5s" to "http://google.com"
=== Test Finished at 2021-10-28T14:32:52Z====
14:32:52 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:32:55 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                                                   READY   STATUS    RESTARTS   AGE     IP            NODE                                        NOMINATED NODE   READINESS GATES
	 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-ghpr8                             2/2     Running   0          4m27s   10.84.1.176   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-44m5s                                       1/1     Running   0          4m27s   10.84.2.1     gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-t2wxc                                       1/1     Running   0          4m27s   10.84.1.54    gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-4ps5b                                           2/2     Running   0          4m27s   10.84.1.242   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-9l2zx                                           2/2     Running   0          4m27s   10.84.2.36    gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-h4jzn                                1/1     Running   0          69m     10.84.1.7     gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-m9mx7                            1/1     Running   0          69m     10.84.2.4     gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       cilium-2tjlt                                           1/1     Running   0          5m17s   10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       cilium-4xqfk                                           1/1     Running   0          5m17s   10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       cilium-node-init-569qn                                 1/1     Running   0          5m16s   10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       cilium-node-init-5dp4k                                 1/1     Running   0          5m16s   10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       cilium-operator-7f7f478986-5tcbt                       1/1     Running   0          5m16s   10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       cilium-operator-7f7f478986-nzqxm                       1/1     Running   0          5m16s   10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       event-exporter-gke-67986489c8-kk6xj                    2/2     Running   0          59m     10.84.1.221   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       fluentbit-gke-5qvk9                                    2/2     Running   0          70m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       fluentbit-gke-k57dl                                    2/2     Running   0          70m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       gke-metrics-agent-fvzp8                                1/1     Running   0          70m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       gke-metrics-agent-sxdqz                                1/1     Running   0          70m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       kube-dns-autoscaler-58cbd4f75c-vlnpd                   1/1     Running   0          59m     10.84.1.46    gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       kube-dns-b4f5c58c7-plqvg                               4/4     Running   0          11m     10.84.2.93    gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       kube-dns-b4f5c58c7-w8jnk                               4/4     Running   0          11m     10.84.1.201   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   1/1     Running   0          70m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   1/1     Running   0          70m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       l7-default-backend-66579f5d7-2w8wf                     1/1     Running   0          59m     10.84.2.235   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       log-gatherer-2bvtr                                     1/1     Running   0          69m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 kube-system                                                       log-gatherer-6ftxc                                     1/1     Running   0          69m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       metrics-server-v0.3.6-6c47ffd7d7-z87tj                 2/2     Running   0          59m     10.84.2.144   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       pdcsi-node-7wpx5                                       2/2     Running   0          70m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-63f48da5-0pt0   <none>           <none>
	 kube-system                                                       pdcsi-node-8n9zz                                       2/2     Running   0          70m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-63f48da5-htsj   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2tjlt cilium-4xqfk]
cmd: kubectl exec -n kube-system cilium-2tjlt -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19+ (v1.19.13-gke.1900) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.44 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-c83d983)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 8/254 allocated from 10.84.1.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      48/48 healthy
	 Proxy Status:           OK, ip 10.84.1.245, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 2525/4095 (61.66%), Flows/s: 7.68   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-28T14:31:13Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-2tjlt -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                         
	 1051       Enabled            Disabled          11202      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.242   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDS                                                                                                              
	 1217       Disabled           Disabled          47047      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.221   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=event-exporter-sa                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=event-exporter                                                                                                     
	                                                            k8s:version=v0.3.4                                                                                                             
	 1832       Disabled           Disabled          8713       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.201   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns                                                                                                           
	 3119       Disabled           Disabled          54875      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.54    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDSClient                                                                                                        
	 3483       Disabled           Disabled          4201       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.46    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns-autoscaler                                                                                                
	 3617       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                             ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                 
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                          
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-4                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                   
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                         
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                             
	                                                            k8s:topology.gke.io/zone=us-west1-b                                                                                            
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                     
	                                                            k8s:topology.kubernetes.io/zone=us-west1-b                                                                                     
	                                                            reserved:host                                                                                                                  
	 3797       Disabled           Disabled          4          reserved:health                                                                                          10.84.1.122   ready   
	 3991       Disabled           Disabled          40008      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.176   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=test-k8s2                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-4xqfk -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19+ (v1.19.13-gke.1900) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.43 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-c83d983)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 7/254 allocated from 10.84.2.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      44/44 healthy
	 Proxy Status:           OK, ip 10.84.2.104, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 2520/4095 (61.54%), Flows/s: 7.84   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-28T14:31:10Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-4xqfk -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                         
	 235        Disabled           Disabled          4          reserved:health                                                                                          10.84.2.105   ready   
	 359        Disabled           Disabled          59462      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.144   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=metrics-server                                                                         
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=metrics-server                                                                                                     
	                                                            k8s:version=v0.3.6                                                                                                             
	 398        Enabled            Disabled          11202      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.36    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDS                                                                                                              
	 1858       Disabled           Disabled          8713       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.93    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns                                                                                                           
	 2675       Disabled           Disabled          40730      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.235   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=glbc                                                                                                               
	                                                            k8s:name=glbc                                                                                                                  
	 3481       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                             ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                 
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                          
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-4                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                   
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                         
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                             
	                                                            k8s:topology.gke.io/zone=us-west1-b                                                                                            
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                     
	                                                            k8s:topology.kubernetes.io/zone=us-west1-b                                                                                     
	                                                            reserved:host                                                                                                                  
	 3673       Disabled           Disabled          54875      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.1     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDSClient                                                                                                        
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
14:33:42 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:33:42 STEP: Deleting deployment demo_ds.yaml
14:33:52 STEP: Deleting namespace 202110281428k8sdatapathconfigencapsulationcheckiptablesmasquera
14:34:12 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|33bbbc48_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6778/artifact/src/github.com/cilium/cilium/33bbbc48_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6778/artifact/src/github.com/cilium/cilium/90dcc1d4_K8sDatapathConfig_Host_firewall_Check_connectivity_with_IPv6_disabled.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6778/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_6778_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6778/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@pchaigno

No, MLH, that last one is not the same flake. It's a panic, and it seems to match #17661 (comment).

@maintainer-s-little-helper

PR #17780 hit this flake with 88.79% similarity:

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-2zjlb" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Pod "testclient-2zjlb" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-2zjlb -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: -1 
Err: signal: killed
Stdout:
 	 
Stderr:
 	 

/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:273
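
Triage note: unlike the exit-code-7 failures above, "Exitcode: -1 / Err: signal: killed" means the kubectl exec process was killed from outside, i.e. the harness timeout expired while curl was still retrying. The arithmetic below is a hedged sketch; the roughly 4-minute budget is inferred from the 09:35:44 start and 2021-11-16T09:39:44Z finish timestamps in the Standard Error section:

# Worst-case duration of the curl invocation used by this test:
#   per-attempt ceiling:  --max-time 20           -> 20 s
#   attempts:             1 initial + --retry 10  -> up to 11
#   attempts alone:       11 * 20 s = 220 s, plus curl's growing retry delays
# This exceeds a ~240 s budget, so a killed kubectl is the signature of
# "every attempt timed out", not of a curl or kubectl crash.
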

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 8
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Deleting no longer present service
Cilium pods: [cilium-bmfmc cilium-nx44k]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                                              Ingress   Egress
testclient-6xg6w                                           
event-exporter-gke-67986489c8-2jc9m                        
konnectivity-agent-64cd6658b7-b8c8p                        
l7-default-backend-56cb9644f6-tw2hp                        
test-k8s2-79ff876c9d-rzlvb                                 
testds-mfspf                                               
konnectivity-agent-64cd6658b7-9blkb                        
konnectivity-agent-autoscaler-6cb774c9cc-nx8bj             
kube-dns-b4f5c58c7-4jv8q                                   
testclient-2zjlb                                           
testds-wbzbr                                               
metrics-server-v0.3.6-9c5bbf784-mgmq2                      
kube-dns-autoscaler-844c9d9448-sk2n6                       
kube-dns-b4f5c58c7-gm7w7                                   
Cilium agent 'cilium-bmfmc': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 59 Failed 0
Cilium agent 'cilium-nx44k': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 48 Failed 0


Standard Error

09:34:31 STEP: Installing Cilium
09:34:40 STEP: Waiting for Cilium to become ready
09:35:09 STEP: Validating if Kubernetes DNS is deployed
09:35:09 STEP: Checking if deployment is ready
09:35:09 STEP: Checking if kube-dns service is plumbed correctly
09:35:09 STEP: Checking if pods have identity
09:35:09 STEP: Checking if DNS can resolve
09:35:12 STEP: Kubernetes DNS is up and operational
09:35:12 STEP: Validating Cilium Installation
09:35:12 STEP: Performing Cilium controllers preflight check
09:35:12 STEP: Performing Cilium health check
09:35:12 STEP: Performing Cilium status preflight check
09:35:16 STEP: Performing Cilium service preflight check
09:35:16 STEP: Performing K8s service preflight check
09:35:16 STEP: Waiting for cilium-operator to be ready
09:35:16 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:35:16 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
09:35:16 STEP: WaitforPods(namespace="kube-system", filter="")
09:35:17 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
09:35:17 STEP: Making sure all endpoints are in ready state
09:35:19 STEP: Creating namespace 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera
09:35:19 STEP: Deploying demo_ds.yaml in namespace 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera
09:35:23 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
09:35:28 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
09:35:28 STEP: WaitforNPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
09:35:34 STEP: WaitforNPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
09:35:34 STEP: Checking pod connectivity between nodes
09:35:34 STEP: WaitforPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
09:35:34 STEP: WaitforPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
09:35:35 STEP: WaitforPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
09:35:35 STEP: WaitforPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
09:35:41 STEP: Test iptables masquerading
09:35:41 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
09:35:43 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
09:35:43 STEP: WaitforNPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
09:35:44 STEP: WaitforNPods(namespace="202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
09:35:44 STEP: Making ten curl requests from "testclient-2zjlb" to "http://google.com"
FAIL: Pod "testclient-2zjlb" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-2zjlb -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: -1 
Err: signal: killed
Stdout:
 	 
Stderr:
 	 

=== Test Finished at 2021-11-16T09:39:44Z====
09:39:44 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
09:39:45 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                                                   READY   STATUS    RESTARTS   AGE     IP            NODE                                        NOMINATED NODE   READINESS GATES
	 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-rzlvb                             2/2     Running   0          4m29s   10.84.1.54    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-2zjlb                                       1/1     Running   0          4m29s   10.84.1.1     gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-6xg6w                                       1/1     Running   0          4m29s   10.84.2.116   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-mfspf                                           2/2     Running   0          4m29s   10.84.1.79    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-wbzbr                                           2/2     Running   0          4m29s   10.84.2.46    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-qzpmw                                1/1     Running   0          27m     10.84.1.10    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-d8fbq                            1/1     Running   0          27m     10.84.1.11    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       cilium-bmfmc                                           1/1     Running   0          5m10s   10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       cilium-node-init-l2vph                                 1/1     Running   0          5m10s   10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       cilium-node-init-qwjxx                                 1/1     Running   0          5m10s   10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       cilium-nx44k                                           1/1     Running   0          5m10s   10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       cilium-operator-74fb958744-452vd                       1/1     Running   0          5m10s   10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       cilium-operator-74fb958744-tzq85                       1/1     Running   0          5m10s   10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       event-exporter-gke-67986489c8-2jc9m                    2/2     Running   0          17m     10.84.1.76    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       fluentbit-gke-7sg6p                                    2/2     Running   0          29m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       fluentbit-gke-lc2kc                                    2/2     Running   0          29m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       gke-metrics-agent-6sqv7                                1/1     Running   0          29m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       gke-metrics-agent-lkc7g                                1/1     Running   0          29m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       konnectivity-agent-64cd6658b7-9blkb                    1/1     Running   0          17m     10.84.2.64    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       konnectivity-agent-64cd6658b7-b8c8p                    1/1     Running   0          17m     10.84.1.33    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       konnectivity-agent-autoscaler-6cb774c9cc-nx8bj         1/1     Running   0          17m     10.84.2.93    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       kube-dns-autoscaler-844c9d9448-sk2n6                   1/1     Running   0          17m     10.84.2.49    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       kube-dns-b4f5c58c7-4jv8q                               4/4     Running   0          11m     10.84.2.51    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       kube-dns-b4f5c58c7-gm7w7                               4/4     Running   0          11m     10.84.1.21    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   1/1     Running   0          29m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   1/1     Running   0          29m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       l7-default-backend-56cb9644f6-tw2hp                    1/1     Running   0          17m     10.84.2.87    gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       log-gatherer-qtzrd                                     1/1     Running   0          27m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       log-gatherer-xcgmf                                     1/1     Running   0          27m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       metrics-server-v0.3.6-9c5bbf784-mgmq2                  2/2     Running   0          17m     10.84.2.206   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 kube-system                                                       pdcsi-node-9p5zm                                       2/2     Running   0          29m     10.128.0.44   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-w7q0   <none>           <none>
	 kube-system                                                       pdcsi-node-zvxbp                                       2/2     Running   0          29m     10.128.0.43   gke-cilium-ci-4-cilium-ci-4-8e7d9e23-r5wf   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-bmfmc cilium-nx44k]
cmd: kubectl exec -n kube-system cilium-bmfmc -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.20+ (v1.20.10-gke.1600) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.43 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-32ed74f)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 10/254 allocated from 10.84.2.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      59/59 healthy
	 Proxy Status:           OK, ip 10.84.2.107, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 2784/4095 (67.99%), Flows/s: 8.85   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-16T09:39:09Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-bmfmc -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                         
	 299        Disabled           Disabled          44421      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.206   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=metrics-server                                                                         
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=metrics-server                                                                                                     
	                                                            k8s:version=v0.3.6                                                                                                             
	 322        Disabled           Disabled          4          reserved:health                                                                                          10.84.2.103   ready   
	 344        Disabled           Disabled          11967      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.51    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns                                                                                                           
	 483        Disabled           Disabled          25968      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.93    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent-cpha                                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=konnectivity-agent-autoscaler                                                                                      
	 707        Disabled           Disabled          8478       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.49    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns-autoscaler                                                                                                
	 1060       Disabled           Disabled          16271      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.87    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=glbc                                                                                                               
	                                                            k8s:name=glbc                                                                                                                  
	 1221       Disabled           Disabled          40377      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.64    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=konnectivity-agent                                                                                                 
	 1395       Enabled            Disabled          22352      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.46    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDS                                                                                                              
	 2880       Disabled           Disabled          22119      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.2.116   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDSClient                                                                                                        
	 3265       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                             ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                 
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                          
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-4                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                   
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                         
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                             
	                                                            k8s:topology.gke.io/zone=us-west1-b                                                                                            
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                     
	                                                            k8s:topology.kubernetes.io/zone=us-west1-b                                                                                     
	                                                            reserved:host                                                                                                                  
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-nx44k -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.20+ (v1.20.10-gke.1600) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.44 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-32ed74f)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 8/254 allocated from 10.84.1.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      48/48 healthy
	 Proxy Status:           OK, ip 10.84.1.57, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 2836/4095 (69.26%), Flows/s: 9.02   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-16T09:39:10Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-nx44k -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4          STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                         
	 4          Disabled           Disabled          22119      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.1     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDSClient                                                                                                        
	 17         Enabled            Disabled          22352      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.79    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=testDS                                                                                                              
	 70         Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                             ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                 
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                          
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-4                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                   
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                         
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                             
	                                                            k8s:topology.gke.io/zone=us-west1-b                                                                                            
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                     
	                                                            k8s:topology.kubernetes.io/zone=us-west1-b                                                                                     
	                                                            reserved:host                                                                                                                  
	 142        Disabled           Disabled          40377      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.33    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=konnectivity-agent                                                                                                 
	 284        Disabled           Disabled          51553      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.76    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=event-exporter-sa                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=event-exporter                                                                                                     
	                                                            k8s:version=v0.3.4                                                                                                             
	 1096       Disabled           Disabled          8267       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.54    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera                                
	                                                            k8s:zgroup=test-k8s2                                                                                                           
	 3077       Disabled           Disabled          11967      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.84.1.21    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                    
	                                                            k8s:k8s-app=kube-dns                                                                                                           
	 3237       Disabled           Disabled          4          reserved:health                                                                                          10.84.1.113   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
09:40:26 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
09:40:26 STEP: Deleting deployment demo_ds.yaml
09:40:36 STEP: Deleting namespace 202111160935k8sdatapathconfigencapsulationcheckiptablesmasquera
09:40:56 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|cf272726_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6933/artifact/src/github.com/cilium/cilium/cf272726_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6933/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_6933_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6933/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
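
For anyone triaging this flake by hand, the failing step is just the plain egress curl from a test client pod. Below is a minimal reproduction sketch; everything in angle brackets is a per-run placeholder taken from the dumps above, not a fixed name:

    # Re-run the probe the test performs (curl exit code 7 = TCP connect failed
    # after DNS resolved, which points at egress masquerading, not name resolution):
    kubectl exec -n <test-namespace> <testclient-pod> -- \
      curl -s --fail --connect-timeout 5 --max-time 20 -o /dev/null http://google.com

    # Inspect the NAT rules Cilium installed on the node. With random-fully
    # enabled, the MASQUERADE rules are expected to carry --random-fully:
    kubectl exec -n kube-system <cilium-agent-pod> -c cilium-agent -- \
      iptables -t nat -S | grep -E 'MASQUERADE.*random-fully'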

@maintainer-s-little-helper

PR #17929 hit this flake with 85.45% similarity:

Click to show.

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-5d6mr" can not connect to "http://google.com"

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Pod "testclient-5d6mr" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-5d6mr -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.013932()', Connect: '0.000000',Transfer '0.000000', total '2.224128'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:273

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-kz57h cilium-s7fc4]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
test-k8s2-7f96d84c65-lq967              
testclient-5d6mr                        
testclient-r767h                        
testds-fdwr5                            
testds-qmzxk                            
grafana-5747bcc8f9-xwqjp                
prometheus-655fb888d7-4wfdg             
coredns-755cd654d4-62j7b                
Cilium agent 'cilium-kz57h': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0
Cilium agent 'cilium-s7fc4': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0


Standard Error

Click to show.
09:55:49 STEP: Installing Cilium
09:55:51 STEP: Waiting for Cilium to become ready
09:56:20 STEP: Validating if Kubernetes DNS is deployed
09:56:20 STEP: Checking if deployment is ready
09:56:21 STEP: Checking if kube-dns service is plumbed correctly
09:56:21 STEP: Checking if pods have identity
09:56:21 STEP: Checking if DNS can resolve
09:56:22 STEP: Kubernetes DNS is not ready: %!s(<nil>)
09:56:22 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
09:56:29 STEP: Waiting for Kubernetes DNS to become operational
09:56:29 STEP: Checking if deployment is ready
09:56:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:56:30 STEP: Checking if deployment is ready
09:56:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:56:31 STEP: Checking if deployment is ready
09:56:31 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
09:56:32 STEP: Checking if deployment is ready
09:56:32 STEP: Checking if kube-dns service is plumbed correctly
09:56:32 STEP: Checking if DNS can resolve
09:56:32 STEP: Checking if pods have identity
09:56:32 STEP: Validating Cilium Installation
09:56:32 STEP: Performing Cilium status preflight check
09:56:32 STEP: Performing Cilium health check
09:56:32 STEP: Performing Cilium controllers preflight check
09:56:34 STEP: Performing Cilium service preflight check
09:56:34 STEP: Performing K8s service preflight check
09:56:34 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-kz57h': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

09:56:34 STEP: Performing Cilium status preflight check
09:56:34 STEP: Performing Cilium controllers preflight check
09:56:34 STEP: Performing Cilium health check
09:56:35 STEP: Performing Cilium service preflight check
09:56:35 STEP: Performing K8s service preflight check
09:56:35 STEP: Performing Cilium status preflight check
09:56:35 STEP: Performing Cilium controllers preflight check
09:56:35 STEP: Performing Cilium health check
09:56:36 STEP: Performing Cilium service preflight check
09:56:36 STEP: Performing K8s service preflight check
09:56:36 STEP: Performing Cilium status preflight check
09:56:36 STEP: Performing Cilium health check
09:56:36 STEP: Performing Cilium controllers preflight check
09:56:37 STEP: Performing Cilium service preflight check
09:56:37 STEP: Performing K8s service preflight check
09:56:37 STEP: Performing Cilium controllers preflight check
09:56:37 STEP: Performing Cilium status preflight check
09:56:37 STEP: Performing Cilium health check
09:56:38 STEP: Performing Cilium service preflight check
09:56:38 STEP: Performing K8s service preflight check
09:56:38 STEP: Performing Cilium status preflight check
09:56:38 STEP: Performing Cilium controllers preflight check
09:56:38 STEP: Performing Cilium health check
09:56:40 STEP: Performing Cilium service preflight check
09:56:40 STEP: Performing K8s service preflight check
09:56:40 STEP: Performing Cilium controllers preflight check
09:56:40 STEP: Performing Cilium status preflight check
09:56:40 STEP: Performing Cilium health check
09:56:41 STEP: Performing Cilium service preflight check
09:56:41 STEP: Performing K8s service preflight check
09:56:42 STEP: Waiting for cilium-operator to be ready
09:56:42 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:56:42 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
09:56:42 STEP: Making sure all endpoints are in ready state
09:56:43 STEP: Creating namespace 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera
09:56:44 STEP: Deploying demo_ds.yaml in namespace 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera
09:56:44 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
09:56:47 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
09:56:47 STEP: WaitforNPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
09:56:49 STEP: WaitforNPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
09:56:49 STEP: Checking pod connectivity between nodes
09:56:49 STEP: WaitforPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
09:56:49 STEP: WaitforPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
09:56:50 STEP: WaitforPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
09:56:50 STEP: WaitforPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
09:56:54 STEP: Test iptables masquerading
09:56:54 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.19/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
09:56:55 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
09:56:55 STEP: WaitforNPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
09:56:55 STEP: WaitforNPods(namespace="202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
09:56:55 STEP: Making ten curl requests from "testclient-5d6mr" to "http://google.com"
FAIL: Pod "testclient-5d6mr" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-5d6mr -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.013932()', Connect: '0.000000',Transfer '0.000000', total '2.224128'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2021-11-24T09:56:59Z====
09:56:59 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
09:56:59 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-7f96d84c65-lq967         2/2     Running   0          16s   10.0.1.191      k8s2   <none>           <none>
	 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-5d6mr                   1/1     Running   0          16s   10.0.1.35       k8s2   <none>           <none>
	 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-r767h                   1/1     Running   0          16s   10.0.0.170      k8s1   <none>           <none>
	 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-fdwr5                       2/2     Running   0          16s   10.0.0.77       k8s1   <none>           <none>
	 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-qmzxk                       2/2     Running   0          16s   10.0.1.190      k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-5747bcc8f9-xwqjp           1/1     Running   0          27m   10.0.0.4        k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-4wfdg        1/1     Running   0          27m   10.0.0.202      k8s1   <none>           <none>
	 kube-system                                                       cilium-kz57h                       1/1     Running   0          69s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7ccd7ffbc8-sfktg   1/1     Running   0          69s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-7ccd7ffbc8-wnh97   1/1     Running   0          69s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-s7fc4                       1/1     Running   0          69s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-755cd654d4-62j7b           1/1     Running   0          38s   10.0.0.141      k8s1   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          29m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          29m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          29m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-mpfr4                   1/1     Running   0          28m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-proxy-wprq9                   1/1     Running   0          29m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          29m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-9lt44                 1/1     Running   0          27m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-jtd22                 1/1     Running   0          27m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-9hvkz               1/1     Running   0          28m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-nnv29               1/1     Running   0          28m   192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-kz57h cilium-s7fc4]
cmd: kubectl exec -n kube-system cilium-kz57h -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.3) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-fe104be)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      33/33 healthy
	 Proxy Status:           OK, ip 10.0.1.154, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 642/4095 (15.68%), Flows/s: 7.73   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-24T09:56:41Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-kz57h -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 270        Enabled            Disabled          21972      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::1e5   10.0.1.190   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 1209       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 1730       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::158   10.0.1.146   ready   
	 2797       Disabled           Disabled          30231      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::1f9   10.0.1.35    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 3158       Disabled           Disabled          28769      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::1b1   10.0.1.191   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s7fc4 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.3) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-fe104be)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      43/43 healthy
	 Proxy Status:           OK, ip 10.0.0.8, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 714/4095 (17.44%), Flows/s: 11.72   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-24T09:56:42Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-s7fc4 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 16         Disabled           Disabled          55868      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::c0   10.0.0.141   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                      
	                                                            k8s:k8s-app=kube-dns                                                                                                                                             
	 435        Disabled           Disabled          21747      k8s:app=grafana                                                                                                                  fd02::8b   10.0.0.4     ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 551        Disabled           Disabled          30231      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::9b   10.0.0.170   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 657        Disabled           Disabled          22113      k8s:app=prometheus                                                                                                               fd02::47   10.0.0.202   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                                                                 
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                                                           
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                                                
	 855        Disabled           Disabled          4          reserved:health                                                                                                                  fd02::31   10.0.0.232   ready   
	 998        Enabled            Disabled          21972      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::86   10.0.0.77    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 1327       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
09:57:15 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
09:57:15 STEP: Deleting deployment demo_ds.yaml
09:57:15 STEP: Deleting namespace 202111240956k8sdatapathconfigencapsulationcheckiptablesmasquera
09:57:31 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|6609f4a6_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//239/artifact/6609f4a6_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19//239/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.19_239_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.19/239/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
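
For context on what "random-fully" changes: it asks the kernel's MASQUERADE target to fully randomize the source port chosen for each SNATed connection, which avoids port-collision races when many pods masquerade behind the same node IP. A sketch of how this mode is switched on follows; the iptablesRandomFully value name is taken from this test's deployment options and may differ across Cilium/Helm chart versions, so treat it as an assumption:

    # Deploy Cilium with fully-randomized SNAT port allocation (value name
    # assumed from the CI test options; verify against your chart version):
    helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
      --set iptablesRandomFully=true

    # Each node should then carry a masquerade rule roughly of the form:
    #   -A CILIUM_POST_nat ... -j MASQUERADE --random-fully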

@maintainer-s-little-helper

PR #18041 hit this flake with 89.67% similarity:

Click to show.

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-tbgtn" can not connect to "http://google.com"

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Pod "testclient-tbgtn" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-tbgtn -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.012041()', Connect: '0.000000',Transfer '0.000000', total '2.215392'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:273

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-gncq2 cilium-wtvzx]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
test-k8s2-7f96d84c65-qkwqf             
testclient-gq7rb                       
testclient-tbgtn                       
testds-9hldj                           
testds-q4w5v                           
coredns-69b675786c-dv4xw               
Cilium agent 'cilium-gncq2': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 30 Failed 0
Cilium agent 'cilium-wtvzx': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0


Standard Error

Click to show.
09:43:09 STEP: Installing Cilium
09:43:12 STEP: Waiting for Cilium to become ready
09:43:40 STEP: Validating if Kubernetes DNS is deployed
09:43:40 STEP: Checking if deployment is ready
09:43:40 STEP: Checking if kube-dns service is plumbed correctly
09:43:40 STEP: Checking if pods have identity
09:43:40 STEP: Checking if DNS can resolve
09:43:42 STEP: Kubernetes DNS is not ready: %!s(<nil>)
09:43:42 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
09:43:58 STEP: Waiting for Kubernetes DNS to become operational
09:43:58 STEP: Checking if deployment is ready
09:43:58 STEP: Checking if kube-dns service is plumbed correctly
09:43:58 STEP: Checking if pods have identity
09:43:58 STEP: Checking if DNS can resolve
09:43:58 STEP: Validating Cilium Installation
09:43:58 STEP: Performing Cilium controllers preflight check
09:43:58 STEP: Performing Cilium health check
09:43:58 STEP: Performing Cilium status preflight check
09:44:00 STEP: Performing Cilium service preflight check
09:44:00 STEP: Performing K8s service preflight check
09:44:01 STEP: Waiting for cilium-operator to be ready
09:44:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:44:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
09:44:01 STEP: Making sure all endpoints are in ready state
09:44:02 STEP: Creating namespace 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera
09:44:02 STEP: Deploying demo_ds.yaml in namespace 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera
09:44:03 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
09:44:06 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
09:44:06 STEP: WaitforNPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
09:44:06 STEP: WaitforNPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
09:44:06 STEP: Checking pod connectivity between nodes
09:44:06 STEP: WaitforPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
09:44:06 STEP: WaitforPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
09:44:07 STEP: WaitforPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
09:44:07 STEP: WaitforPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
09:44:11 STEP: Test iptables masquerading
09:44:11 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
09:44:12 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
09:44:12 STEP: WaitforNPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
09:44:12 STEP: WaitforNPods(namespace="202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
09:44:12 STEP: Making ten curl requests from "testclient-gq7rb" to "http://google.com"
09:44:13 STEP: Making ten curl requests from "testclient-tbgtn" to "http://google.com"
FAIL: Pod "testclient-tbgtn" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-tbgtn -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.012041()', Connect: '0.000000',Transfer '0.000000', total '2.215392'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2021-11-30T09:44:17Z====
09:44:17 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
09:44:17 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-7f96d84c65-qkwqf         2/2     Running   0          15s   10.0.1.138      k8s2   <none>           <none>
	 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-gq7rb                   1/1     Running   0          15s   10.0.0.233      k8s1   <none>           <none>
	 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-tbgtn                   1/1     Running   0          15s   10.0.1.81       k8s2   <none>           <none>
	 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-9hldj                       2/2     Running   0          15s   10.0.0.225      k8s1   <none>           <none>
	 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-q4w5v                       2/2     Running   0          15s   10.0.1.15       k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-5747bcc8f9-bqjfb           0/1     Running   0          19m   10.0.0.216      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-4q4pt        1/1     Running   0          19m   10.0.0.31       k8s1   <none>           <none>
	 kube-system                                                       cilium-gncq2                       1/1     Running   0          66s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-7c69cc79d4-lx48n   1/1     Running   0          66s   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-7c69cc79d4-sps48   1/1     Running   0          66s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-wtvzx                       1/1     Running   0          66s   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       coredns-69b675786c-dv4xw           1/1     Running   0          36s   10.0.1.69       k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          23m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          23m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          23m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-jbkh6                   1/1     Running   0          23m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-vw4mr                   1/1     Running   0          20m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          23m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-qjkdq                 1/1     Running   0          19m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-stvht                 1/1     Running   0          19m   192.168.56.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-p9rn6               1/1     Running   0          20m   192.168.56.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-sjtrt               1/1     Running   0          20m   192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-gncq2 cilium-wtvzx]
cmd: kubectl exec -n kube-system cilium-gncq2 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.3) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.56.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.11.90 (v1.11.90-121f969)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      30/30 healthy
	 Proxy Status:           OK, ip 10.0.0.235, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 778/4095 (19.00%), Flows/s: 13.59   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-30T09:44:00Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-gncq2 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                           
	 1205       Enabled            Disabled          1846       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::9a   10.0.0.225   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                  
	                                                            k8s:zgroup=testDS                                                                                                                                                
	 2838       Disabled           Disabled          2738       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::98   10.0.0.233   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                         
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                  
	                                                            k8s:zgroup=testDSClient                                                                                                                                          
	 3688       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::91   10.0.0.184   ready   
	 4012       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                                                                        
	                                                            k8s:node-role.kubernetes.io/master                                                                                                                               
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                                                                      
	                                                            reserved:host                                                                                                                                                    
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wtvzx -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.21 (v1.21.3) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.56.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.11.90 (v1.11.90-121f969)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      38/38 healthy
	 Proxy Status:           OK, ip 10.0.1.10, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 950/4095 (23.20%), Flows/s: 15.45   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-30T09:44:01Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-wtvzx -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                                                      IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                                                            
	 1410       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                                                ready   
	                                                            reserved:host                                                                                                                                                     
	 3239       Disabled           Disabled          2738       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::1ac   10.0.1.81    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=testDSClient                                                                                                                                           
	 3578       Disabled           Disabled          49982      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                                       fd02::1fd   10.0.1.69    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                                                       
	                                                            k8s:k8s-app=kube-dns                                                                                                                                              
	 3909       Disabled           Disabled          4          reserved:health                                                                                                                  fd02::1d2   10.0.1.180   ready   
	 3944       Disabled           Disabled          15537      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::1fc   10.0.1.138   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=test-k8s2                                                                                                                                              
	 3961       Enabled            Disabled          1846       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera   fd02::13e   10.0.1.15    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera                                                                   
	                                                            k8s:zgroup=testDS                                                                                                                                                 
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
09:44:35 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
09:44:35 STEP: Deleting deployment demo_ds.yaml
09:44:36 STEP: Deleting namespace 202111300944k8sdatapathconfigencapsulationcheckiptablesmasquera
09:44:50 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|d2429973_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4//25/artifact/d2429973_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4//25/artifact/test_results_Cilium-PR-K8s-1.21-kernel-5.4_25_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-5.4/25/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
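
Debugging note: since this test only exercises iptables masquerading with --random-fully, it is worth confirming the rule was actually installed on the node hosting the failing client before digging into connectivity. A hedged sketch (the grep patterns are generic and exact Cilium chain names vary across versions), using the agent pod for k8s2 from the report above:

kubectl exec -n kube-system cilium-wtvzx -c cilium-agent -- iptables -t nat -S | grep -e MASQUERADE -e random-fully

If no MASQUERADE rule carries --random-fully, the failure is in rule installation rather than in egress connectivity to google.com.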

@maintainer-s-little-helper

PR #17762 hit this flake with 88.79% similarity:

Click to show.

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-fh59g" can not connect to "http://google.com"

Stacktrace

Click to show.
/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Pod "testclient-fh59g" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-fh59g -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: -1 
Err: signal: killed
Stdout:
 	 
Stderr:
 	 

/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:273
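
Triage note: unlike the exit-7 runs above, this occurrence reports Exitcode: -1 with "Err: signal: killed", meaning the kubectl exec process was killed externally (most likely by the harness timeout) rather than curl failing on its own; the empty Stdout and Stderr are consistent with that. A direct re-run (same pod and namespace as this report) separates a curl-side failure from the exec being killed:

kubectl exec -n 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-fh59g -- curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 5 --max-time 20 http://google.com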

Standard Output

Click to show.
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-mchj2 cilium-zgrtb]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                                              Ingress   Egress
konnectivity-agent-autoscaler-6cb774c9cc-b6msf             
metrics-server-v0.3.6-9c5bbf784-mnkv6                      
test-k8s2-79ff876c9d-zr7nf                                 
testds-5gqpv                                               
testclient-fmzc5                                           
kube-dns-b4f5c58c7-drd87                                   
kube-dns-b4f5c58c7-z2vw8                                   
l7-default-backend-56cb9644f6-ssw29                        
konnectivity-agent-96bf468f4-2r4gl                         
konnectivity-agent-96bf468f4-jxt8x                         
event-exporter-gke-67986489c8-9h9kx                        
kube-dns-autoscaler-844c9d9448-r2td2                       
testclient-fh59g                                           
testds-vq9ms                                               
Cilium agent 'cilium-mchj2': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 64 Failed 0
Cilium agent 'cilium-zgrtb': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0


Standard Error

Click to show.
14:13:25 STEP: Installing Cilium
14:13:35 STEP: Waiting for Cilium to become ready
14:14:31 STEP: Validating if Kubernetes DNS is deployed
14:14:31 STEP: Checking if deployment is ready
14:14:31 STEP: Checking if kube-dns service is plumbed correctly
14:14:31 STEP: Checking if pods have identity
14:14:31 STEP: Checking if DNS can resolve
14:14:34 STEP: Kubernetes DNS is up and operational
14:14:34 STEP: Validating Cilium Installation
14:14:34 STEP: Performing Cilium status preflight check
14:14:34 STEP: Performing Cilium controllers preflight check
14:14:34 STEP: Performing Cilium health check
14:14:38 STEP: Performing Cilium service preflight check
14:14:38 STEP: Performing K8s service preflight check
14:14:38 STEP: Waiting for cilium-operator to be ready
14:14:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:14:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:14:38 STEP: WaitforPods(namespace="kube-system", filter="")
14:14:39 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
14:14:39 STEP: Making sure all endpoints are in ready state
14:14:41 STEP: Creating namespace 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera
14:14:41 STEP: Deploying demo_ds.yaml in namespace 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera
14:14:46 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:14:51 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:14:51 STEP: WaitforNPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
14:14:55 STEP: WaitforNPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
14:14:55 STEP: Checking pod connectivity between nodes
14:14:55 STEP: WaitforPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
14:14:56 STEP: WaitforPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
14:14:56 STEP: WaitforPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
14:14:56 STEP: WaitforPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
14:15:02 STEP: Test iptables masquerading
14:15:02 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:15:04 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:15:04 STEP: WaitforNPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
14:15:05 STEP: WaitforNPods(namespace="202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
14:15:05 STEP: Making ten curl requests from "testclient-fh59g" to "http://google.com"
FAIL: Pod "testclient-fh59g" can not connect to "http://google.com"
Expected command: kubectl exec -n 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-fh59g -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: -1 
Err: signal: killed
Stdout:
 	 
Stderr:
 	 

=== Test Finished at 2021-11-30T14:19:05Z====
14:19:05 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:19:06 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                                                     READY   STATUS    RESTARTS   AGE     IP             NODE                                          NOMINATED NODE   READINESS GATES
	 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-zr7nf                               2/2     Running   0          4m28s   10.120.1.123   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-fh59g                                         1/1     Running   0          4m28s   10.120.2.1     gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-fmzc5                                         1/1     Running   0          4m28s   10.120.1.149   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-5gqpv                                             2/2     Running   0          4m28s   10.120.1.7     gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-vq9ms                                             2/2     Running   0          4m28s   10.120.2.125   gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-9drnw                                  0/1     Running   0          77m     10.120.2.7     gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-b9w75                              1/1     Running   0          77m     10.120.2.8     gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       cilium-mchj2                                             1/1     Running   0          5m37s   10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       cilium-node-init-dw8gq                                   1/1     Running   0          5m36s   10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       cilium-node-init-k7pj2                                   1/1     Running   0          5m37s   10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       cilium-operator-86dd47dfd7-8gcqc                         1/1     Running   0          5m36s   10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       cilium-operator-86dd47dfd7-q2qxv                         1/1     Running   0          5m36s   10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       cilium-zgrtb                                             1/1     Running   0          5m37s   10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       event-exporter-gke-67986489c8-9h9kx                      2/2     Running   0          44m     10.120.1.170   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       fluentbit-gke-p8wjb                                      2/2     Running   0          78m     10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       fluentbit-gke-x5lfl                                      2/2     Running   0          78m     10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       gke-metrics-agent-cqmq2                                  1/1     Running   0          78m     10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       gke-metrics-agent-t288z                                  1/1     Running   0          78m     10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       konnectivity-agent-96bf468f4-2r4gl                       1/1     Running   0          44m     10.120.1.83    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       konnectivity-agent-96bf468f4-jxt8x                       1/1     Running   0          44m     10.120.2.230   gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       konnectivity-agent-autoscaler-6cb774c9cc-b6msf           1/1     Running   0          44m     10.120.1.200   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       kube-dns-autoscaler-844c9d9448-r2td2                     1/1     Running   0          44m     10.120.1.103   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       kube-dns-b4f5c58c7-drd87                                 4/4     Running   0          11m     10.120.2.83    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       kube-dns-b4f5c58c7-z2vw8                                 4/4     Running   0          11m     10.120.2.192   gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   1/1     Running   0          78m     10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       kube-proxy-gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   1/1     Running   0          78m     10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       l7-default-backend-56cb9644f6-ssw29                      1/1     Running   0          44m     10.120.1.249   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       log-gatherer-dxsnf                                       1/1     Running   0          77m     10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       log-gatherer-gljhn                                       1/1     Running   0          77m     10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       metrics-server-v0.3.6-9c5bbf784-mnkv6                    2/2     Running   0          44m     10.120.1.100   gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 kube-system                                                       pdcsi-node-dgn6j                                         2/2     Running   0          78m     10.128.0.24    gke-cilium-ci-14-cilium-ci-14-bedd472f-hrfs   <none>           <none>
	 kube-system                                                       pdcsi-node-dlsvv                                         2/2     Running   0          78m     10.128.0.25    gke-cilium-ci-14-cilium-ci-14-bedd472f-qdrl   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-mchj2 cilium-zgrtb]
cmd: kubectl exec -n kube-system cilium-mchj2 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.20+ (v1.20.10-gke.1600) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.25 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.11.90 (v1.11.90-cd25bc1)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 11/254 allocated from 10.120.1.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      64/64 healthy
	 Proxy Status:           OK, ip 10.120.1.12, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 1925/4095 (47.01%), Flows/s: 5.51   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-30T14:18:22Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mchj2 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4           STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                          
	 129        Disabled           Disabled          9243       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.83    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=konnectivity-agent                                                                                                  
	 305        Disabled           Disabled          49093      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.149   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera                                 
	                                                            k8s:zgroup=testDSClient                                                                                                         
	 484        Disabled           Disabled          26375      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.200   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent-cpha                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=konnectivity-agent-autoscaler                                                                                       
	 568        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                              ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                  
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                           
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-14                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                    
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                          
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                              
	                                                            k8s:topology.gke.io/zone=us-west1-c                                                                                             
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                      
	                                                            k8s:topology.kubernetes.io/zone=us-west1-c                                                                                      
	                                                            reserved:host                                                                                                                   
	 729        Disabled           Disabled          15791      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.103   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=kube-dns-autoscaler                                                                                                 
	 860        Disabled           Disabled          64096      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.100   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=metrics-server                                                                          
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=metrics-server                                                                                                      
	                                                            k8s:version=v0.3.6                                                                                                              
	 1054       Disabled           Disabled          4          reserved:health                                                                                          10.120.1.30    ready   
	 1812       Disabled           Disabled          44771      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.249   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=glbc                                                                                                                
	                                                            k8s:name=glbc                                                                                                                   
	 2039       Disabled           Disabled          22823      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.123   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera                                 
	                                                            k8s:zgroup=test-k8s2                                                                                                            
	 2523       Enabled            Disabled          39691      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.7     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera                                 
	                                                            k8s:zgroup=testDS                                                                                                               
	 3876       Disabled           Disabled          63005      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.1.170   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=event-exporter-sa                                                                       
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=event-exporter                                                                                                      
	                                                            k8s:version=v0.3.4                                                                                                              
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zgrtb -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.20+ (v1.20.10-gke.1600) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [eth0 10.128.0.24 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.11.90 (v1.11.90-cd25bc1)
	 NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 7/254 allocated from 10.120.2.0/24, 
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      45/45 healthy
	 Proxy Status:           OK, ip 10.120.2.96, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 3584/4095 (87.52%), Flows/s: 11.26   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-11-30T14:17:13Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-zgrtb -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6   IPv4           STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                          
	 450        Disabled           Disabled          49093      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.2.1     ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera                                 
	                                                            k8s:zgroup=testDSClient                                                                                                         
	 616        Disabled           Disabled          4          reserved:health                                                                                          10.120.2.205   ready   
	 691        Disabled           Disabled          9243       k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.2.230   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent                                                                      
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=konnectivity-agent                                                                                                  
	 763        Disabled           Disabled          16427      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.2.192   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=kube-dns                                                                                                            
	 2168       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                              ready   
	                                                            k8s:cloud.google.com/gke-boot-disk=pd-standard                                                                                  
	                                                            k8s:cloud.google.com/gke-container-runtime=containerd                                                                           
	                                                            k8s:cloud.google.com/gke-nodepool=cilium-ci-14                                                                                  
	                                                            k8s:cloud.google.com/gke-os-distribution=cos                                                                                    
	                                                            k8s:cloud.google.com/machine-family=n1                                                                                          
	                                                            k8s:node.kubernetes.io/instance-type=n1-standard-4                                                                              
	                                                            k8s:topology.gke.io/zone=us-west1-c                                                                                             
	                                                            k8s:topology.kubernetes.io/region=us-west1                                                                                      
	                                                            k8s:topology.kubernetes.io/zone=us-west1-c                                                                                      
	                                                            reserved:host                                                                                                                   
	 2351       Disabled           Disabled          16427      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.2.83    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=kube-dns                                                                                
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                     
	                                                            k8s:k8s-app=kube-dns                                                                                                            
	 2912       Enabled            Disabled          39691      k8s:io.cilium.k8s.policy.cluster=default                                                                 10.120.2.125   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                 
	                                                            k8s:io.kubernetes.pod.namespace=202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera                                 
	                                                            k8s:zgroup=testDS                                                                                                               
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
14:19:49 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:19:49 STEP: Deleting deployment demo_ds.yaml
14:19:59 STEP: Deleting namespace 202111301414k8sdatapathconfigencapsulationcheckiptablesmasquera
14:20:19 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|ac9ad29e_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7061/artifact/src/github.com/cilium/cilium/ac9ad29e_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7061/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_7061_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/7061/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@joestringer added this to To quarantine in 1.11 CI via automation Dec 1, 2021
@joestringer moved this from To quarantine to Unassigned in 1.11 CI Dec 1, 2021
@borkmann added the sig/datapath label Dec 6, 2021
@borkmann self-assigned this Dec 8, 2021
@borkmann moved this from Unassigned (Datapath) to To triage in 1.11 CI Dec 8, 2021
@aanm added the area/CI label Jan 6, 2022
@maintainer-s-little-helper (Author)

PR #19062 hit this flake with 91.08% similarity:

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-p55fz" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Pod "testclient-p55fz" can not connect to "http://google.com"
Expected command: kubectl exec -n 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-p55fz -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.008645()', Connect: '0.000000',Transfer '0.000000', total '2.238399'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:277

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Session affinity for host reachable services needs kernel 5.7.0 or newer to work properly when accessed from inside cluster: the same service endpoint will be selected from all network namespaces on the host.
Cilium pods: [cilium-hcl4g cilium-z886w]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
prometheus-655fb888d7-mznc6             
coredns-7c74c644b-sfjjx                 
test-k8s2-79ff876c9d-8nqkd              
testclient-648rp                        
testclient-p55fz                        
testds-9xklv                            
testds-d25b9                            
grafana-d69c97b9b-p7hsv                 
Cilium agent 'cilium-hcl4g': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 47 Failed 0
Cilium agent 'cilium-z886w': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0


Standard Error

12:49:49 STEP: Running BeforeEach block for EntireTestsuite K8sDatapathConfig Encapsulation
12:49:49 STEP: Installing Cilium
12:49:51 STEP: Waiting for Cilium to become ready
12:50:42 STEP: Validating if Kubernetes DNS is deployed
12:50:42 STEP: Checking if deployment is ready
12:50:42 STEP: Checking if kube-dns service is plumbed correctly
12:50:42 STEP: Checking if DNS can resolve
12:50:42 STEP: Checking if pods have identity
12:50:43 STEP: Kubernetes DNS is not ready: %!s(<nil>)
12:50:43 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
12:50:43 STEP: Waiting for Kubernetes DNS to become operational
12:50:43 STEP: Checking if deployment is ready
12:50:43 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:50:44 STEP: Checking if deployment is ready
12:50:44 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:50:45 STEP: Checking if deployment is ready
12:50:45 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:50:46 STEP: Checking if deployment is ready
12:50:46 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:50:47 STEP: Checking if deployment is ready
12:50:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:50:48 STEP: Checking if deployment is ready
12:50:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
12:50:49 STEP: Checking if deployment is ready
12:50:49 STEP: Checking if kube-dns service is plumbed correctly
12:50:49 STEP: Checking if DNS can resolve
12:50:49 STEP: Checking if pods have identity
12:50:50 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist
12:50:50 STEP: Checking if deployment is ready
12:50:50 STEP: Checking if kube-dns service is plumbed correctly
12:50:50 STEP: Checking if pods have identity
12:50:50 STEP: Checking if DNS can resolve
12:50:51 STEP: Validating Cilium Installation
12:50:51 STEP: Performing Cilium controllers preflight check
12:50:51 STEP: Performing Cilium health check
12:50:51 STEP: Performing Cilium status preflight check
12:50:52 STEP: Performing Cilium service preflight check
12:50:52 STEP: Performing K8s service preflight check
12:50:54 STEP: Waiting for cilium-operator to be ready
12:50:54 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
12:50:54 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
12:50:54 STEP: Making sure all endpoints are in ready state
12:50:55 STEP: Creating namespace 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera
12:50:55 STEP: Deploying demo_ds.yaml in namespace 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera
12:50:56 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
12:50:58 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
12:50:58 STEP: WaitforNPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
12:51:05 STEP: WaitforNPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
12:51:05 STEP: Checking pod connectivity between nodes
12:51:05 STEP: WaitforPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
12:51:05 STEP: WaitforPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
12:51:05 STEP: WaitforPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
12:51:05 STEP: WaitforPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
12:51:10 STEP: Test iptables masquerading
12:51:10 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-5.4/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
12:51:11 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
12:51:11 STEP: WaitforNPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
12:51:11 STEP: WaitforNPods(namespace="202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
12:51:11 STEP: Making ten curl requests from "testclient-648rp" to "http://google.com"
12:51:12 STEP: Making ten curl requests from "testclient-p55fz" to "http://google.com"
FAIL: Pod "testclient-p55fz" can not connect to "http://google.com"
Expected command: kubectl exec -n 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-p55fz -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.008645()', Connect: '0.000000',Transfer '0.000000', total '2.238399'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2022-03-11T12:51:15Z====
12:51:15 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
12:51:16 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-79ff876c9d-8nqkd        2/2     Running   0          22s   10.0.1.136      k8s2   <none>           <none>
	 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-648rp                  1/1     Running   0          22s   10.0.1.70       k8s2   <none>           <none>
	 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-p55fz                  1/1     Running   0          22s   10.0.0.28       k8s1   <none>           <none>
	 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-9xklv                      2/2     Running   0          22s   10.0.1.185      k8s2   <none>           <none>
	 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-d25b9                      2/2     Running   0          22s   10.0.0.59       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-d69c97b9b-p7hsv           1/1     Running   0          12m   10.0.1.99       k8s2   <none>           <none>
	 cilium-monitoring                                                 prometheus-655fb888d7-mznc6       1/1     Running   0          12m   10.0.1.100      k8s2   <none>           <none>
	 kube-system                                                       cilium-hcl4g                      1/1     Running   0          86s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6f87dfcb6-vlqps   1/1     Running   0          86s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-6f87dfcb6-w72df   1/1     Running   0          86s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-z886w                      1/1     Running   0          86s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-7c74c644b-sfjjx           1/1     Running   0          34s   10.0.1.107      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                         1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1               1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1      1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-mv7xz                  1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-proxy-vlk4q                  1/1     Running   0          13m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1               1/1     Running   0          15m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-k68xm                1/1     Running   0          12m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-mcs8r                1/1     Running   0          12m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-2qxkh              1/1     Running   0          13m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-4vkxw              1/1     Running   0          13m   192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-hcl4g cilium-z886w]
cmd: kubectl exec -n kube-system cilium-hcl4g -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.16) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Cilium:                 Ok   1.10.8 (v1.10.8-f64a507)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 8/254 allocated from 10.0.1.0/24, IPv6: 8/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      47/47 healthy
	 Proxy Status:           OK, ip 10.0.1.229, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 887/4095 (21.66%), Flows/s: 10.24   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2022-03-11T12:50:52Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-hcl4g -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 95         Disabled           Disabled          5106       k8s:app=grafana                                                                                   fd02::195   10.0.1.99    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                  
	 248        Disabled           Disabled          30045      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1f4   10.0.1.136   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 535        Disabled           Disabled          59138      k8s:app=prometheus                                                                                fd02::1de   10.0.1.100   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                  
	 564        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 1248       Disabled           Disabled          15690      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1cf   10.0.1.70    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 1391       Disabled           Disabled          4          reserved:health                                                                                   fd02::159   10.0.1.218   ready   
	 1679       Enabled            Disabled          12473      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::181   10.0.1.185   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 2672       Disabled           Disabled          38195      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1c0   10.0.1.107   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-z886w -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.19 (v1.19.16) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Cilium:                 Ok   1.10.8 (v1.10.8-f64a507)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      29/29 healthy
	 Proxy Status:           OK, ip 10.0.0.94, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 693/4095 (16.92%), Flows/s: 7.38   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2022-03-11T12:50:54Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-z886w -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4        STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                           
	 267        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                               ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                               
	                                                            reserved:host                                                                                                                    
	 1044       Disabled           Disabled          15690      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::b7   10.0.0.28   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera                                  
	                                                            k8s:zgroup=testDSClient                                                                                                          
	 1671       Disabled           Disabled          4          reserved:health                                                                                   fd02::77   10.0.0.2    ready   
	 3820       Enabled            Disabled          12473      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::e3   10.0.0.59   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                  
	                                                            k8s:io.kubernetes.pod.namespace=202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera                                  
	                                                            k8s:zgroup=testDS                                                                                                                
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
12:51:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
12:51:33 STEP: Deleting deployment demo_ds.yaml
12:51:34 STEP: Deleting namespace 202203111250k8sdatapathconfigencapsulationcheckiptablesmasquera
12:51:49 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|fbf63a34_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//1427/artifact/fbf63a34_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4//1427/artifact/test_results_Cilium-PR-K8s-1.19-kernel-5.4_1427_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-5.4/1427/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper (Author)

PR #20937 hit this flake with 87.91% similarity:

Test Name

K8sDatapathConfig Encapsulation Check iptables masquerading with random-fully

Failure Output

FAIL: Pod "testclient-zkng9" can not connect to "http://google.com"

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
Pod "testclient-zkng9" can not connect to "http://google.com"
Expected command: kubectl exec -n 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-zkng9 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.012539()', Connect: '0.000000',Transfer '0.000000', total '2.229305'
Stderr:
 	 command terminated with exit code 7
	 

/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:277

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-2jkth cilium-5c525]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
testds-d4csf                           
grafana-7fd557d749-jgg55               
prometheus-d87f8f984-rpwk5             
coredns-8cfc78c54-dxx5z                
test-k8s2-5b756fd6c5-8vppz             
testclient-jxr9t                       
testclient-zkng9                       
testds-7c7mp                           
Cilium agent 'cilium-2jkth': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0
Cilium agent 'cilium-5c525': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 36 Failed 0


Standard Error

10:32:37 STEP: Running BeforeEach block for EntireTestsuite K8sDatapathConfig Encapsulation
10:32:37 STEP: Installing Cilium
10:32:38 STEP: Waiting for Cilium to become ready
10:33:28 STEP: Validating if Kubernetes DNS is deployed
10:33:28 STEP: Checking if deployment is ready
10:33:28 STEP: Checking if kube-dns service is plumbed correctly
10:33:28 STEP: Checking if pods have identity
10:33:28 STEP: Checking if DNS can resolve
10:33:29 STEP: Kubernetes DNS is not ready: %!s(<nil>)
10:33:29 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
10:33:29 STEP: Waiting for Kubernetes DNS to become operational
10:33:29 STEP: Checking if deployment is ready
10:33:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:33:30 STEP: Checking if deployment is ready
10:33:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:33:31 STEP: Checking if deployment is ready
10:33:31 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:33:32 STEP: Checking if deployment is ready
10:33:32 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:33:33 STEP: Checking if deployment is ready
10:33:33 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:33:34 STEP: Checking if deployment is ready
10:33:34 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
10:33:35 STEP: Checking if deployment is ready
10:33:35 STEP: Checking if kube-dns service is plumbed correctly
10:33:35 STEP: Checking if pods have identity
10:33:35 STEP: Checking if DNS can resolve
10:33:36 STEP: Validating Cilium Installation
10:33:36 STEP: Performing Cilium controllers preflight check
10:33:36 STEP: Performing Cilium health check
10:33:36 STEP: Performing Cilium status preflight check
10:33:36 STEP: Checking whether host EP regenerated
10:33:37 STEP: Performing Cilium service preflight check
10:33:37 STEP: Performing K8s service preflight check
10:33:37 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-2jkth': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

10:33:37 STEP: Performing Cilium controllers preflight check
10:33:37 STEP: Performing Cilium status preflight check
10:33:37 STEP: Performing Cilium health check
10:33:37 STEP: Checking whether host EP regenerated
10:33:38 STEP: Performing Cilium service preflight check
10:33:38 STEP: Performing K8s service preflight check
10:33:38 STEP: Performing Cilium controllers preflight check
10:33:38 STEP: Performing Cilium status preflight check
10:33:38 STEP: Checking whether host EP regenerated
10:33:38 STEP: Performing Cilium health check
10:33:39 STEP: Performing Cilium service preflight check
10:33:39 STEP: Performing K8s service preflight check
10:33:41 STEP: Waiting for cilium-operator to be ready
10:33:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
10:33:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
10:33:41 STEP: Making sure all endpoints are in ready state
10:33:42 STEP: Creating namespace 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera
10:33:42 STEP: Deploying demo_ds.yaml in namespace 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera
10:33:42 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
10:33:44 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
10:33:44 STEP: WaitforNPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
10:33:54 STEP: WaitforNPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
10:33:54 STEP: Checking pod connectivity between nodes
10:33:54 STEP: WaitforPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient")
10:33:54 STEP: WaitforPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDSClient") => <nil>
10:33:54 STEP: WaitforPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS")
10:33:54 STEP: WaitforPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="-l zgroup=testDS") => <nil>
10:33:59 STEP: Test iptables masquerading
10:33:59 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
10:33:59 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
10:33:59 STEP: WaitforNPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="")
10:33:59 STEP: WaitforNPods(namespace="202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera", filter="") => <nil>
10:33:59 STEP: Making ten curl requests from "testclient-jxr9t" to "http://google.com"
10:34:01 STEP: Making ten curl requests from "testclient-zkng9" to "http://google.com"
FAIL: Pod "testclient-zkng9" can not connect to "http://google.com"
Expected command: kubectl exec -n 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera testclient-zkng9 -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://google.com -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'" --retry 10 
To succeed, but it failed:
Exitcode: 7 
Err: exit status 7
Stdout:
 	 time-> DNS: '0.012539()', Connect: '0.000000',Transfer '0.000000', total '2.229305'
Stderr:
 	 command terminated with exit code 7
	 

=== Test Finished at 2022-08-17T10:34:05Z====
10:34:05 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
10:34:05 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
	 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera   test-k8s2-5b756fd6c5-8vppz         2/2     Running   0          25s   10.0.1.198      k8s2   <none>           <none>
	 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-jxr9t                   1/1     Running   0          25s   10.0.1.151      k8s2   <none>           <none>
	 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera   testclient-zkng9                   1/1     Running   0          25s   10.0.0.203      k8s1   <none>           <none>
	 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-7c7mp                       2/2     Running   0          25s   10.0.1.137      k8s2   <none>           <none>
	 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera   testds-d4csf                       2/2     Running   0          25s   10.0.0.94       k8s1   <none>           <none>
	 cilium-monitoring                                                 grafana-7fd557d749-jgg55           1/1     Running   0          26m   10.0.0.55       k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-d87f8f984-rpwk5         1/1     Running   0          26m   10.0.0.1        k8s1   <none>           <none>
	 kube-system                                                       cilium-2jkth                       1/1     Running   0          89s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-5c525                       1/1     Running   0          89s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-756b8548c4-nx88b   1/1     Running   0          89s   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       cilium-operator-756b8548c4-rm7kn   1/1     Running   0          89s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       coredns-8cfc78c54-dxx5z            1/1     Running   0          38s   10.0.1.190      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          34m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          34m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          34m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          34m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-9pjl9                 1/1     Running   0          27m   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       log-gatherer-f7jsb                 1/1     Running   0          27m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-wcs7r                 1/1     Running   0          27m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-229cc               1/1     Running   0          27m   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       registry-adder-rs6zf               1/1     Running   0          27m   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-wqcqm               1/1     Running   0          27m   192.168.36.13   k8s3   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-2jkth cilium-5c525]
cmd: kubectl exec -n kube-system cilium-2jkth -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.16 (v1.16.15) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Cilium:                 Ok   1.10.14 (v1.10.14-54ee3c8)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      38/38 healthy
	 Proxy Status:           OK, ip 10.0.0.156, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 1095/4095 (26.74%), Flows/s: 11.85   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2022-08-17T10:33:39Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-2jkth -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 48         Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                                
	                                                            reserved:host                                                                                                                     
	 362        Enabled            Disabled          14209      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::5d   10.0.0.94    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 567        Disabled           Disabled          4          reserved:health                                                                                   fd02::eb   10.0.0.104   ready   
	 576        Disabled           Disabled          8966       k8s:app=grafana                                                                                   fd02::6    10.0.0.55    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 1272       Disabled           Disabled          39621      k8s:app=prometheus                                                                                fd02::69   10.0.0.1     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 3839       Disabled           Disabled          48646      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::2b   10.0.0.203   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5c525 -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.16 (v1.16.15) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Cilium:                 Ok   1.10.14 (v1.10.14-54ee3c8)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           IPTables [IPv4: Enabled, IPv6: Disabled]
	 Controller Status:      36/36 healthy
	 Proxy Status:           OK, ip 10.0.1.220, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 1103/4095 (26.94%), Flows/s: 15.52   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2022-08-17T10:33:39Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5c525 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 794        Disabled           Disabled          2301       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::138   10.0.1.190   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 796        Enabled            Disabled          14209      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::107   10.0.1.137   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 1089       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 1698       Disabled           Disabled          48646      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1bd   10.0.1.151   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 2134       Disabled           Disabled          5012       k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::161   10.0.1.198   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 3141       Disabled           Disabled          4          reserved:health                                                                                   fd02::19e   10.0.1.221   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
10:34:19 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
10:34:19 STEP: Deleting deployment demo_ds.yaml
10:34:19 STEP: Deleting namespace 202208171033k8sdatapathconfigencapsulationcheckiptablesmasquera
10:34:32 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|d772b98c_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//2464/artifact/d772b98c_K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random-fully.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//2464/artifact/test_results_Cilium-PR-K8s-1.16-net-next_2464_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/2464/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@pchaigno (Member)

Taking run https://jenkins.cilium.io/job/cilium-master-k8s-1.24-kernel-5.4-quarantine/410/testReport/junit/Suite-k8s-1/24/K8sDatapathConfig_Encapsulation_Check_iptables_masquerading_with_random_fully/ to debug this.
Sysdump:

Tracing the Packet

We can see the IPv6 packets failing to be routed off the node; the host answers the first SYN with an ICMPv6 NoRouteToDst error:

Dec 10 05:01:35.294: [fd02::1b]:45982 (ID:9883) -> [2607:f8b0:4023:1000::71]:80 (world) to-stack FORWARDED (TCP Flags: SYN)
Dec 10 05:01:35.294: fd04::11 (host) -> fd02::1b (ID:9883) to-endpoint FORWARDED (ICMPv6 DestinationUnreachable(NoRouteToDst))
Dec 10 05:01:36.316: [fd02::1b]:35414 (ID:9883) -> [2607:f8b0:4023:1000::8a]:80 (world) to-stack FORWARDED (TCP Flags: SYN)
Dec 10 05:01:37.791: [fd02::1b]:52508 (ID:9883) -> [2607:f8b0:4023:1000::65]:80 (world) to-stack FORWARDED (TCP Flags: SYN)
Dec 10 05:01:39.040: [fd02::1b]:58852 (ID:9883) -> [2607:f8b0:4023:1000::8b]:80 (world) to-stack FORWARDED (TCP Flags: SYN)
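
Flows like these can be streamed live from the agent while the test runs; a minimal sketch, assuming the Hubble CLI is enabled in the agent image (the pod names below are placeholders, and flag names may differ between Hubble versions):

# Watch the client pod's flows to catch the ICMPv6 NoRouteToDst reply.
$ kubectl exec -n kube-system <cilium-agent-pod> -- \
    hubble observe --follow --pod <test-namespace>/<testclient-pod>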

Routing Failure

Running the test locally, we can see that the packet should have been routed as follows:

$ ip -6 r get 2a00:1450:4007:808::200e
2a00:1450:4007:808::200e from :: via fd17:625c:f037:2::1 dev enp0s16 src fd17:625c:f037:2:a00:27ff:fee6:d2a3 metric 1024 pref medium

from the default route:

$ ip -6 r | grep default
default via fd17:625c:f037:2::1 dev enp0s16 metric 1024 pref medium

This route is installed by the following provisioning step in cilium/test/Vagrantfile (lines 240 to 243 at commit f59df85):

server.vm.provision "ipv6-nat-config",
  type: "shell",
  run: "always",
  inline: "ip -6 r a default via fd17:625c:f037:2::1 dev enp0s16 || true"

In the Jenkins output, we can see that this provisioning step failed for k8s1:

04:46:37  ==> k8s1-1.24: Running provisioner: ipv6-nat-config (shell)...
04:46:37      k8s1-1.24: Running: inline script
04:46:37      k8s1-1.24: RTNETLINK answers: No route to host

For some reason, the IPv6 NAT interface (enp0s16) is missing its NAT IP address (something like fd17:625c:f037:2:a00:27ff:fee6:d2a3/64). With only a link-local address on the interface, the gateway fd17:625c:f037:2::1 isn't considered on-link, which is why the route insertion fails with "No route to host":

$ cat ip-a.md | grep -A3 enp0s16
6: enp0s16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:dc:95:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.59.15/24 brd 192.168.59.255 scope global enp0s16
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fedc:954e/64 scope link 
       valid_lft forever preferred_lft forever
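
One possible hardening of that provisioning step, sketched under the assumption that the global address normally arrives via SLAAC shortly after the interface comes up, is to wait for the address instead of masking failures with || true:

# Hypothetical replacement for the inline provisioning script: wait up to
# 30s for a global IPv6 address on enp0s16, then install the default route,
# failing provisioning loudly if the address never shows up.
for i in $(seq 1 30); do
  if ip -6 addr show dev enp0s16 scope global | grep -q inet6; then
    ip -6 route add default via fd17:625c:f037:2::1 dev enp0s16
    exit 0
  fi
  sleep 1
done
echo "enp0s16 never got a global IPv6 address" >&2
exit 1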

pchaigno added a commit to pchaigno/cilium that referenced this issue Dec 11, 2022
This commit dumps the natnetwork configuration on the host as part of
the VirtualBox provisioning, to help debug a flake with that device.

Related: cilium#17353
Signed-off-by: Paul Chaignon <paul@cilium.io>
pchaigno added a commit that referenced this issue Dec 12, 2022
This commit dumps the natnetwork configuration on the host as part of
the VirtualBox provisioning, to help debug a flake with that device.

Related: #17353
Signed-off-by: Paul Chaignon <paul@cilium.io>
@pchaigno self-assigned this Dec 12, 2022
@pchaigno added the feature/ipv6 label Dec 12, 2022
@pchaigno (Member)

#22675 dumped the VirtualBox natnetwork configuration on the host during VM provisioning. When the tests pass, we see the following natnetworks:

Name:         natnet1
Network:      192.168.0.0/16
Gateway:      192.168.0.1
DHCP Server:  Yes
IPv6:         Yes
IPv6 Prefix:  fd17:625c:f037:2::/64
IPv6 Default: No
Enabled:      Yes


Name:         natnet2
Network:      192.168.0.0/16
Gateway:      192.168.0.1
DHCP Server:  Yes
IPv6:         Yes
IPv6 Prefix:  fd17:625c:f037:2::/64
IPv6 Default: No
Enabled:      Yes

When the test fails, we see the following instead:

Name:         natnet1
Network:      10.0.2.0/24
Gateway:      10.0.2.1
DHCP Server:  Yes
IPv6:         No
IPv6 Prefix:  fd17:625c:f037:2::/64
IPv6 Default: No
Enabled:      Yes


Name:         natnet2
Network:      192.168.0.0/16
Gateway:      192.168.0.1
DHCP Server:  Yes
IPv6:         Yes
IPv6 Prefix:  fd17:625c:f037:2::/64
IPv6 Default: No
Enabled:      Yes

natnet1 matches k8s1, where the routing issue is happening. So for some reason natnet1's network and gateway are wrong (10.0.2.0/24 instead of 192.168.0.0/16) and its IPv6 support is disabled, even though we explicitly asked for the 192.168.0.0/16 network.
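
To confirm this by hand on an affected host, one could inspect and recreate the natnetwork; a rough sketch, assuming VirtualBox's standard natnetwork subcommands and the settings we expect from the passing runs above:

# Show the current (mis)configuration of natnet1.
$ VBoxManage natnetwork list natnet1
# Recreate it with the requested network and IPv6 enabled.
$ VBoxManage natnetwork remove --netname natnet1
$ VBoxManage natnetwork add --netname natnet1 --network "192.168.0.0/16" \
    --enable --dhcp on --ipv6 on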

@pchaigno (Member)

The next step is to see what effect #22704 has on the quarantined tests. I expect it to mitigate the flake but not remove it completely. We can track this with the DataStudio dashboard.

In case of any failure, we should check the Jenkins log (a quick grep helper is sketched below) to see:

  1. Whether we have any indication as to why the natnetwork has an incorrect Network attribute.
  2. Whether the natnetwork with the incorrect attribute already existed.
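
A possible triage helper, assuming the failed build's console log URL (job name and build number are placeholders; the dump format matches the #22675 output quoted above):

# Extract the natnetwork dumps from a failed run's console log.
$ curl -s "https://jenkins.cilium.io/job/<job-name>/<build>/consoleText" \
    | grep -A7 '^Name:'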

1.11 CI automation moved this from To triage to Evaluate to exit quarantine Jan 25, 2023