CI: Ginkgo panic: Test Panicked at internal/asyncassertion/async_assertion.go:95 #17661

Closed
maintainer-s-little-helper bot opened this issue Oct 21, 2021 · 2 comments
Labels
ci/flake This is a known failure that occurs in the tree. Please investigate me!

Comments

@maintainer-s-little-helper

Test Name

K8sDatapathConfig Encapsulation Check vxlan connectivity with per-endpoint routes

Failure Output


Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
Test Panicked
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:95

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-5dlr8 cilium-5z2mr]
Netpols loaded: 
CiliumNetworkPolicies loaded: 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit::l3-policy-demo 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
testclient-45tf8                       
testclient-cp7cl                       
testds-6flbp                           
testds-h22mt                           
grafana-7fd557d749-z8mhn               
prometheus-d87f8f984-hxz9l             
coredns-5fc58db489-h6smp               
test-k8s2-5b756fd6c5-ndlkk             
Cilium agent 'cilium-5dlr8': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 39 Failed 0
Cilium agent 'cilium-5z2mr': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0


Standard Error

14:18:43 STEP: Installing Cilium
14:18:45 STEP: Waiting for Cilium to become ready
14:19:39 STEP: Validating if Kubernetes DNS is deployed
14:19:39 STEP: Checking if deployment is ready
14:19:39 STEP: Checking if kube-dns service is plumbed correctly
14:19:39 STEP: Checking if pods have identity
14:19:39 STEP: Checking if DNS can resolve
14:19:40 STEP: Kubernetes DNS is not ready: %!s(<nil>)
14:19:40 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
14:19:42 STEP: Waiting for Kubernetes DNS to become operational
14:19:42 STEP: Checking if deployment is ready
14:19:42 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:43 STEP: Checking if deployment is ready
14:19:43 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:44 STEP: Checking if deployment is ready
14:19:44 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:45 STEP: Checking if deployment is ready
14:19:45 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:46 STEP: Checking if deployment is ready
14:19:46 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:47 STEP: Checking if deployment is ready
14:19:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:48 STEP: Checking if deployment is ready
14:19:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:49 STEP: Checking if deployment is ready
14:19:49 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:50 STEP: Checking if deployment is ready
14:19:50 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
14:19:51 STEP: Checking if deployment is ready
14:19:51 STEP: Checking if kube-dns service is plumbed correctly
14:19:51 STEP: Checking if pods have identity
14:19:51 STEP: Checking if DNS can resolve
14:19:52 STEP: Validating Cilium Installation
14:19:52 STEP: Performing Cilium controllers preflight check
14:19:52 STEP: Performing Cilium status preflight check
14:19:52 STEP: Performing Cilium health check
14:19:53 STEP: Performing Cilium service preflight check
14:19:53 STEP: Performing K8s service preflight check
14:19:53 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-5z2mr': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

14:19:53 STEP: Performing Cilium controllers preflight check
14:19:53 STEP: Performing Cilium status preflight check
14:19:53 STEP: Performing Cilium health check
14:19:55 STEP: Performing Cilium service preflight check
14:19:55 STEP: Performing K8s service preflight check
14:19:55 STEP: Performing Cilium status preflight check
14:19:55 STEP: Performing Cilium controllers preflight check
14:19:55 STEP: Performing Cilium health check
14:19:56 STEP: Performing Cilium service preflight check
14:19:56 STEP: Performing K8s service preflight check
14:19:56 STEP: Performing Cilium status preflight check
14:19:56 STEP: Performing Cilium health check
14:19:56 STEP: Performing Cilium controllers preflight check
14:19:58 STEP: Performing Cilium service preflight check
14:19:58 STEP: Performing K8s service preflight check
14:19:58 STEP: Performing Cilium status preflight check
14:19:58 STEP: Performing Cilium health check
14:19:58 STEP: Performing Cilium controllers preflight check
14:19:59 STEP: Performing Cilium service preflight check
14:19:59 STEP: Performing K8s service preflight check
14:19:59 STEP: Performing Cilium controllers preflight check
14:19:59 STEP: Performing Cilium status preflight check
14:19:59 STEP: Performing Cilium health check
14:20:01 STEP: Performing Cilium service preflight check
14:20:01 STEP: Performing K8s service preflight check
14:20:02 STEP: Waiting for cilium-operator to be ready
14:20:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
14:20:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
14:20:02 STEP: Making sure all endpoints are in ready state
14:20:03 STEP: Creating namespace 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit
14:20:03 STEP: Deploying demo_ds.yaml in namespace 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit
14:20:04 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:20:08 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:20:08 STEP: WaitforNPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="")
14:20:14 STEP: WaitforNPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="") => <nil>
14:20:14 STEP: Checking pod connectivity between nodes
14:20:14 STEP: WaitforPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="-l zgroup=testDSClient")
14:20:14 STEP: WaitforPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="-l zgroup=testDSClient") => <nil>
14:20:15 STEP: WaitforPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="-l zgroup=testDS")
14:20:15 STEP: WaitforPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="-l zgroup=testDS") => <nil>
14:20:19 STEP: Test BPF masquerade
14:20:19 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/manifests/l3-policy-demo.yaml
14:20:20 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
14:20:20 STEP: WaitforNPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="")
14:20:20 STEP: WaitforNPods(namespace="202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit", filter="") => <nil>
14:20:20 STEP: Making ten curl requests from "testclient-45tf8" to "http://google.com"
=== Test Finished at 2021-10-20T14:20:23Z====
14:20:23 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
14:20:23 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE                                                         NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit   test-k8s2-5b756fd6c5-ndlkk         2/2     Running   0          22s    10.0.1.113      k8s2   <none>           <none>
	 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit   testclient-45tf8                   1/1     Running   0          22s    10.0.1.141      k8s2   <none>           <none>
	 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit   testclient-cp7cl                   1/1     Running   0          22s    10.0.0.140      k8s1   <none>           <none>
	 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit   testds-6flbp                       2/2     Running   0          22s    10.0.0.61       k8s1   <none>           <none>
	 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit   testds-h22mt                       2/2     Running   0          22s    10.0.1.69       k8s2   <none>           <none>
	 cilium-monitoring                                                 grafana-7fd557d749-z8mhn           1/1     Running   0          64m    10.0.0.181      k8s1   <none>           <none>
	 cilium-monitoring                                                 prometheus-d87f8f984-hxz9l         1/1     Running   0          64m    10.0.0.66       k8s1   <none>           <none>
	 kube-system                                                       cilium-5dlr8                       1/1     Running   0          101s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-5z2mr                       1/1     Running   0          101s   192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       cilium-operator-744cd59fb6-h2g4j   1/1     Running   0          101s   192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       cilium-operator-744cd59fb6-rth9k   1/1     Running   0          101s   192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       coredns-5fc58db489-h6smp           1/1     Running   0          46s    10.0.1.223      k8s2   <none>           <none>
	 kube-system                                                       etcd-k8s1                          1/1     Running   0          67m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-apiserver-k8s1                1/1     Running   0          67m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-controller-manager-k8s1       1/1     Running   0          67m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       kube-scheduler-k8s1                1/1     Running   0          67m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-2sjmw                 1/1     Running   0          64m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       log-gatherer-g9hcn                 1/1     Running   0          64m    192.168.36.12   k8s2   <none>           <none>
	 kube-system                                                       log-gatherer-q5gtt                 1/1     Running   0          64m    192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-68mn5               1/1     Running   0          65m    192.168.36.11   k8s1   <none>           <none>
	 kube-system                                                       registry-adder-7v9l8               1/1     Running   0          65m    192.168.36.13   k8s3   <none>           <none>
	 kube-system                                                       registry-adder-xw2f6               1/1     Running   0          65m    192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-5dlr8 cilium-5z2mr]
cmd: kubectl exec -n kube-system cilium-5dlr8 -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.16 (v1.16.15) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::11, enp0s8 192.168.36.11 fd04::11 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-19e2a59)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      39/39 healthy
	 Proxy Status:           OK, ip 10.0.0.183, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 824/4095 (20.12%), Flows/s: 8.82   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-20T14:20:01Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5dlr8 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                            
	 645        Disabled           Disabled          4          reserved:health                                                                                   fd02::df   10.0.0.10    ready   
	 1752       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                                ready   
	                                                            k8s:node-role.kubernetes.io/master                                                                                                
	                                                            reserved:host                                                                                                                     
	 1896       Disabled           Disabled          30939      k8s:app=prometheus                                                                                fd02::55   10.0.0.66    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                                            
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 2793       Disabled           Disabled          53052      k8s:app=grafana                                                                                   fd02::d1   10.0.0.181   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                                          
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                                 
	 3126       Enabled            Disabled          49912      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::c9   10.0.0.61    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit                                   
	                                                            k8s:zgroup=testDS                                                                                                                 
	 3144       Disabled           Disabled          36653      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::72   10.0.0.140   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                   
	                                                            k8s:io.kubernetes.pod.namespace=202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit                                   
	                                                            k8s:zgroup=testDSClient                                                                                                           
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5z2mr -c cilium-agent -- cilium status
Exitcode: 0 
Stdout:
 	 KVStore:                Ok   Disabled
	 Kubernetes:             Ok   1.16 (v1.16.15) [linux/amd64]
	 Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
	 KubeProxyReplacement:   Strict   [enp0s3 10.0.2.15 fd04::12, enp0s8 192.168.36.12 fd04::12 (Direct Routing)]
	 Host firewall:          Disabled
	 Cilium:                 Ok   1.10.90 (v1.10.90-19e2a59)
	 NodeMonitor:            Listening for events on 3 CPUs with 64x4096 of shared memory
	 Cilium health daemon:   Ok   
	 IPAM:                   IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
	 BandwidthManager:       Disabled
	 Host Routing:           Legacy
	 Masquerading:           BPF   [enp0s3, enp0s8]   10.0.0.0/8 [IPv4: Enabled, IPv6: Enabled]
	 Controller Status:      37/37 healthy
	 Proxy Status:           OK, ip 10.0.1.211, 0 redirects active on ports 10000-20000
	 Hubble:                 Ok   Current/Max Flows: 831/4095 (20.29%), Flows/s: 8.13   Metrics: Disabled
	 Encryption:             Disabled
	 Cluster health:         2/2 reachable   (2021-10-20T14:20:02Z)
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-5z2mr -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                                       IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                                             
	 433        Disabled           Disabled          36653      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::11e   10.0.1.141   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit                                    
	                                                            k8s:zgroup=testDSClient                                                                                                            
	 493        Enabled            Disabled          49912      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1f2   10.0.1.69    ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit                                    
	                                                            k8s:zgroup=testDS                                                                                                                  
	 567        Disabled           Disabled          4          reserved:health                                                                                   fd02::175   10.0.1.189   ready   
	 989        Disabled           Disabled          34088      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1f8   10.0.1.113   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit                                    
	                                                            k8s:zgroup=test-k8s2                                                                                                               
	 2114       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                                 ready   
	                                                            reserved:host                                                                                                                      
	 3529       Disabled           Disabled          47414      k8s:io.cilium.k8s.policy.cluster=default                                                          fd02::1b8   10.0.1.223   ready   
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                                               
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
14:20:41 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
14:20:41 STEP: Deleting deployment demo_ds.yaml
14:20:42 STEP: Deleting namespace 202110201420k8sdatapathconfigencapsulationcheckvxlanconnectivit
14:20:56 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|4ec52ba2_K8sDatapathConfig_Encapsulation_Check_vxlan_connectivity_with_per-endpoint_routes.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1719/artifact/4ec52ba2_K8sDatapathConfig_Encapsulation_Check_vxlan_connectivity_with_per-endpoint_routes.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1719/artifact/test_results_Cilium-PR-K8s-1.16-net-next_1719_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1719/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

maintainer-s-little-helper bot added the ci/flake label on Oct 21, 2021
@pchaigno
Member

The test in question panicked with:

Test Panicked
stop execution
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:95

Full Stack Trace
github.com/onsi/gomega/internal/asyncassertion.(*AsyncAssertion).pollActual.func1.1({0xc000ea2780, 0x273}, {0xc0012b8350, 0xc000edab40, 0x0})
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:95 +0x10d
github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc000ad4360, {0x1e0b8b8, 0x2e29c90}, 0x1, {0xc000ad42c0, 0x3, 0x3})
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x1bd
github.com/onsi/gomega/internal/assertion.(*Assertion).Should(0xc000ad4360, {0x1e0b8b8, 0x2e29c90}, {0xc000ad42c0, 0x3, 0x3})
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:28 +0x95
github.com/cilium/cilium/test/k8sT.testPodHTTPToOutside(0xc000345d60, {0x1b78201, 0x11}, 0x0, 0x0)
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:1128 +0x9d4
github.com/cilium/cilium/test/k8sT.glob..func5.9.8()
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/k8sT/DatapathConfiguration.go:258 +0x1c5
reflect.Value.call({0x1829c60, 0xc0006038d8, 0x0}, {0x1b60b80, 0x4}, {0x2e29c90, 0x0, 0x0})
	/usr/local/go/src/reflect/value.go:543 +0x814
reflect.Value.Call({0x1829c60, 0xc0006038d8, 0x0}, {0x2e29c90, 0x0, 0x0})
	/usr/local/go/src/reflect/value.go:339 +0xc5
github.com/cilium/cilium/test/ginkgo-ext.applyAdvice.func1({0x2e29c90, 0x0, 0x0})
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:575 +0xdc
github.com/cilium/cilium/test.TestTest(0x0)
	/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/test_suite_test.go:98 +0x4ca
testing.tRunner(0xc000283040, 0x1c63758)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
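The frame just above the gomega internals is testPodHTTPToOutside (test/k8sT/DatapathConfiguration.go:1128), which corresponds to the "Making ten curl requests from testclient-45tf8 to http://google.com" step in the log above. For context, here is a minimal, self-contained sketch of that kind of Ginkgo/Gomega assertion; the execInPod helper, the timeouts, and the spec names are illustrative assumptions, not the actual Cilium test code:

```go
// A sketch (not the real Cilium helper) of the Eventually(...).Should(...)
// pattern whose polling machinery (async_assertion.go:95) panicked here.
package sketch_test

import (
	"fmt"
	"testing"
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "async assertion sketch")
}

// execInPod stands in for the suite's "kubectl exec" helper (assumed name);
// the real suite shells out to kubectl.
func execInPod(pod, cmd string) (string, error) {
	return "", fmt.Errorf("sketch only: %q not executed in %s", cmd, pod)
}

var _ = Describe("K8sDatapathConfig sketch", func() {
	It("curls an external URL from a test pod", func() {
		const pod = "testclient-45tf8" // pod name taken from the failed run

		// Eventually repeatedly invokes the polled function until the matcher
		// succeeds or the timeout expires; the panic in this issue surfaced
		// inside exactly this async-assertion machinery.
		Eventually(func() error {
			out, err := execInPod(pod, "curl -sS --fail --connect-timeout 5 http://google.com")
			if err != nil {
				return fmt.Errorf("curl failed: %s: %w", out, err)
			}
			return nil
		}, 30*time.Second, 1*time.Second).Should(Succeed(),
			"pod %q cannot reach http://google.com", pod)
	})
})
```

The sketch only reproduces the call shape; the actual helper, matchers, and retry parameters in test/k8sT/DatapathConfiguration.go differ.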

pchaigno changed the title from "CI: K8sDatapathConfig Encapsulation Check vxlan connectivity with per-endpoint routes" to "CI: Ginkgo panic: Test Panicked at internal/asyncassertion/async_assertion.go:95" on Nov 3, 2021
@tklauser
Member

tklauser commented Dec 1, 2021

github.com/onsi/gomega has been updated from v1.14.0 to v1.16.0 in #17775 (on 2021-11-04) and to v1.17.0 in #17816 (on 2021-11-11).

AFAICS no failures have been reported since the first update. The last report was on 2021-11-05, but on a PR not yet rebased to include #17775, i.e. not using the updated gomega version.

The upstream issue onsi/gomega#457, which was fixed in v1.15.0, looks like a likely culprit: a data race in the same code we saw panicking.

Given the above, I'd assume the gomega update fixed this issue.
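For anyone still seeing this on an older branch, a quick check is to confirm which gomega release the checkout vendors and bump it if it predates v1.15.0. The commands below are a sketch; pinning v1.17.0 matches the update mentioned above, but the exact version to use is an assumption:

```sh
# Run in the repo root: which gomega release does this checkout vendor?
grep 'github.com/onsi/gomega' go.mod

# If it is older than v1.15.0 (the release carrying the onsi/gomega#457 fix),
# bump the dependency and refresh the vendor tree, roughly as the update PRs did:
go get github.com/onsi/gomega@v1.17.0
go mod tidy
go mod vendor
```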
