CI: K8sLRPTests Checks local redirect policy LRP connectivity #17204

Closed
maintainer-s-little-helper bot opened this issue Aug 20, 2021 · 5 comments
Labels
ci/flake This is a known failure that occurs in the tree. Please investigate me!

Comments

@maintainer-s-little-helper

Test Name

K8sLRPTests Checks local redirect policy LRP connectivity

Failure Output

FAIL: Expected

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:465
Expected
    <*errors.errorString | 0xc000b3b5a0>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/lrp.go:74

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 2
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Error while inserting service in LB map


Standard Error

13:55:29 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests
13:55:29 STEP: Ensuring the namespace kube-system exists
13:55:29 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:55:30 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:55:30 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests Checks local redirect policy
13:55:30 STEP: Installing Cilium
13:55:37 STEP: Waiting for Cilium to become ready
13:56:19 STEP: Validating if Kubernetes DNS is deployed
13:56:19 STEP: Checking if deployment is ready
13:56:19 STEP: Checking if kube-dns service is plumbed correctly
13:56:19 STEP: Checking if pods have identity
13:56:19 STEP: Checking if DNS can resolve
13:56:22 STEP: Kubernetes DNS is up and operational
13:56:22 STEP: Validating Cilium Installation
13:56:22 STEP: Performing Cilium status preflight check
13:56:22 STEP: Performing Cilium health check
13:56:22 STEP: Performing Cilium controllers preflight check
13:56:26 STEP: Performing Cilium service preflight check
13:56:26 STEP: Performing K8s service preflight check
13:56:26 STEP: Waiting for cilium-operator to be ready
13:56:26 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:56:27 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:56:27 STEP: WaitforPods(namespace="kube-system", filter="")
13:56:27 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
13:56:28 STEP: WaitforPods(namespace="default", filter="-l role=frontend")
13:56:30 STEP: WaitforPods(namespace="default", filter="-l role=frontend") => <nil>
13:56:30 STEP: WaitforPods(namespace="default", filter="-l role=backend")
14:00:30 STEP: WaitforPods(namespace="default", filter="-l role=backend") => timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired
14:00:31 STEP: cmd: kubectl describe pods -n default -l role=backend
Exitcode: 0 
Stdout:
 	 Name:         k8s1-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf/10.128.15.223
	 Start Time:   Thu, 19 Aug 2021 13:56:27 +0000
	 Labels:       id=be1
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.120.1.1
	 IPs:
	   IP:  10.120.1.1
	 Containers:
	   web:
	     Container ID:   containerd://a7c2d5d8b17d52d30c64c07a99e888fa9082cc83bb34f4a78c93ada0a952acb6
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Thu, 19 Aug 2021 13:56:29 +0000
	     Ready:          False
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-jd2pq (ro)
	   udp:
	     Container ID:   containerd://81b481fd28c7f9e915bd9a8acddaf783aa86e5049cd60f624eb4d1da8480440c
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Thu, 19 Aug 2021 13:56:29 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-jd2pq (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   default-token-jd2pq:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-jd2pq
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s1
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                   From                                                  Message
	   ----     ------     ----                  ----                                                  -------
	   Normal   Scheduled  4m4s                  default-scheduler                                     Successfully assigned default/k8s1-backend to gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf
	   Normal   Pulled     4m2s                  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal   Created    4m2s                  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Created container web
	   Normal   Started    4m2s                  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Started container web
	   Normal   Pulled     4m2s                  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal   Created    4m2s                  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Created container udp
	   Normal   Started    4m2s                  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Started container udp
	   Warning  Unhealthy  53s (x19 over 3m53s)  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-rskf  Readiness probe failed: Get "http://10.120.1.1:80/": dial tcp 10.120.1.1:80: connect: connection refused
	 
	 
	 Name:         k8s2-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl/10.128.15.224
	 Start Time:   Thu, 19 Aug 2021 13:56:27 +0000
	 Labels:       id=be2
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.120.2.65
	 IPs:
	   IP:  10.120.2.65
	 Containers:
	   web:
	     Container ID:   containerd://ed7baa2339feb46d7c887e5528909ef6b5606c906d059afed4253d7002b49499
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Thu, 19 Aug 2021 13:56:29 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-jd2pq (ro)
	   udp:
	     Container ID:   containerd://6fa51207f4c15c24f9c5ed969e56bec7bb4c06ddc44a2ad2c8d7564eb9ca6208
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Thu, 19 Aug 2021 13:56:29 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-jd2pq (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-jd2pq:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-jd2pq
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s2
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age   From                                                  Message
	   ----    ------     ----  ----                                                  -------
	   Normal  Scheduled  4m4s  default-scheduler                                     Successfully assigned default/k8s2-backend to gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl
	   Normal  Pulled     4m2s  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    4m2s  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl  Created container web
	   Normal  Started    4m2s  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl  Started container web
	   Normal  Pulled     4m2s  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    4m2s  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl  Created container udp
	   Normal  Started    4m2s  kubelet, gke-cilium-ci-14-cilium-ci-14-dd4654ad-x6cl  Started container udp
	 
Stderr:
 	 

FAIL: Expected
    <*errors.errorString | 0xc000b3b5a0>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
14:00:31 STEP: Running JustAfterEach block for EntireTestsuite K8sLRPTests
14:00:32 STEP: Running AfterEach for block EntireTestsuite K8sLRPTests
14:00:32 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|66d4a529_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6254/artifact/src/github.com/cilium/cilium/2fdc54c9_K8sLRPTests_Checks_local_redirect_policy_LRP_restores_service_when_removed.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6254/artifact/src/github.com/cilium/cilium/66d4a529_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6254/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_6254_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6254/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
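For context, the failure above is a 4m0s timeout on a label-filtered pod readiness wait (the WaitforPods step with filter `-l role=backend`). For illustration only, a roughly equivalent wait written against client-go could look like the sketch below; this is not the Cilium helper referenced at test/k8sT/lrp.go:74, and the kubeconfig path and 2s poll interval are assumptions.

```go
// Sketch only: an equivalent of the 4m0s label-filtered readiness wait that
// times out in the report above. NOT the Cilium test helper; names, the
// kubeconfig path, and the 2s poll interval are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// 4m0s matches the timeout reported by the failing WaitforPods step.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "role=backend"})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if !podReady(p) {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for pods with filter -l role=backend to be ready")
		return
	}
	fmt.Println("all role=backend pods are ready")
}
```

In this flake the wait expires because one backend pod never reports Ready; the describe output above shows its readiness probe being refused.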

@pchaigno
Member

This same sort of test failure, where the test pod fails to become ready because its readiness probe gets connection refused, has happened a couple of other times on GKE with different tests:
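For manual triage of that failure mode, the probe the kubelet runs (per the describe output: http-get on port 80 with a 1s timeout) can be mimicked with a plain HTTP GET from somewhere that can reach the pod IP. A minimal sketch, using the pod IP from the first report as a placeholder:

```go
// Sketch only: mimics the readiness probe "http-get http://:80/ timeout=1s"
// from the pod spec in the describe output. Not part of the Cilium test suite.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same 1s timeout as the probe definition in the pod spec.
	client := &http.Client{Timeout: 1 * time.Second}

	// Placeholder target taken from the first report; substitute the IP of the
	// pod that is failing its readiness probe.
	resp, err := client.Get("http://10.120.1.1:80/")
	if err != nil {
		// A "connection refused" here matches the kubelet event and means
		// nothing is listening on port 80 in the pod's network namespace yet.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe succeeded:", resp.Status)
}
```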

@maintainer-s-little-helper
Author

PR #17236 hit this flake with 91.32% similarity:


Test Name

K8sLRPTests Checks local redirect policy LRP connectivity

Failure Output

FAIL: Expected

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:465
Expected
    <*errors.errorString | 0xc001c20ec0>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/lrp.go:74

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 2
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Error while inserting service in LB map
Unable to restore endpoint, ignoring


Standard Error

02:49:19 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests
02:49:19 STEP: Ensuring the namespace kube-system exists
02:49:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
02:49:19 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
02:49:20 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests Checks local redirect policy
02:49:20 STEP: Installing Cilium
02:49:27 STEP: Waiting for Cilium to become ready
02:50:01 STEP: Validating if Kubernetes DNS is deployed
02:50:01 STEP: Checking if deployment is ready
02:50:01 STEP: Checking if kube-dns service is plumbed correctly
02:50:01 STEP: Checking if pods have identity
02:50:01 STEP: Checking if DNS can resolve
02:50:04 STEP: Kubernetes DNS is up and operational
02:50:04 STEP: Validating Cilium Installation
02:50:04 STEP: Performing Cilium controllers preflight check
02:50:04 STEP: Performing Cilium status preflight check
02:50:04 STEP: Performing Cilium health check
02:50:08 STEP: Performing Cilium service preflight check
02:50:08 STEP: Performing K8s service preflight check
02:50:08 STEP: Waiting for cilium-operator to be ready
02:50:08 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
02:50:09 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
02:50:09 STEP: WaitforPods(namespace="kube-system", filter="")
02:50:09 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
02:50:10 STEP: WaitforPods(namespace="default", filter="-l role=frontend")
02:50:12 STEP: WaitforPods(namespace="default", filter="-l role=frontend") => <nil>
02:50:12 STEP: WaitforPods(namespace="default", filter="-l role=backend")
02:54:12 STEP: WaitforPods(namespace="default", filter="-l role=backend") => timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired
02:54:13 STEP: cmd: kubectl describe pods -n default -l role=backend
Exitcode: 0 
Stdout:
 	 Name:         k8s1-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf/10.128.15.201
	 Start Time:   Wed, 25 Aug 2021 02:50:09 +0000
	 Labels:       id=be1
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.84.1.1
	 IPs:
	   IP:  10.84.1.1
	 Containers:
	   web:
	     Container ID:   containerd://76f8f20a1b8bc9e923e685f804c036a0604fc0647480a45c617f59d9c66b54f9
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Wed, 25 Aug 2021 02:50:11 +0000
	     Ready:          False
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-dktwg (ro)
	   udp:
	     Container ID:   containerd://12e2d2d1c1ae21f102cc35fa317eac7e80bda36f9d88dfb06e2c0d43afe8a227
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Wed, 25 Aug 2021 02:50:11 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-dktwg (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   default-token-dktwg:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-dktwg
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s1
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                   From                                                Message
	   ----     ------     ----                  ----                                                -------
	   Normal   Scheduled  4m4s                  default-scheduler                                   Successfully assigned default/k8s1-backend to gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf
	   Normal   Pulled     4m2s                  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal   Created    4m2s                  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Created container web
	   Normal   Started    4m2s                  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Started container web
	   Normal   Pulled     4m2s                  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal   Created    4m2s                  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Created container udp
	   Normal   Started    4m2s                  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Started container udp
	   Warning  Unhealthy  52s (x19 over 3m52s)  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-11bf  Readiness probe failed: Get "http://10.84.1.1:80/": dial tcp 10.84.1.1:80: connect: connection refused
	 
	 
	 Name:         k8s2-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-4-cilium-ci-4-5a93b208-483q/10.128.15.200
	 Start Time:   Wed, 25 Aug 2021 02:50:09 +0000
	 Labels:       id=be2
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.84.3.10
	 IPs:
	   IP:  10.84.3.10
	 Containers:
	   web:
	     Container ID:   containerd://2de1ab8717535b21219b43d6254c3f5bab32f40c22469efc7be24b2085d68809
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Wed, 25 Aug 2021 02:50:11 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-dktwg (ro)
	   udp:
	     Container ID:   containerd://de8c0472abe451f96401c77836f972d1977ee86c599ad7a8cbb2f3dba6d32250
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Wed, 25 Aug 2021 02:50:11 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-dktwg (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-dktwg:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-dktwg
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s2
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age   From                                                Message
	   ----    ------     ----  ----                                                -------
	   Normal  Scheduled  4m4s  default-scheduler                                   Successfully assigned default/k8s2-backend to gke-cilium-ci-4-cilium-ci-4-5a93b208-483q
	   Normal  Pulled     4m2s  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-483q  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    4m2s  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-483q  Created container web
	   Normal  Started    4m2s  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-483q  Started container web
	   Normal  Pulled     4m2s  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-483q  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    4m2s  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-483q  Created container udp
	   Normal  Started    4m2s  kubelet, gke-cilium-ci-4-cilium-ci-4-5a93b208-483q  Started container udp
	 
Stderr:
 	 

FAIL: Expected
    <*errors.errorString | 0xc001c20ec0>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
02:54:13 STEP: Running JustAfterEach block for EntireTestsuite K8sLRPTests
02:54:14 STEP: Running AfterEach for block EntireTestsuite K8sLRPTests
02:54:14 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|6c367a64_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6296/artifact/src/github.com/cilium/cilium/3ed562db_K8sLRPTests_Checks_local_redirect_policy_LRP_restores_service_when_removed.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6296/artifact/src/github.com/cilium/cilium/6c367a64_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6296/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_6296_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6296/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper
Author

PR #17616 hit this flake with 91.32% similarity:


Test Name

K8sLRPTests Checks local redirect policy LRP connectivity

Failure Output

FAIL: Expected

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:465
Expected
    <*errors.errorString | 0xc00031ff10>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-GKE/src/github.com/cilium/cilium/test/k8sT/lrp.go:74

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 2
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Error while inserting service in LB map
Unable to restore endpoint, ignoring


Standard Error

13:50:52 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests
13:50:52 STEP: Ensuring the namespace kube-system exists
13:50:53 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:50:53 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:50:53 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests Checks local redirect policy
13:50:53 STEP: Installing Cilium
13:51:01 STEP: Waiting for Cilium to become ready
13:51:39 STEP: Validating if Kubernetes DNS is deployed
13:51:39 STEP: Checking if deployment is ready
13:51:39 STEP: Checking if kube-dns service is plumbed correctly
13:51:39 STEP: Checking if pods have identity
13:51:39 STEP: Checking if DNS can resolve
13:51:41 STEP: Kubernetes DNS is up and operational
13:51:41 STEP: Validating Cilium Installation
13:51:41 STEP: Performing Cilium controllers preflight check
13:51:41 STEP: Performing Cilium status preflight check
13:51:41 STEP: Performing Cilium health check
13:51:46 STEP: Performing Cilium service preflight check
13:51:46 STEP: Performing K8s service preflight check
13:51:46 STEP: Waiting for cilium-operator to be ready
13:51:46 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:51:46 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:51:46 STEP: WaitforPods(namespace="kube-system", filter="")
13:51:47 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
13:51:48 STEP: WaitforPods(namespace="default", filter="-l role=frontend")
13:51:51 STEP: WaitforPods(namespace="default", filter="-l role=frontend") => <nil>
13:51:51 STEP: WaitforPods(namespace="default", filter="-l role=backend")
13:55:51 STEP: WaitforPods(namespace="default", filter="-l role=backend") => timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired
13:55:52 STEP: cmd: kubectl describe pods -n default -l role=backend
Exitcode: 0 
Stdout:
 	 Name:         k8s1-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk/10.128.0.48
	 Start Time:   Fri, 15 Oct 2021 13:51:47 +0000
	 Labels:       id=be1
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.32.1.185
	 IPs:
	   IP:  10.32.1.185
	 Containers:
	   web:
	     Container ID:   containerd://f190f7c81b5b2c65e36c30edabc449ac2d0e852388a1a940d531259bb515c1d7
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Fri, 15 Oct 2021 13:51:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-v7ksk (ro)
	   udp:
	     Container ID:   containerd://e370009315a1ad4da189a75e43a8d74a4d639f616fc1e95c8c486cfe1a7dfa2e
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Fri, 15 Oct 2021 13:51:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-v7ksk (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-v7ksk:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-v7ksk
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s1
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age   From                                                  Message
	   ----    ------     ----  ----                                                  -------
	   Normal  Scheduled  4m4s  default-scheduler                                     Successfully assigned default/k8s1-backend to gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk
	   Normal  Pulled     4m2s  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    4m2s  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk  Created container web
	   Normal  Started    4m2s  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk  Started container web
	   Normal  Pulled     4m2s  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    4m2s  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk  Created container udp
	   Normal  Started    4m2s  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-2tlk  Started container udp
	 
	 
	 Name:         k8s2-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2/10.128.0.47
	 Start Time:   Fri, 15 Oct 2021 13:51:47 +0000
	 Labels:       id=be2
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.32.2.1
	 IPs:
	   IP:  10.32.2.1
	 Containers:
	   web:
	     Container ID:   containerd://9259a126764211d1c4a546df33e2e4e874b0c22585542bdcac5670356aecd231
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Fri, 15 Oct 2021 13:51:49 +0000
	     Ready:          False
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-v7ksk (ro)
	   udp:
	     Container ID:   containerd://f65bfdeb00a307ba7915aee236a58fb97bbc93cddf6c354f0e4d884a0cf34259
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Fri, 15 Oct 2021 13:51:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-v7ksk (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   default-token-v7ksk:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-v7ksk
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s2
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                   From                                                  Message
	   ----     ------     ----                  ----                                                  -------
	   Normal   Scheduled  4m5s                  default-scheduler                                     Successfully assigned default/k8s2-backend to gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2
	   Normal   Pulled     4m3s                  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal   Created    4m3s                  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Created container web
	   Normal   Started    4m3s                  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Started container web
	   Normal   Pulled     4m3s                  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal   Created    4m3s                  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Created container udp
	   Normal   Started    4m3s                  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Started container udp
	   Warning  Unhealthy  59s (x19 over 3m59s)  kubelet, gke-cilium-ci-11-cilium-ci-11-3ede1a3d-62r2  Readiness probe failed: Get "http://10.32.2.1:80/": dial tcp 10.32.2.1:80: connect: connection refused
	 
Stderr:
 	 

FAIL: Expected
    <*errors.errorString | 0xc00031ff10>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
13:55:52 STEP: Running JustAfterEach block for EntireTestsuite K8sLRPTests
13:55:53 STEP: Running AfterEach for block EntireTestsuite K8sLRPTests
13:55:53 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|b55a1573_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6659/artifact/src/github.com/cilium/cilium/362acfeb_K8sLRPTests_Checks_local_redirect_policy_LRP_restores_service_when_removed.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6659/artifact/src/github.com/cilium/cilium/b55a1573_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//6659/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_6659_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/6659/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@maintainer-s-little-helper
Author

PR #18076 hit this flake with 90.82% similarity:


Test Name

K8sLRPTests Checks local redirect policy LRP connectivity

Failure Output

FAIL: Expected

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:465
Expected
    <*errors.errorString | 0xc000c44c40>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/k8sT/lrp.go:74

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 2
⚠️  Number of "level=warning" in logs: 8
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to restore endpoint, ignoring
Error while inserting service in LB map


Standard Error

11:58:48 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests
11:58:48 STEP: Ensuring the namespace kube-system exists
11:58:48 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
11:58:48 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
11:58:49 STEP: Deleting pods [spaceship-6567c9b4bd-jb969,spaceship-6567c9b4bd-jx85r,spaceship-6567c9b4bd-l4fmm,spaceship-6567c9b4bd-mmt7f,xwing-6f56868789-b8ksd,xwing-6f56868789-dpxzk,xwing-6f56868789-ms997] in namespace default
11:58:49 STEP: Waiting for 7 deletes to return (spaceship-6567c9b4bd-jb969,spaceship-6567c9b4bd-jx85r,spaceship-6567c9b4bd-l4fmm,spaceship-6567c9b4bd-mmt7f,xwing-6f56868789-b8ksd,xwing-6f56868789-dpxzk,xwing-6f56868789-ms997)
11:58:59 STEP: Unable to delete pods xwing-6f56868789-dpxzk with 'kubectl -n default delete pods xwing-6f56868789-dpxzk': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "xwing-6f56868789-dpxzk" deleted
	 
Stderr:
 	 

11:58:59 STEP: Unable to delete pods spaceship-6567c9b4bd-mmt7f with 'kubectl -n default delete pods spaceship-6567c9b4bd-mmt7f': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "spaceship-6567c9b4bd-mmt7f" deleted
	 
Stderr:
 	 

11:58:59 STEP: Unable to delete pods spaceship-6567c9b4bd-jx85r with 'kubectl -n default delete pods spaceship-6567c9b4bd-jx85r': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "spaceship-6567c9b4bd-jx85r" deleted
	 
Stderr:
 	 

11:58:59 STEP: Unable to delete pods xwing-6f56868789-ms997 with 'kubectl -n default delete pods xwing-6f56868789-ms997': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "xwing-6f56868789-ms997" deleted
	 
Stderr:
 	 

11:58:59 STEP: Unable to delete pods spaceship-6567c9b4bd-l4fmm with 'kubectl -n default delete pods spaceship-6567c9b4bd-l4fmm': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "spaceship-6567c9b4bd-l4fmm" deleted
	 
Stderr:
 	 

11:58:59 STEP: Unable to delete pods xwing-6f56868789-b8ksd with 'kubectl -n default delete pods xwing-6f56868789-b8ksd': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "xwing-6f56868789-b8ksd" deleted
	 
Stderr:
 	 

11:58:59 STEP: Unable to delete pods spaceship-6567c9b4bd-jb969 with 'kubectl -n default delete pods spaceship-6567c9b4bd-jb969': Exitcode: -1 
Err: signal: killed
Stdout:
 	 pod "spaceship-6567c9b4bd-jb969" deleted
	 
Stderr:
 	 

11:58:59 STEP: Running BeforeAll block for EntireTestsuite K8sLRPTests Checks local redirect policy
11:58:59 STEP: Installing Cilium
11:59:06 STEP: Waiting for Cilium to become ready
11:59:29 STEP: Validating if Kubernetes DNS is deployed
11:59:29 STEP: Checking if deployment is ready
11:59:30 STEP: Checking if kube-dns service is plumbed correctly
11:59:30 STEP: Checking if pods have identity
11:59:30 STEP: Checking if DNS can resolve
11:59:32 STEP: Kubernetes DNS is not ready: %!s(<nil>)
11:59:32 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
12:00:07 STEP: Waiting for Kubernetes DNS to become operational
12:00:07 STEP: Checking if deployment is ready
12:00:07 STEP: Checking if kube-dns service is plumbed correctly
12:00:07 STEP: Checking if pods have identity
12:00:07 STEP: Checking if DNS can resolve
12:00:10 STEP: Validating Cilium Installation
12:00:10 STEP: Performing Cilium controllers preflight check
12:00:10 STEP: Performing Cilium status preflight check
12:00:10 STEP: Performing Cilium health check
12:00:14 STEP: Performing Cilium service preflight check
12:00:14 STEP: Performing K8s service preflight check
12:00:14 STEP: Waiting for cilium-operator to be ready
12:00:14 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
12:00:14 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
12:00:14 STEP: WaitforPods(namespace="kube-system", filter="")
12:00:15 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
12:00:16 STEP: WaitforPods(namespace="default", filter="-l role=frontend")
12:00:18 STEP: WaitforPods(namespace="default", filter="-l role=frontend") => <nil>
12:00:18 STEP: WaitforPods(namespace="default", filter="-l role=backend")
12:04:18 STEP: WaitforPods(namespace="default", filter="-l role=backend") => timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired
12:04:18 STEP: cmd: kubectl describe pods -n default -l role=backend
Exitcode: 0 
Stdout:
 	 Name:         k8s1-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-5-cilium-ci-5-37191319-1v0w/10.128.0.49
	 Start Time:   Wed, 01 Dec 2021 12:00:15 +0000
	 Labels:       id=be1
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.52.1.1
	 IPs:
	   IP:  10.52.1.1
	 Containers:
	   web:
	     Container ID:   containerd://865bcaba02830ef5dfd49e1009e7319a618dfa5a4ea0e088d25728aead5ca36d
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Wed, 01 Dec 2021 12:00:17 +0000
	     Ready:          False
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-6r6rc (ro)
	   udp:
	     Container ID:   containerd://ce4231aa2250930e7d5dac04809d22dbddae4aed6b24fa654570d8683bf0bcb1
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Wed, 01 Dec 2021 12:00:17 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-6r6rc (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   default-token-6r6rc:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-6r6rc
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s1
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                   From               Message
	   ----     ------     ----                  ----               -------
	   Normal   Scheduled  4m3s                  default-scheduler  Successfully assigned default/k8s1-backend to gke-cilium-ci-5-cilium-ci-5-37191319-1v0w
	   Normal   Pulled     4m1s                  kubelet            Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal   Created    4m1s                  kubelet            Created container web
	   Normal   Started    4m1s                  kubelet            Started container web
	   Normal   Pulled     4m1s                  kubelet            Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal   Created    4m1s                  kubelet            Created container udp
	   Normal   Started    4m1s                  kubelet            Started container udp
	   Warning  Unhealthy  51s (x19 over 3m51s)  kubelet            Readiness probe failed: Get "http://10.52.1.1:80/": dial tcp 10.52.1.1:80: connect: connection refused
	 
	 
	 Name:         k8s2-backend
	 Namespace:    default
	 Priority:     0
	 Node:         gke-cilium-ci-5-cilium-ci-5-37191319-bq7z/10.128.0.50
	 Start Time:   Wed, 01 Dec 2021 12:00:15 +0000
	 Labels:       id=be2
	               role=backend
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.52.2.106
	 IPs:
	   IP:  10.52.2.106
	 Containers:
	   web:
	     Container ID:   containerd://9f6057ab059e50b9ecc0fabcf69a425f8d7ec60cda2717039f5ec3d9691697dd
	     Image:          docker.io/cilium/echoserver:1.10.1
	     Image ID:       docker.io/cilium/echoserver@sha256:daa2b422465b8714195fda33bc14865a8672c14963d0955c9d95632d6b804b63
	     Port:           80/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Wed, 01 Dec 2021 12:00:17 +0000
	     Ready:          True
	     Restart Count:  0
	     Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-6r6rc (ro)
	   udp:
	     Container ID:   containerd://d760cc63278feb3dd6d2edd3f5bfa10d0e709ba652af42c3e0edd656fbe01342
	     Image:          docker.io/cilium/echoserver-udp:v2020.01.30
	     Image ID:       docker.io/cilium/echoserver-udp@sha256:66680075424a25cae98c4a437fec2b7e33b91fbe94e53defeea0305aee3866dd
	     Port:           69/UDP
	     Host Port:      0/UDP
	     State:          Running
	       Started:      Wed, 01 Dec 2021 12:00:17 +0000
	     Ready:          True
	     Restart Count:  0
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from default-token-6r6rc (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   default-token-6r6rc:
	     Type:        Secret (a volume populated by a Secret)
	     SecretName:  default-token-6r6rc
	     Optional:    false
	 QoS Class:       BestEffort
	 Node-Selectors:  cilium.io/ci-node=k8s2
	 Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age   From               Message
	   ----    ------     ----  ----               -------
	   Normal  Scheduled  4m3s  default-scheduler  Successfully assigned default/k8s2-backend to gke-cilium-ci-5-cilium-ci-5-37191319-bq7z
	   Normal  Pulled     4m1s  kubelet            Container image "docker.io/cilium/echoserver:1.10.1" already present on machine
	   Normal  Created    4m1s  kubelet            Created container web
	   Normal  Started    4m1s  kubelet            Started container web
	   Normal  Pulled     4m1s  kubelet            Container image "docker.io/cilium/echoserver-udp:v2020.01.30" already present on machine
	   Normal  Created    4m1s  kubelet            Created container udp
	   Normal  Started    4m1s  kubelet            Started container udp
	 
Stderr:
 	 

FAIL: Expected
    <*errors.errorString | 0xc000c44c40>: {
        s: "timed out waiting for pods with filter -l role=backend to be ready: 4m0s timeout expired",
    }
to be nil
12:04:18 STEP: Running JustAfterEach block for EntireTestsuite K8sLRPTests
12:04:20 STEP: Running AfterEach for block EntireTestsuite K8sLRPTests
12:04:20 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|a88c4e7d_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip]]


ZIP Links:


https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7070/artifact/src/github.com/cilium/cilium/a88c4e7d_K8sLRPTests_Checks_local_redirect_policy_LRP_connectivity.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7070/artifact/src/github.com/cilium/cilium/b5f77e74_K8sLRPTests_Checks_local_redirect_policy_LRP_restores_service_when_removed.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7070/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_7070_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/7070/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

@pchaigno
Member

Duplicate of #17307.
