
My cluster master node cannot use Calico BGP and all calico-node DaemonSet pods are restarting #8804

Closed
guquanheng opened this issue May 9, 2024 · 2 comments


@guquanheng

```
calico-node-2sz4m 1/1 Running 8 (5m2s ago) 32m
calico-node-45fxm 0/1 CrashLoopBackOff 7 (90s ago) 32m
calico-node-4lsdn 1/1 Running 9 (5m44s ago) 32m
calico-node-4tp8p 0/1 CrashLoopBackOff 10 (10s ago) 32m
calico-node-6tm9p 1/1 Running 0 32m
calico-node-7w9bk 1/1 Running 10 (3m18s ago) 32m
calico-node-b7ls2 1/1 Running 11 (6m8s ago) 32m
calico-node-bgm49 1/1 Running 6 (2m31s ago) 32m
calico-node-bjtnh 1/1 Running 5 (5m28s ago) 32m
calico-node-bm5q4 0/1 CrashLoopBackOff 8 (4m33s ago) 32m
calico-node-gk7w2 0/1 CrashLoopBackOff 7 (6s ago) 32m
calico-node-hfcbw 0/1 CrashLoopBackOff 8 (3m45s ago) 32m
calico-node-ht94c 0/1 CrashLoopBackOff 7 (2m10s ago) 32m
calico-node-mdpvv 0/1 CrashLoopBackOff 7 (2m28s ago) 32m
calico-node-qkthj 1/1 Running 9 (6m11s ago) 32m
calico-node-vpgss 0/1 CrashLoopBackOff 9 (2m10s ago) 32m
```
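Since several pods are in CrashLoopBackOff, the logs of the *previous* container instance usually explain the exit better than the live logs. A minimal sketch (the pod name below is just one example taken from the listing above):

```sh
# Logs from the last terminated calico-node container instance
kubectl logs -n kube-system calico-node-45fxm -c calico-node --previous

# Exit code and reason recorded for the last termination
kubectl get pod -n kube-system calico-node-45fxm \
  -o jsonpath='{.status.containerStatuses[?(@.name=="calico-node")].lastState.terminated}'
```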

Logs from a crashing calico-node pod:

```
bird: Mesh_10_1_0_27: State changed to start
2024-05-09 03:54:43.458 [INFO][75] monitor-addresses/autodetection_methods.go 117: Using autodetected IPv4 address 10.1.0.11/24 on matching interface eth1
bird: Mesh_10_1_0_15: State changed to start
2024-05-09 03:54:49.380 [INFO][77] felix/calc_graph.go 460: Local endpoint deleted id=WorkloadEndpoint(node=node-11, orchestrator=k8s, workload=nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g, name=eth0)
2024-05-09 03:54:49.380 [INFO][77] felix/int_dataplane.go 1680: Received *proto.WorkloadEndpointRemove update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g" endpoint_id:"eth0" >
2024-05-09 03:54:49.380 [INFO][77] felix/int_dataplane.go 1680: Received *proto.ActiveProfileRemove update from calculation graph msg=id:<name:"ksa.nvidia-device-plugin.nfd-node-feature-discovery-worker" >
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 539: Queuing deletion of chain. chainName="cali-pri-_UlGinJiDR-6bSynKAj" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 539: Queuing deletion of chain. chainName="cali-pro-_UlGinJiDR-6bSynKAj" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 539: Queuing deletion of chain. chainName="cali-pro-_UlGinJiDR-6bSynKAj" ipVersion=0x4 table="mangle"
2024-05-09 03:54:49.380 [INFO][77] felix/endpoint_mgr.go 700: Workload removed, deleting its chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 539: Queuing deletion of chain. chainName="cali-tw-calief280df4517" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 593: Chain no longer referenced, marking it for removal chainName="cali-pri-_UlGinJiDR-6bSynKAj"
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 539: Queuing deletion of chain. chainName="cali-fw-calief280df4517" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 593: Chain no longer referenced, marking it for removal chainName="cali-pro-_UlGinJiDR-6bSynKAj"
2024-05-09 03:54:49.380 [INFO][77] felix/table.go 539: Queuing deletion of chain. chainName="cali-sm-calief280df4517" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.380 [INFO][77] felix/endpoint_mgr.go 556: Workload removed, deleting old state. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 593: Chain no longer referenced, marking it for removal chainName="cali-tw-calief280df4517"
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-set-endpoint-mark" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 593: Chain no longer referenced, marking it for removal chainName="cali-sm-calief280df4517"
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-from-endpoint-mark" ipVersion=0x4 table="filter"
2024-05-09 03:54:49.381 [INFO][77] felix/table.go 593: Chain no longer referenced, marking it for removal chainName="cali-fw-calief280df4517"
2024-05-09 03:54:49.381 [INFO][77] felix/endpoint_mgr.go 488: Re-evaluated workload endpoint status adminUp=false failed=false known=false operUp=false status="" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:49.381 [INFO][77] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:49.382 [INFO][77] felix/route_table.go 1200: Failed to access interface because it doesn't exist. error=Link not found ifaceName="calief280df4517" ifaceRegex="^cali.*" ipVersion=0x4 tableIndex=0
2024-05-09 03:54:49.382 [INFO][77] felix/route_table.go 1268: Failed to get interface; it's down/gone. error=Link not found ifaceName="calief280df4517" ifaceRegex="^cali.*" ipVersion=0x4 tableIndex=0
2024-05-09 03:54:49.382 [INFO][77] felix/route_table.go 618: Interface missing, will retry if it appears. ifaceName="calief280df4517" ifaceRegex="^cali.*" ipVersion=0x4 tableIndex=0
2024-05-09 03:54:49.382 [INFO][77] felix/conntrack.go 90: Removing conntrack flows ip=10.182.231.121
2024-05-09 03:54:49.455 [INFO][77] felix/status_combiner.go 86: Reporting endpoint removed. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:49.456 [INFO][77] felix/summary.go 100: Summarising 26 dataplane reconciliation loops over 1m2.6s: avg=12ms longest=78ms (resync-filter-v4,resync-mangle-v4,update-filter-v4)
2024-05-09 03:54:49.480 [INFO][77] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=1255 ifaceName="calief280df4517" state="down"
2024-05-09 03:54:49.480 [INFO][77] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calief280df4517", State:"down", Index:1255}
2024-05-09 03:54:49.482 [INFO][77] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=1255 ifaceName="calief280df4517" state=""
2024-05-09 03:54:49.482 [INFO][77] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs= ifaceName="calief280df4517"
2024-05-09 03:54:49.482 [INFO][77] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calief280df4517", State:"", Index:1255}
2024-05-09 03:54:49.483 [INFO][77] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calief280df4517", Addrs:set.Set[string]{}}
2024-05-09 03:54:49.483 [INFO][77] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calief280df4517", Addrs:set.Set[string]{}}
2024-05-09 03:54:50.687 [INFO][77] felix/calc_graph.go 462: Local endpoint updated id=WorkloadEndpoint(node=node-11, orchestrator=k8s, workload=nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g, name=eth0)
2024-05-09 03:54:50.688 [INFO][77] felix/int_dataplane.go 1680: Received *proto.ActiveProfileUpdate update from calculation graph msg=id:<name:"ksa.nvidia-device-plugin.nfd-node-feature-discovery-worker" > profile:<>
2024-05-09 03:54:50.688 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-pri-_UlGinJiDR-6bSynKAj" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.688 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-pro-_UlGinJiDR-6bSynKAj" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.688 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-pro-_UlGinJiDR-6bSynKAj" ipVersion=0x4 table="mangle"
2024-05-09 03:54:50.688 [INFO][77] felix/int_dataplane.go 1680: Received *proto.WorkloadEndpointUpdate update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g" endpoint_id:"eth0" > endpoint:<state:"active" name:"calief280df4517" mac:"b6:c3:66:40:5f:ae" profile_ids:"kns.nvidia-device-plugin" profile_ids:"ksa.nvidia-device-plugin.nfd-node-feature-discovery-worker" ipv4_nets:"10.182.231.83/32" >
2024-05-09 03:54:50.688 [INFO][77] felix/endpoint_mgr.go 600: Updating per-endpoint chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.688 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-tw-calief280df4517" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.688 [INFO][77] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-pri-_UlGinJiDR-6bSynKAj"
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-fw-calief280df4517" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-pro-_UlGinJiDR-6bSynKAj"
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-sm-calief280df4517" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.689 [INFO][77] felix/endpoint_mgr.go 646: Updating endpoint routes. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-fw-calief280df4517"
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.689 [INFO][77] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-tw-calief280df4517"
2024-05-09 03:54:50.689 [INFO][77] felix/endpoint_mgr.go 1280: Skipping configuration of interface because it is oper down. ifaceName="calief280df4517"
2024-05-09 03:54:50.690 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-set-endpoint-mark" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.690 [INFO][77] felix/table.go 582: Chain became referenced, marking it for programming chainName="cali-sm-calief280df4517"
2024-05-09 03:54:50.690 [INFO][77] felix/table.go 508: Queueing update of chain. chainName="cali-from-endpoint-mark" ipVersion=0x4 table="filter"
2024-05-09 03:54:50.690 [INFO][77] felix/endpoint_mgr.go 488: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.690 [INFO][77] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.701 [INFO][77] felix/status_combiner.go 78: Endpoint down for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"} ipVersion=0x4 status="down"
2024-05-09 03:54:50.701 [INFO][77] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"} status="down"
2024-05-09 03:54:50.784 [INFO][77] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=1262 ifaceName="calief280df4517" state="down"
2024-05-09 03:54:50.784 [INFO][77] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calief280df4517", State:"down", Index:1262}
2024-05-09 03:54:50.784 [INFO][77] felix/endpoint_mgr.go 429: Workload interface state changed; marking for status update. ifaceName="calief280df4517"
2024-05-09 03:54:50.784 [INFO][77] felix/endpoint_mgr.go 488: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=false status="down" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.784 [INFO][77] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="down" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.785 [INFO][77] felix/int_dataplane.go 1277: Linux interface addrs changed. addrs=set.Set{} ifaceName="calief280df4517"
2024-05-09 03:54:50.785 [INFO][77] felix/int_dataplane.go 1241: Linux interface state changed. ifIndex=1262 ifaceName="calief280df4517" state="up"
2024-05-09 03:54:50.785 [INFO][77] felix/status_combiner.go 78: Endpoint down for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"} ipVersion=0x4 status="down"
2024-05-09 03:54:50.785 [INFO][77] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"} status="down"
2024-05-09 03:54:50.785 [INFO][77] felix/int_dataplane.go 1713: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"calief280df4517", Addrs:set.Typed[string]{}}
2024-05-09 03:54:50.785 [INFO][77] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"calief280df4517", Addrs:set.Typed[string]{}}
2024-05-09 03:54:50.785 [INFO][77] felix/int_dataplane.go 1695: Received interface update msg=&intdataplane.ifaceUpdate{Name:"calief280df4517", State:"up", Index:1262}
2024-05-09 03:54:50.785 [INFO][77] felix/endpoint_mgr.go 372: Workload interface came up, marking for reconfiguration. ifaceName="calief280df4517"
2024-05-09 03:54:50.786 [INFO][77] felix/endpoint_mgr.go 429: Workload interface state changed; marking for status update. ifaceName="calief280df4517"
2024-05-09 03:54:50.786 [WARNING][77] felix/endpoint_mgr.go 1296: Could not set accept_ra error=open /proc/sys/net/ipv6/conf/calief280df4517/accept_ra: no such file or directory ifaceName="calief280df4517"
2024-05-09 03:54:50.786 [INFO][77] felix/endpoint_mgr.go 1212: Applying /proc/sys configuration to interface. ifaceName="calief280df4517"
2024-05-09 03:54:50.786 [INFO][77] felix/endpoint_mgr.go 488: Re-evaluated workload endpoint status adminUp=true failed=false known=true operUp=true status="up" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.786 [INFO][77] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="up" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"}
2024-05-09 03:54:50.787 [INFO][77] felix/status_combiner.go 81: Endpoint up for at least one IP version id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"} ipVersion=0x4 status="up"
2024-05-09 03:54:50.787 [INFO][77] felix/status_combiner.go 98: Reporting combined status. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"nvidia-device-plugin/nfd-node-feature-discovery-worker-rd74g", EndpointId:"eth0"} status="up"
bird: Mesh_10_1_0_13: State changed to start
bird: Mesh_10_1_0_27: State changed to feed
bird: Mesh_10_1_0_27: State changed to up
```
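The `bird: Mesh_10_1_0_*` lines show the node-to-node BGP mesh sessions cycling through start/feed/up, consistent with peers restarting. One way to inspect the live BGP session state (assuming `calicoctl` is available; this subcommand must be run on the node itself) is:

```sh
# Prints BIRD status and the state of each BGP peer (Established, Active, ...)
sudo calicoctl node status
```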

Output of `kubectl describe` for one of the crashing pods:

```
root@master-01:~# kubectl describe pod -n kube-system calico-node-b7ls2
Name: calico-node-b7ls2
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Service Account: calico-node
Node: master-01/10.1.0.30
Start Time: Thu, 09 May 2024 11:17:35 +0800
Labels: controller-revision-hash=6f9559595c
k8s-app=calico-node
pod-template-generation=6
Annotations:
Status: Running
IP: 10.1.0.30
IPs:
IP: 10.1.0.30
Controlled By: DaemonSet/calico-node
Init Containers:
install-cni:
Container ID: containerd://066fada8c5e4d4a0fd1125d6a78ed6d41038c15121b5d480c3dec5bc0e3813d3
Image: calico/cni:v3.24.5
Image ID: docker.io/calico/cni@sha256:e282ea2914c806b5de2976330a17cfb5e6dcef47147bceb1432ca5c75fd46f50
Port:
Host Port:
Command:
/opt/cni/bin/install
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 May 2024 11:52:21 +0800
Finished: Thu, 09 May 2024 11:52:22 +0800
Ready: True
Restart Count: 6
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
CNI_CONF_NAME: 10-calico.conflist
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
CNI_MTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
SLEEP: false
Mounts:
/calico-secrets from etcd-certs (rw)
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7t6lm (ro)
mount-bpffs:
Container ID: containerd://b237dde71cfca1a52ecaec6cd753dddbb6b770b73d4b9315f45c33dfac29fded
Image: calico/node:v3.24.5
Image ID: docker.io/calico/node@sha256:5972ad2bcbdc668564d3e26960c9c513b2d7b05581c704747cf7c62ef3a405a6
Port:
Host Port:
Command:
calico-node
-init
-best-effort
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 May 2024 11:52:22 +0800
Finished: Thu, 09 May 2024 11:52:22 +0800
Ready: True
Restart Count: 0
Environment:
Mounts:
/nodeproc from nodeproc (ro)
/sys/fs from sys-fs (rw)
/var/run/calico from var-run-calico (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7t6lm (ro)
Containers:
calico-node:
Container ID: containerd://cb6efd6f022767181c833f33bd1f2c59607fcbe54a2994aa9823e1b5b4a6c916
Image: calico/node:v3.24.5
Image ID: docker.io/calico/node@sha256:5972ad2bcbdc668564d3e26960c9c513b2d7b05581c704747cf7c62ef3a405a6
Port:
Host Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 May 2024 11:49:09 +0800
Finished: Thu, 09 May 2024 11:52:21 +0800
Ready: False
Restart Count: 11
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 250m
memory: 1Gi
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
ETCD_ENDPOINTS: <set to the key 'etcd_endpoints' of config map 'calico-config'> Optional: false
ETCD_CA_CERT_FILE: <set to the key 'etcd_ca' of config map 'calico-config'> Optional: false
ETCD_KEY_FILE: <set to the key 'etcd_key' of config map 'calico-config'> Optional: false
ETCD_CERT_FILE: <set to the key 'etcd_cert' of config map 'calico-config'> Optional: false
CALICO_K8S_NODE_REF: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: k8s,bgp
IP: autodetect
IP_AUTODETECTION_METHOD: interface=eth1
CALICO_IPV4POOL_IPIP: always
FELIX_IPINIPMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_VXLANMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_WIREGUARDMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
CALICO_IPV4POOL_CIDR: 10.182.0.0/16
CALICO_DISABLE_FILE_LOGGING: true
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
FELIX_IPV6SUPPORT: false
FELIX_HEALTHENABLED: true
FELIX_KUBENODEPORTRANGES: 30000:32767
FELIX_PROMETHEUSMETRICSENABLED: false
Mounts:
/calico-secrets from etcd-certs (rw)
/host/etc/cni/net.d from cni-net-dir (rw)
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/sys/fs/bpf from bpffs (rw)
/var/lib/calico from var-lib-calico (rw)
/var/log/calico/cni from cni-log-dir (ro)
/var/run/calico from var-run-calico (rw)
/var/run/nodeagent from policysync (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7t6lm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/lib/calico
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
sys-fs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/
HostPathType: DirectoryOrCreate
bpffs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/bpf
HostPathType: Directory
nodeproc:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /etc/cni/net.d
HostPathType:
cni-log-dir:
Type: HostPath (bare host directory volume)
Path: /var/log/calico/cni
HostPathType:
etcd-certs:
Type: Secret (a volume populated by a Secret)
SecretName: calico-etcd-secrets
Optional: false
policysync:
Type: HostPath (bare host directory volume)
Path: /var/run/nodeagent
HostPathType: DirectoryOrCreate
kube-api-access-7t6lm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoSchedule op=Exists
:NoExecute op=Exists
CriticalAddonsOnly op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Normal Scheduled 38m default-scheduler Successfully assigned kube-system/calico-node-b7ls2 to master-01
Normal Pulled 37m (x2 over 38m) kubelet Container image "calico/node:v3.24.5" already present on machine
Normal Created 37m (x2 over 38m) kubelet Created container mount-bpffs
Normal Started 37m (x2 over 38m) kubelet Started container mount-bpffs
Normal Pulled 37m (x2 over 38m) kubelet Container image "calico/node:v3.24.5" already present on machine
Normal Created 37m (x2 over 38m) kubelet Created container calico-node
Normal Started 37m (x2 over 38m) kubelet Started container calico-node
Normal Pulled 36m (x3 over 38m) kubelet Container image "calico/cni:v3.24.5" already present on machine
Normal Created 36m (x3 over 38m) kubelet Created container install-cni
Normal Started 36m (x3 over 38m) kubelet Started container install-cni
Normal SandboxChanged 36m (x2 over 38m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Killing 30m (x3 over 38m) kubelet Stopping container calico-node
Warning BackOff 27m (x8 over 36m) kubelet Back-off restarting failed container calico-node in pod calico-node-b7ls2_kube-system(18bd1ecd-c69f-43f4-9b50-8760ff063043)
Warning BackOff 23m kubelet Back-off restarting failed container install-cni in pod calico-node-b7ls2_kube-system(18bd1ecd-c69f-43f4-9b50-8760ff063043)
Normal SandboxChanged 23m (x2 over 23m) kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 23m (x2 over 23m) kubelet Container image "calico/cni:v3.24.5" already present on machine
Normal Created 23m (x2 over 23m) kubelet Created container install-cni
Normal Started 23m (x2 over 23m) kubelet Started container install-cni
Normal Created 23m (x2 over 23m) kubelet Created container mount-bpffs
Normal Started 23m (x2 over 23m) kubelet Started container mount-bpffs
Normal Pulled 23m (x2 over 23m) kubelet Container image "calico/node:v3.24.5" already present on machine
Normal Pulled 23m (x2 over 23m) kubelet Container image "calico/node:v3.24.5" already present on machine
Normal Created 23m (x2 over 23m) kubelet Created container calico-node
Normal Started 23m (x2 over 23m) kubelet Started container calico-node
Warning BackOff 8m40s (x47 over 23m) kubelet Back-off restarting failed container calico-node in pod calico-node-b7ls2_kube-system(18bd1ecd-c69f-43f4-9b50-8760ff063043)
Normal Killing 3m19s (x8 over 23m) kubelet Stopping container calico-node
```
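Worth noting: the calico-node container's last termination shows `Reason: Completed` with `Exit Code: 0`, and the events include repeated `Stopping container calico-node` and `Pod sandbox changed` entries, which suggests the kubelet is killing the pod (for example on failing liveness/readiness probes or sandbox recreation) rather than the process crashing on its own. Two generic checks that may help narrow this down (standard kubectl/journalctl, nothing cluster-specific):

```sh
# Events recorded for this pod, newest last
kubectl get events -n kube-system \
  --field-selector involvedObject.name=calico-node-b7ls2 \
  --sort-by=.lastTimestamp

# Kubelet logs on master-01 often say why a sandbox was recreated or a probe failed
journalctl -u kubelet --since "1 hour ago" | grep -iE 'calico|sandbox|probe'
```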

@caseydavenport
Member

@guquanheng could you please update the description to use backticks to properly format the terminal output? It's very hard to read in this formatting.

@guquanheng
Author

guquanheng commented May 15, 2024 via email
