Cloud Controller Manager doesn't add droplets to Load Balancer #639
The message indicates that the nodes are either not there at all or in a state that does not allow them to be added, namely being unready. Can you check on that? And if it looks good, could you share the YAML output of a node that you think should be added but isn't?
I have a simple cluster with one control plane and one worker node. All nodes are in Ready status and I can run pods normally. When I added the droplets to the load balancer manually, the application became available from the outside world. Here is the YAML of the worker node, which should definitely be attached to the load balancer automatically:

apiVersion: v1
kind: Node
metadata:
annotations:
alpha.kubernetes.io/provided-node-ip: 10.114.0.2
csi.volume.kubernetes.io/nodeid: '{"csi.tigera.io":"ivinco-k8s-worker01","driver.longhorn.io":"ivinco-k8s-worker01"}'
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: "0"
projectcalico.org/IPv4Address: 10.114.0.2/20
projectcalico.org/IPv4VXLANTunnelAddr: 192.168.205.0
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2023-07-03T18:34:32Z"
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/arch: amd64
kubernetes.io/hostname: ivinco-k8s-worker01
kubernetes.io/os: linux
name: ivinco-k8s-worker01
resourceVersion: "406517"
uid: 3de39ee4-3584-4cda-abfa-2e86e29c14bf
spec:
podCIDR: 192.168.1.0/24
podCIDRs:
- 192.168.1.0/24
status:
addresses:
- address: 10.114.0.2
type: InternalIP
- address: ivinco-k8s-worker01
type: Hostname
- address: 161.35.218.29
type: ExternalIP
allocatable:
cpu: "2"
ephemeral-storage: "56911881737"
hugepages-2Mi: "0"
memory: 1912080Ki
pods: "110"
capacity:
cpu: "2"
ephemeral-storage: 60306Mi
hugepages-2Mi: "0"
memory: 2014480Ki
pods: "110"
conditions:
- lastHeartbeatTime: "2023-07-04T19:09:13Z"
lastTransitionTime: "2023-07-04T19:09:13Z"
message: Calico is running on this node
reason: CalicoIsUp
status: "False"
type: NetworkUnavailable
- lastHeartbeatTime: "2023-07-05T10:34:37Z"
lastTransitionTime: "2023-07-04T19:09:08Z"
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: "2023-07-05T10:34:37Z"
lastTransitionTime: "2023-07-04T19:09:08Z"
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: "2023-07-05T10:34:37Z"
lastTransitionTime: "2023-07-04T19:09:08Z"
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: "2023-07-05T10:34:37Z"
lastTransitionTime: "2023-07-04T19:09:08Z"
message: kubelet is posting ready status. AppArmor enabled
reason: KubeletReady
status: "True"
type: Ready
daemonEndpoints:
kubeletEndpoint:
Port: 10250
images:
- names:
- docker.io/longhornio/longhorn-engine@sha256:32170c96be28e47c9a66c85cca65c5df8637f3d2b0a1b2034c02ec186ac6996a
- docker.io/longhornio/longhorn-engine:v1.4.2
sizeBytes: 267482077
- names:
- docker.io/longhornio/longhorn-instance-manager@sha256:9157fe047fba170c70ac5ce83d5cc9e2fcdcd3b7a73885efb2ce786c2c69bd1d
- docker.io/longhornio/longhorn-instance-manager:v1.4.2
sizeBytes: 266275737
- names:
- registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
sizeBytes: 113903192
- names:
- registry.k8s.io/ingress-nginx/controller@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
sizeBytes: 111174024
- names:
- docker.io/longhornio/longhorn-manager@sha256:b318015eee458b08c2fc8a38fd5bd947cb9df2dd747c20b5ddb38163f131a732
- docker.io/longhornio/longhorn-manager:v1.4.2
sizeBytes: 98278778
- names:
- docker.io/calico/cni@sha256:3be3c67ddba17004c292eafec98cc49368ac273b40b27c8a6621be4471d348d6
- docker.io/calico/cni:v3.26.1
sizeBytes: 93375656
- names:
- docker.io/calico/node@sha256:8e34517775f319917a0be516ed3a373dbfca650d1ee8e72158087c24356f47fb
- docker.io/calico/node:v3.26.1
sizeBytes: 86592218
- names:
- docker.io/longhornio/longhorn-ui@sha256:90b33eae3a3c5a1c932ae49589d0b0d9a33e88f55f00241e8c0cb175d819e031
- docker.io/longhornio/longhorn-ui:v1.4.2
sizeBytes: 72622024
- names:
- docker.io/library/httpd@sha256:7d45cb2af5484a1be9fdfc27ab5ada4fb8d23efdd0e34e7dd0aa994438c9f07b
- docker.io/library/httpd:latest
sizeBytes: 64694091
- names:
- docker.io/library/httpd@sha256:af30bcefc95cd5189701651fd9c710d411dc4b8536f93ef542cfa6f1fa32cb53
sizeBytes: 64693977
- names:
- docker.io/paulbouwer/hello-kubernetes@sha256:2ad94733189b30844049caed7e17711bf11ed9d1116eaf10000586081451690b
- docker.io/paulbouwer/hello-kubernetes:1.10
sizeBytes: 45466760
- names:
- docker.io/digitalocean/digitalocean-cloud-controller-manager@sha256:28b7564201b0c04620c7c9b9daf901a3a9ce688f072fd69d3a56d15f157f76fd
- docker.io/digitalocean/digitalocean-cloud-controller-manager:v0.1.43
sizeBytes: 35515603
- names:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
- registry.k8s.io/kube-proxy:v1.27.3
sizeBytes: 23897400
- names:
- docker.io/longhornio/csi-snapshotter@sha256:a3c71d6b1ecaefe69330312326b2ccf016a83af67e9b2bc2ad0685dc388c35df
- docker.io/longhornio/csi-snapshotter:v5.0.1
sizeBytes: 22163656
- names:
- docker.io/longhornio/csi-attacher@sha256:365b6e745404cf3eb39ee9bcc55b1af3c4f5357e44dddd0270c3cbb06b252a64
- docker.io/longhornio/csi-attacher:v3.4.0
sizeBytes: 22084988
- names:
- docker.io/longhornio/csi-resizer@sha256:5ce3edf2ab0167a76d09d949a644677ec42428e2b735f6ebabab76a2b3432475
- docker.io/longhornio/csi-resizer:v1.3.0
sizeBytes: 21671030
- names:
- docker.io/longhornio/csi-provisioner@sha256:daa7828fd45897fdeec17d6e987bc54cdc0a0928a7ba6b62d8f8496d6aa5d3be
- docker.io/longhornio/csi-provisioner:v2.1.2
sizeBytes: 21220203
- names:
- registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
sizeBytes: 20266348
- names:
- docker.io/library/nginx@sha256:5e1ccef1e821253829e415ac1e3eafe46920aab0bf67e0fe8a104c57dbfffdf7
- docker.io/library/nginx:stable-alpine
sizeBytes: 16844563
- names:
- docker.io/calico/node-driver-registrar@sha256:fb4dc4863c20b7fe1e9dd5e52dcf004b74e1af07d783860a08ddc5c50560f753
- docker.io/calico/node-driver-registrar:v3.26.1
sizeBytes: 10956618
- names:
- docker.io/longhornio/csi-node-driver-registrar@sha256:b3288cdcb832c30acc90f0d6363fc681880f46448f0175078bbd07f2a178634e
- docker.io/longhornio/csi-node-driver-registrar:v2.5.0
sizeBytes: 9132327
- names:
- docker.io/calico/csi@sha256:4e05036f8ad1c884ab52cae0f1874839c27c407beaa8f008d7a28d113ad9e5ed
- docker.io/calico/csi:v3.26.1
sizeBytes: 8910887
- names:
- docker.io/longhornio/livenessprobe@sha256:4a8917674f9eb175ad7d9339d8a5bc0f6f1376612f9abfc3854e7e7282bde75d
- docker.io/longhornio/livenessprobe:v2.8.0
sizeBytes: 8892153
- names:
- docker.io/calico/pod2daemon-flexvol@sha256:2aefd77a4f8289c88cfe24c0db38822de3132292d1ea4ac9192abc9583e4b54c
- docker.io/calico/pod2daemon-flexvol:v3.26.1
sizeBytes: 7291173
- names:
- docker.io/library/alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69
- docker.io/library/alpine:3.12
sizeBytes: 2811698
- names:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
- registry.k8s.io/pause:3.8
sizeBytes: 311286
nodeInfo:
architecture: amd64
bootID: 2dce3c79-5153-4279-bf7f-5b5365f5ec0d
containerRuntimeVersion: containerd://1.7.2
kernelVersion: 6.1.0-9-amd64
kubeProxyVersion: v1.27.3
kubeletVersion: v1.27.3
machineID: 1775e2c46bb860db6e8b17d764a2f784
operatingSystem: linux
osImage: Debian GNU/Linux 12 (bookworm)
systemUUID: 1992b643-962e-49e3-9528-00041357bb0a

I wonder whether any labels need to be added to the k8s nodes for them to be managed by the DigitalOcean cloud controller manager, but I didn't find any information related to this in the official documentation.
@lev-stas you shouldn't have to add any labels. There is a label that keeps CCM from considering a node for inclusion in the LB rotation, but that's not one you seem to have set. There can be other conditions, though, that determine whether a node is included. The corresponding filtering logic starts here and then calls into this method. The first feature gate
The provider ID must either be set explicitly on the kubelet or discovered by CCM through the DO API. What's a bit odd to me is that CCM shouldn't consider the node done provisioning if the provider ID hasn't been discovered, yet in that case the node should still carry an "uninitialized" taint, which I don't see. 🤔 So I'm not quite sure how the node could have reached its current state, but maybe these observations already help you somehow.
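The two conditions mentioned above can be checked directly. A minimal sketch, assuming kubectl access (the node name is taken from the manifest above); the offline part mimics the check against a saved, stripped-down copy of the node spec:

```shell
# On a live cluster (assumes kubectl access; node name from the manifest above):
#   kubectl get node ivinco-k8s-worker01 -o jsonpath='{.spec.providerID}'
#   kubectl describe node ivinco-k8s-worker01 | grep -i taints
# Offline illustration with a saved copy of the node spec, which (like the
# node above) carries neither a providerID nor the uninitialized taint:
cat > /tmp/node-spec.yaml <<'EOF'
spec:
  podCIDR: 192.168.1.0/24
  podCIDRs:
  - 192.168.1.0/24
EOF
grep -q 'providerID:' /tmp/node-spec.yaml || echo "providerID missing"
grep -q 'node.cloudprovider.kubernetes.io/uninitialized' /tmp/node-spec.yaml \
  || echo "uninitialized taint absent"
```

An empty `providerID` together with a missing `node.cloudprovider.kubernetes.io/uninitialized` taint is exactly the odd combination described above.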
@timoreimann thank you for your answer. As I mentioned in the initial comment, I've tried both variants with
But there is no

apiVersion: v1
kind: Node
metadata:
annotations:
alpha.kubernetes.io/provided-node-ip: 10.114.0.2
csi.volume.kubernetes.io/nodeid: '{"csi.tigera.io":"ivinco-k8s-worker01","driver.longhorn.io":"ivinco-k8s-worker01"}'
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: "0"
projectcalico.org/IPv4Address: 10.114.0.2/20
projectcalico.org/IPv4VXLANTunnelAddr: 192.168.205.0
volumes.kubernetes.io/controller-managed-attach-detach: "true"
creationTimestamp: "2023-07-03T18:34:32Z"
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/arch: amd64
kubernetes.io/hostname: ivinco-k8s-worker01
kubernetes.io/os: linux
name: ivinco-k8s-worker01
resourceVersion: "678507"
uid: 3de39ee4-3584-4cda-abfa-2e86e29c14bf
spec:
podCIDR: 192.168.1.0/24
podCIDRs:
- 192.168.1.0/24
status:
addresses:
- address: 10.114.0.2
type: InternalIP
- address: ivinco-k8s-worker01
type: Hostname
- address: 161.35.218.29
type: ExternalIP
allocatable:
cpu: "2"
ephemeral-storage: "56911881737"
hugepages-2Mi: "0"
memory: 1912064Ki
pods: "110"
capacity:
cpu: "2"
ephemeral-storage: 60306Mi
hugepages-2Mi: "0"
memory: 2014464Ki
pods: "110"
conditions:
- lastHeartbeatTime: "2023-07-06T13:03:37Z"
lastTransitionTime: "2023-07-06T13:03:37Z"
message: Calico is running on this node
reason: CalicoIsUp
status: "False"
type: NetworkUnavailable
- lastHeartbeatTime: "2023-07-06T13:03:29Z"
lastTransitionTime: "2023-07-06T13:03:29Z"
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
status: "False"
type: MemoryPressure
- lastHeartbeatTime: "2023-07-06T13:03:29Z"
lastTransitionTime: "2023-07-06T13:03:29Z"
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
status: "False"
type: DiskPressure
- lastHeartbeatTime: "2023-07-06T13:03:29Z"
lastTransitionTime: "2023-07-06T13:03:29Z"
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
status: "False"
type: PIDPressure
- lastHeartbeatTime: "2023-07-06T13:03:29Z"
lastTransitionTime: "2023-07-06T13:03:29Z"
message: kubelet is posting ready status. AppArmor enabled
reason: KubeletReady
status: "True"
type: Ready
daemonEndpoints:
kubeletEndpoint:
Port: 10250
images:
- names:
- docker.io/longhornio/longhorn-engine@sha256:32170c96be28e47c9a66c85cca65c5df8637f3d2b0a1b2034c02ec186ac6996a
- docker.io/longhornio/longhorn-engine:v1.4.2
sizeBytes: 267482077
- names:
- docker.io/longhornio/longhorn-instance-manager@sha256:9157fe047fba170c70ac5ce83d5cc9e2fcdcd3b7a73885efb2ce786c2c69bd1d
- docker.io/longhornio/longhorn-instance-manager:v1.4.2
sizeBytes: 266275737
- names:
- registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
sizeBytes: 113903192
- names:
- registry.k8s.io/ingress-nginx/controller@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
sizeBytes: 111174024
- names:
- docker.io/longhornio/longhorn-manager@sha256:b318015eee458b08c2fc8a38fd5bd947cb9df2dd747c20b5ddb38163f131a732
- docker.io/longhornio/longhorn-manager:v1.4.2
sizeBytes: 98278778
- names:
- docker.io/calico/cni@sha256:3be3c67ddba17004c292eafec98cc49368ac273b40b27c8a6621be4471d348d6
- docker.io/calico/cni:v3.26.1
sizeBytes: 93375656
- names:
- docker.io/calico/node@sha256:8e34517775f319917a0be516ed3a373dbfca650d1ee8e72158087c24356f47fb
- docker.io/calico/node:v3.26.1
sizeBytes: 86592218
- names:
- docker.io/longhornio/longhorn-ui@sha256:90b33eae3a3c5a1c932ae49589d0b0d9a33e88f55f00241e8c0cb175d819e031
- docker.io/longhornio/longhorn-ui:v1.4.2
sizeBytes: 72622024
- names:
- docker.io/library/httpd@sha256:7d45cb2af5484a1be9fdfc27ab5ada4fb8d23efdd0e34e7dd0aa994438c9f07b
- docker.io/library/httpd:latest
sizeBytes: 64694091
- names:
- docker.io/library/httpd@sha256:af30bcefc95cd5189701651fd9c710d411dc4b8536f93ef542cfa6f1fa32cb53
sizeBytes: 64693977
- names:
- docker.io/paulbouwer/hello-kubernetes@sha256:2ad94733189b30844049caed7e17711bf11ed9d1116eaf10000586081451690b
- docker.io/paulbouwer/hello-kubernetes:1.10
sizeBytes: 45466760
- names:
- docker.io/digitalocean/digitalocean-cloud-controller-manager@sha256:28b7564201b0c04620c7c9b9daf901a3a9ce688f072fd69d3a56d15f157f76fd
- docker.io/digitalocean/digitalocean-cloud-controller-manager:v0.1.43
sizeBytes: 35515603
- names:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
- registry.k8s.io/kube-proxy:v1.27.3
sizeBytes: 23897400
- names:
- docker.io/longhornio/csi-snapshotter@sha256:a3c71d6b1ecaefe69330312326b2ccf016a83af67e9b2bc2ad0685dc388c35df
- docker.io/longhornio/csi-snapshotter:v5.0.1
sizeBytes: 22163656
- names:
- docker.io/longhornio/csi-attacher@sha256:365b6e745404cf3eb39ee9bcc55b1af3c4f5357e44dddd0270c3cbb06b252a64
- docker.io/longhornio/csi-attacher:v3.4.0
sizeBytes: 22084988
- names:
- docker.io/longhornio/csi-resizer@sha256:5ce3edf2ab0167a76d09d949a644677ec42428e2b735f6ebabab76a2b3432475
- docker.io/longhornio/csi-resizer:v1.3.0
sizeBytes: 21671030
- names:
- docker.io/longhornio/csi-provisioner@sha256:daa7828fd45897fdeec17d6e987bc54cdc0a0928a7ba6b62d8f8496d6aa5d3be
- docker.io/longhornio/csi-provisioner:v2.1.2
sizeBytes: 21220203
- names:
- registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
sizeBytes: 20266348
- names:
- docker.io/library/nginx@sha256:5e1ccef1e821253829e415ac1e3eafe46920aab0bf67e0fe8a104c57dbfffdf7
- docker.io/library/nginx:stable-alpine
sizeBytes: 16844563
- names:
- docker.io/calico/node-driver-registrar@sha256:fb4dc4863c20b7fe1e9dd5e52dcf004b74e1af07d783860a08ddc5c50560f753
- docker.io/calico/node-driver-registrar:v3.26.1
sizeBytes: 10956618
- names:
- docker.io/longhornio/csi-node-driver-registrar@sha256:b3288cdcb832c30acc90f0d6363fc681880f46448f0175078bbd07f2a178634e
- docker.io/longhornio/csi-node-driver-registrar:v2.5.0
sizeBytes: 9132327
- names:
- docker.io/calico/csi@sha256:4e05036f8ad1c884ab52cae0f1874839c27c407beaa8f008d7a28d113ad9e5ed
- docker.io/calico/csi:v3.26.1
sizeBytes: 8910887
- names:
- docker.io/longhornio/livenessprobe@sha256:4a8917674f9eb175ad7d9339d8a5bc0f6f1376612f9abfc3854e7e7282bde75d
- docker.io/longhornio/livenessprobe:v2.8.0
sizeBytes: 8892153
- names:
- docker.io/calico/pod2daemon-flexvol@sha256:2aefd77a4f8289c88cfe24c0db38822de3132292d1ea4ac9192abc9583e4b54c
- docker.io/calico/pod2daemon-flexvol:v3.26.1
sizeBytes: 7291173
- names:
- docker.io/library/alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69
- docker.io/library/alpine:3.12
sizeBytes: 2811698
- names:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
- registry.k8s.io/pause:3.8
sizeBytes: 311286
nodeInfo:
architecture: amd64
bootID: e74facdc-7e82-4aae-9dbb-ba9fb3693791
containerRuntimeVersion: containerd://1.7.2
kernelVersion: 6.1.0-10-amd64
kubeProxyVersion: v1.27.3
kubeletVersion: v1.27.3
machineID: 1775e2c46bb860db6e8b17d764a2f784
operatingSystem: linux
osImage: Debian GNU/Linux 12 (bookworm)
systemUUID: 1992b643-962e-49e3-9528-00041357bb0a

I have added this option to
@lev-stas FWIW, for our managed Kubernetes offering DOKS we specify the
But even if you don't specify it, CCM should discover it as I mentioned. If you could share the CCM logs for a newly bootstrapping node (ideally one where the provider ID is specified and one where it isn't), we could look for any messages indicating why things aren't working.
@timoreimann I've added the providerID option to the /var/lib/kubelet/config.yaml config, and it is still not shown in the node manifest.
The message is about the control plane node being tainted as NoSchedule, so I have the CCM pod only on the worker node. Is that the correct setup?
Hey @lev-stas, I think I missed updating this issue, sorry about that. Before digging back into this one: do you still have the problem you described?
I have solved the issue by adding the string
to the file
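The exact string and file aren't quoted above, so as a hypothetical sketch only: earlier in the thread the file /var/lib/kubelet/config.yaml was mentioned, and the kubelet's KubeletConfiguration has a providerID field, so the fix likely looked something like the fragment below. The droplet ID (12345678) is a placeholder; the documented DigitalOcean provider-ID format is digitalocean://<droplet-id>.

```yaml
# Hypothetical sketch of /var/lib/kubelet/config.yaml with the provider ID set.
# 12345678 is a placeholder; use your droplet's real numeric ID.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
providerID: digitalocean://12345678
```

The kubelet must be restarted after the change, and the field only takes effect for a node on initial registration.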
Thanks for confirming.
Hi! I have deployed a k8s cluster on DO droplets and I am using digitalocean-cloud-controller-manager in it. When I create an nginx ingress controller, a DO load balancer is created automatically, but the droplets hosting k8s are not added to it automatically. I see a warning message in the digitalocean-cloud-controller-manager pod with the following content
I have deployed the cloud controller manager following all recommendations. I set
--cloud-provider=external
in the kubelet, and
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
in the kube-apiserver. I also tried adding and removing the
--provider-id
option on the kubelet, but it didn't have any effect. When I add the droplets to the load balancer manually, everything works OK and I can manage traffic to the necessary services. But on any change to the LoadBalancer-type service in k8s, all the settings are dropped and I have to do the manual adding again.
I saw several issues related to this problem, but none of them offer a concrete recipe for solving it.
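For anyone trying the --provider-id route: the value CCM expects follows the pattern digitalocean://<droplet-id>. A small sketch of assembling it; the droplet ID 12345678 is a placeholder, and the metadata call is shown as a comment because it only works from inside a droplet:

```shell
# On the droplet itself, the numeric ID comes from the metadata service:
#   DROPLET_ID=$(curl -s http://169.254.169.254/metadata/v1/id)
DROPLET_ID=12345678   # placeholder for illustration
PROVIDER_ID="digitalocean://${DROPLET_ID}"
echo "$PROVIDER_ID"   # digitalocean://12345678
```

The resulting string is what would be passed as --provider-id (or set in the kubelet config) before the node first registers.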