Is your feature request related to a problem? Please describe.
Should kube-vip support moving the virtual IP to another node when the control plane node holding it becomes NotReady for some reason?
Describe the solution you'd like
When control plane A, which currently holds the virtual IP, becomes NotReady, the virtual IP should move to control plane B or C.
Describe alternatives you've considered
I simulated a network shutdown scenario and kube-vip handles it fine. I would like some configuration options to control the conditions under which the virtual IP moves, such as node NotReady, network disconnected, and so on. For example:
When the jump condition is set to "node NotReady":
Either the node becoming NotReady or a network disconnection makes the virtual IP move to another node.
When the jump condition is set to "network disconnected":
The virtual IP does not move when the node is only NotReady.
The virtual IP moves when the network is disconnected.
Additional context
Or does this functionality already exist and I am simply missing it because my kube-vip version is not the latest?
Here is the manifest for my kube-vip static pod.
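The reporter's actual manifest is not reproduced here. For reference, a typical ARP-mode control plane static pod manifest looks roughly like the following; the image version, interface, VIP address and lease timings are placeholders, not the reporter's values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.6.4   # placeholder version
    args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_interface
      value: "eth0"                            # placeholder interface
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: "kube-system"
    - name: vip_leaderelection
      value: "true"
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: "192.168.0.100"                   # placeholder VIP
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
```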
Similar to the problem described here, I have also observed that the VIP does not move to another control plane host when the apiserver on the host currently holding the VIP is down but kube-vip itself is still running. I would expect the kube-vip instance on one of the hosts with a working apiserver to take over the leader lease and claim the VIP, but this does not happen. In a haproxy/keepalived setup this can be achieved by configuring an apiserver health check script in keepalived on each node: an apiserver failure then lowers that instance's priority within the keepalived cluster, and the VIP moves to a node with a working apiserver. Is this possible with kube-vip?
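For reference, the keepalived pattern described above usually looks roughly like this; the interface name, VIP, priority and script path are placeholders:

```
vrrp_script check_apiserver {
    # the script typically probes https://localhost:6443/healthz
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 42
    }
    virtual_ipaddress {
        192.168.0.100
    }
    track_script {
        check_apiserver
    }
}
```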
Currently this is not possible, since kube-vip uses the leaderelection package to achieve active-standby HA. Only when the current leader fails to renew its lease (i.e. fails to talk to the apiserver to update the lease) can another follower acquire leadership by claiming the lease.
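To illustrate why, here is a minimal sketch of how the client-go leaderelection package is typically used. This is not kube-vip's actual code; the lease name, kubeconfig path and timings are assumptions:

```go
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	// The Lease object every instance competes for (name is illustrative).
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   5 * time.Second, // followers wait this long before trying to take over
		RenewDeadline:   3 * time.Second, // the leader must renew via the apiserver within this window
		RetryPeriod:     1 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// advertise the VIP on this node (gratuitous ARP, etc.)
			},
			OnStoppedLeading: func() {
				// withdraw the VIP; another instance can now claim the lease
			},
			OnNewLeader: func(identity string) {},
		},
	})
}
```

The key point is that failover is driven entirely by lease renewal: as long as the current leader can still reach an apiserver and renew the lease, nothing else (a NotReady node condition, a dead local apiserver) causes the followers to take over.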
One improvement that could be made is to add a node controller, or a separate process, that watches the current node's status: if the node has been NotReady for, say, one minute, kube-vip does a fatal exit so that another pod can try to acquire leadership. Care just needs to be taken in choosing how long kube-vip should wait before deciding to exit.
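A rough sketch of what that could look like, polling the local Node object and exiting after a grace period. The function, its parameters and the NODE_NAME wiring are hypothetical and not part of kube-vip today:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// watchNodeReadiness polls this node's status and terminates the process once the
// node has been NotReady for longer than the grace period, so that a kube-vip pod
// on a healthy node can claim the leader lease.
func watchNodeReadiness(ctx context.Context, client kubernetes.Interface, nodeName string, grace time.Duration) {
	var notReadySince *time.Time
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				// Cannot reach the apiserver at all; the lease-renewal path already
				// covers that failure mode, so do nothing here.
				continue
			}
			ready := false
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if ready {
				notReadySince = nil
				continue
			}
			now := time.Now()
			if notReadySince == nil {
				notReadySince = &now
			} else if now.Sub(*notReadySince) > grace {
				// Fatal exit: the pod restarts as a follower and another instance
				// can acquire the leader lease in the meantime.
				log.Fatalf("node %s NotReady for more than %s, exiting to release leadership", nodeName, grace)
			}
		}
	}
}

func main() {
	// Illustrative wiring: build a client from the mounted admin.conf and watch the
	// node this pod runs on (NODE_NAME injected via the downward API is assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	watchNodeReadiness(context.Background(), client, os.Getenv("NODE_NAME"), time.Minute)
}
```

The grace period is the trade-off mentioned above: too short and a transient kubelet hiccup bounces the VIP around; too long and clients keep talking to a dead node.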