Describe the bug
I want to use kube-vip to provide a VIP address that my Cilium policy can reference (for firewalling reasons). My Cilium policy selects nodes by the kube-vip.io/has-ip label set by kube-vip, but the label is set on the wrong node.
To Reproduce
Steps to reproduce the behavior:
Apply the YAML below and wait for the kube-vip.io/has-ip label to appear with the correct IP on the right node. In practice, the VIP is assigned correctly to my node on the ens192 interface, but the label sometimes disappears or is set on the wrong node.
Expected behavior
The label is set on the node which holds the VIP address, and no label is set on the other nodes.
Screenshots
In my screenshot, the first three panes show an SSH connection to each of my three masters. We can see that the IP address .51 (set by kube-vip) is present, but the kube-vip.io/has-ip label is set on node .170 (the third) instead of node .166 (the first).
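The screenshot itself isn't reproduced here; the same mismatch can be checked with commands along these lines (interface and label taken from this report):

# On each master, over SSH: is the VIP bound to ens192?
ip addr show dev ens192

# From any kubectl context: which node carries the label, and with what value?
kubectl get nodes -L kube-vip.io/has-ip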
Environment (please complete the following information):
OS/Distro: RedHat RHEL 9.3
Kubernetes Version: 1.24.17
Kube-vip Version: 0.6.4 and 0.7.0
Kube-vip.yaml:
If possible, add your kube-vip manifest (please remove anything that is confidential)
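The manifest itself wasn't attached to this report; for orientation, a minimal static-pod manifest for this kind of setup might look like the sketch below (ARP mode on ens192 as described above; the address is a placeholder, not the redacted VIP from the report):

apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.7.0   # version from this report
    args:
    - manager
    env:
    - name: vip_arp            # ARP mode
      value: "true"
    - name: vip_interface      # interface from this report
      value: ens192
    - name: address            # placeholder; the real VIP is redacted above
      value: 10.0.0.51
    - name: cp_enable          # control-plane VIP
      value: "true"
    - name: vip_leaderelection # leader election between the masters
      value: "true"
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW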
I'm not sure if this behaviour is intended, but it is defined in pkg/manager/manager_arp.go at l121.
The label value is always the value of the address config parameter, which can lead to confusion if you use more than one IP with your kube-vip instance. It also makes it impossible to track where a given IP is assigned by looking at the node label.
Expected behavior
When using multiple addresses, kube-vip should label the node with the full set of addresses that the node actually holds.
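Since a comma-separated list is not a legal label value (Kubernetes label values allow only alphanumerics, '-', '_' and '.'), one workable shape is one label key per held address. Here is a minimal client-go sketch of that idea (hypothetical, not kube-vip code; the node name and VIPs are placeholders):

// Hypothetical sketch, not kube-vip source: label a node with one
// key per VIP it holds, e.g. kube-vip.io/has-ip.10.0.0.51=true,
// instead of a single kube-vip.io/has-ip value.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// VIPs this node currently holds (placeholder values).
	vips := []string{"10.0.0.51", "10.0.0.52"}

	// Build a merge patch adding one label per address. Dots are
	// legal in label keys, so the IP can live in the key itself.
	patch := `{"metadata":{"labels":{`
	for i, vip := range vips {
		if i > 0 {
			patch += ","
		}
		patch += fmt.Sprintf(`"kube-vip.io/has-ip.%s":"true"`, vip)
	}
	patch += `}}}`

	_, err = client.CoreV1().Nodes().Patch(context.TODO(), "node01",
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}

With per-address keys, each VIP stays individually trackable with an ordinary label selector, which is exactly what the node label can't do today.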
Additional thoughts
While troubleshooting this issue, I've also noticed that when a node (e.g. node01) loses connectivity to the control plane while holding a VIP, another node (e.g. node02) assumes leadership (as expected) and adds the kube-vip.io/has-ip label to itself without removing the label from node01. The label remains "duplicated" until node01 reconnects to the cluster.
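Until that's addressed, the stale copy can be spotted and cleared by hand (node name illustrative; the trailing '-' removes a label):

# List every node carrying the label, with its value
kubectl get nodes -l kube-vip.io/has-ip -L kube-vip.io/has-ip

# Drop the stale label from the node that no longer holds the VIP
kubectl label node node01 kube-vip.io/has-ip-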
Additional context
Cluster: RKE2 1.24 (on Rancher) with Cilium CNI
My Cilium policy:
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  annotations:
  name: egress-sample
spec:
  destinationCIDRs:
  - 0.0.0.0/0
  egressGateway:
    egressIP: 10.xx.xx.51
    nodeSelector:
      matchLabels:
        kube-vip.io/has-ip: 10.xx.xx.51
  selectors:
  - podSelector:
      matchLabels:
        app: nginx
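A quick way to sanity-check that this nodeSelector matches exactly one node is to query the same selector directly:

kubectl get nodes -l kube-vip.io/has-ip=10.xx.xx.51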