Control plane load balancer creation loop with multiple instances #784
Comments
Hi, do you mean that if you deploy a single kube-vip instance with IPv6, it won't have the issue? It only happens when you deploy two kube-vip instances?
My apologies, I thought it was, but after disabling the IPv4 instance the issue still occurs, so it doesn't look like it's due to having multiple instances.
Ok, let me check if I can reproduce this.
I can't reproduce this issue in my env, but my kube-proxy is in iptables mode. Looking at your log, it looks like something is conflicting with kube-vip. Since you deployed a single kube-vip instance and are still observing it, I assume it's something else.
From your description in the previous PR, you mentioned the error occurred before you used the dev build that has the fix. But you also passed the below config to your kube-proxy settings; could you check whether that IPv6 net correctly includes the VIP?
One verification would be to disable IPVS for your kube-proxy and see if the issue still persists. I think kube-proxy is incorrectly using the VIP address as the node IP. This can happen because for IPv6 the IP order on the interface is reversed. The workaround we used is to pass node-ip to kubelet. You can verify that by checking which address the node registered.
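A minimal sketch of one way to check this, assuming kubectl access; the node name and addresses are placeholders:

```sh
# If kubelet picked the VIP as the node's InternalIP, kube-proxy will treat
# the VIP as the node IP; the wide output shows the registered INTERNAL-IP.
kubectl get nodes -o wide

# Inspect a single node's InternalIP directly:
kubectl get node <node-name> \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'

# Workaround: pin the node IP explicitly (k3s example; kubelet itself
# accepts the same setting via --node-ip):
# k3s server --node-ip 2001:db8::2
```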
Interesting. The env I tested above was a pure IPv6 env; now I'm able to reproduce this in an environment where the node has both IPv4 and IPv6 dual-stack IPs on the interface.
I might have triggered some weird issue though, since my node only has an IPv4 IP in this environment while I'm trying to advertise an IPv6 address.
The frequency of your error logs still indicates that kube-proxy is causing the conflict.
Other than the suggestion I have here #784 (comment), you can try the check below.
This way we can rule out whether kube-proxy is causing the issue.
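A sketch of what such a check could look like, assuming ipvsadm is installed on the node; the port and filter values are placeholders:

```sh
# Watch the IPVS table while kube-vip runs; if the virtual service for the
# VIP keeps disappearing and reappearing, something else (typically
# kube-proxy's IPVS cleanup) is deleting it.
watch -n1 'sudo ipvsadm -Ln'

# Or dump it once and look for the control-plane VIP entry on port 6443:
sudo ipvsadm -Ln | grep -A3 6443
```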
internal-ip looks correct in the VIP holder:
InternalIP is 10.9.2.241, because I set:
The IPv6 prefix in ipvs-exclude-cidrs is correct, but I'll try to manually change the IPVS rules as you described to see if it changes anything.
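For reference, a hedged sketch of how that exclude list is usually passed; the flags are real kube-proxy/k3s options, but the CIDRs here are placeholders:

```sh
# CIDRs listed in --ipvs-exclude-cidrs are skipped by kube-proxy's IPVS
# cleanup, so a kube-vip-managed VIP inside them should be left alone.
kube-proxy --proxy-mode=ipvs \
  --ipvs-exclude-cidrs=10.9.2.0/24,2001:db8::/64

# k3s form, forwarding the same setting to the embedded kube-proxy:
# k3s server --kube-proxy-arg=ipvs-exclude-cidrs=10.9.2.0/24,2001:db8::/64
```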
I did:
Could you add the rule manually and check again? Also, I thought kube-proxy created other ffff rules before; for example, are those not found anymore?
Here's the full output, so you have all the information:
Those are still there; I just grepped on ffff, so they were filtered out of the output.
My bad, I forgot to add the rule.
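The rule itself isn't captured in this transcript; a hedged sketch of manually adding an IPVS virtual service for an IPv6 VIP with ipvsadm, with placeholder addresses:

```sh
# Add a virtual service for the (placeholder) IPv6 VIP on the API server
# port with the wlc scheduler, then add one backend in masquerade mode:
sudo ipvsadm -A -t [2001:db8::10]:6443 -s wlc
sudo ipvsadm -a -t [2001:db8::10]:6443 -r [2001:db8::2]:6443 -m

# If something rewrites the entry, the scheduler shown by `ipvsadm -Ln`
# will change, which helps identify who owns the conflict.
```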
Hmm, how did that scheduler change from wlc to rr? I saw when you added it here it was only wlc. It shouldn't matter, though. If it's not modified by kube-proxy, then could you paste the kube-proxy log here? I want to see if there are any errors inside. I'll also try a dual-stack env with the node address set to an IPv6 address.
Hmm, I can't reproduce this with my dual-stack env where the node has both IPv4 and IPv6 addresses. Do you observe the issue if you just turn on kube-vip while turning off kube-proxy, or with kube-proxy in iptables mode?
I did not stop kube-vip on this test, hence why the scheduler was rr. I tried again with kube-proxy stopped, and the result is the same (the entry stays as-is and kube-proxy does not touch it).
k3s logs everything under its daemon, so it's a bit hard to extract the kube-proxy logs, but grepping for warnings or errors doesn't turn up anything useful or kube-proxy related.
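A hedged sketch of filtering kube-proxy lines out of the k3s journal, assuming a systemd install; the unit name may differ (e.g. k3s-agent):

```sh
# k3s runs kube-proxy in-process, so its log lines land in the k3s unit's
# journal; filter on proxy/IPVS-related keywords:
sudo journalctl -u k3s --no-pager | grep -iE 'kube-proxy|proxier|ipvs'
```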
Which part were you not able to reproduce? The service creation looping part?
Yeah, I don't see that in my env. But my kube-proxy is using iptables mode, and I'm using kubeadm to bootstrap.
Does kube-vip report errors even if you turn off kube-proxy?
I just tried disabling kube-proxy entirely, but the same issue still occurs. |
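For anyone reproducing this test on k3s, a hedged sketch of one way to run it; --disable-kube-proxy is a real k3s server flag, but the reporter's actual method isn't captured here:

```sh
# Start the k3s server without its embedded kube-proxy, then watch whether
# the kube-vip load-balancer entry still loops; if it does, kube-proxy is
# ruled out as the cause.
k3s server --disable-kube-proxy
```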
Describe the bug
When multiple instances of kube-vip (in my case, one for IPv4 and another one for IPv6) run to provide control-plane load balancing, the load balancer is removed and re-created in a loop.
To Reproduce
Run multiple instances of kube-vip with different names, different VIPs, and the same interface; an illustrative sketch is shown below.
values.yaml:
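The reporter's actual values.yaml isn't reproduced in this transcript. As an illustration only, the kube-vip CLI can generate two static-pod manifests along these lines; the interface, VIPs, and output paths are placeholders, and the --leaseName flag is an assumption inferred from kube-vip's vip_leasename environment variable:

```sh
# Instance 1: IPv4 VIP on eth0 (placeholder values throughout)
kube-vip manifest pod --interface eth0 --address 10.9.2.100 \
  --controlplane --arp --leaderElection > kube-vip-v4.yaml

# Instance 2: IPv6 VIP on the same interface; a distinct leader-election
# lease name is assumed so the two instances don't contend for one lease.
kube-vip manifest pod --interface eth0 --address 2001:db8::100 \
  --controlplane --arp --leaderElection --leaseName kube-vip-v6 > kube-vip-v6.yaml
```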
Logs:
Expected behavior
Load balancer should be stable.
Environment (please complete the following information):
Thanks