
Loadbalancer targets all nodes in the cluster, not just ones with targets #222

Open
jacksontj opened this issue Jun 5, 2019 · 7 comments

Comments

@jacksontj

The current behavior for load balancers is to add all nodes in the cluster as droplets of the LoadBalancer. This is pretty inefficient, as the service isn't necessarily running on every node. Through the Service's Endpoints in k8s we can determine which nodes actually have endpoints on them and set the droplets in the LB accordingly. This would avoid the unnecessary hop from droplets that don't run the service to ones that do.
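For illustration, the Endpoints object backing a Service already records which node each endpoint lives on, so the controller could derive the droplet list from it. A rough sketch (names, IPs, and ports are placeholders):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service                  # placeholder Service name
subsets:
  - addresses:
      - ip: 10.244.1.7
        nodeName: worker-pool-abc1  # only nodes listed here actually run the service
        targetRef:
          kind: Pod
          name: my-app-6d4f9c7b8-abcde
    ports:
      - port: 8080
        protocol: TCP
```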

@timoreimann
Collaborator

Hi @jacksontj. What you are requesting should already be possible by setting the upstream Service field externalTrafficPolicy to Local (at the price of possibly imbalanced workloads). An excellent blog post describing the different trade-offs is available here.
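For reference, a minimal LoadBalancer Service with that setting looks roughly like this (name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service              # placeholder
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # route external traffic only to nodes with local endpoints
  selector:
    app: my-app                 # placeholder
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```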

Would that serve your needs?

@jacksontj
Author

jacksontj commented Jun 5, 2019 via email

@timoreimann
Collaborator

timoreimann commented Jun 5, 2019

However, the nodes that do not host pods targeted by your Service should fail their health checks, and thus never be part of the LB's rotation or receive traffic.

The UX on that part could definitely be improved. Nevertheless, it should hopefully yield the result you're looking for. Let me know in case I missed something, though.
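To make that more concrete: with externalTrafficPolicy: Local, Kubernetes allocates a healthCheckNodePort on the Service, and kube-proxy only answers health checks on that port successfully on nodes that host a local endpoint, which is what takes the other nodes out of rotation. A sketch of the relevant Service spec fields (the port value is just an example; it is auto-allocated):

```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  healthCheckNodePort: 32156   # example value, auto-allocated from the NodePort range
```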

@jacksontj
Author

jacksontj commented Jun 5, 2019 via email

@timoreimann
Collaborator

@jacksontj to better understand, what's your exact concern about the status quo? Is it about usability or something different? AFAICS, the health checks should not be harmful or lead to actual traffic being sent the wrong way.

I'd be happy to discuss potential improvements if I understand the impact of the problem as it manifests for you.

@jacksontj
Author

To clarify, I'm not seeing traffic going to all nodes in the cluster (assuming the healthcheck is configured properly). The main thing I see is that my loadbalancer has N targets (all of the nodes in my k8s cluster), so as this cluster scales I'll likely end up with significantly more nodes in the target group that fail the HC than pass it. This would make it effectively impossible to set up any sort of alerting around healthy/unhealthy backends on the LB. Additionally (granted, a much smaller concern), for each LB some number of healthchecks will be sent to every node, which is not a great scaling mechanism.

If the ability to include all nodes in the target group is still desired, then I think we'd just want to add an annotation that makes the controller only add nodes with pods on them.
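Something along these lines is what I have in mind (the annotation name below is purely hypothetical, not something the controller supports today):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # placeholder
  annotations:
    # hypothetical annotation, for illustration only
    service.beta.kubernetes.io/do-loadbalancer-endpoint-nodes-only: "true"
spec:
  type: LoadBalancer
```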

@mman

mman commented Feb 26, 2020

I am new to this topic and got bitten by it today. I installed Ambassador according to the DO tutorial, which for performance reasons correctly specifies externalTrafficPolicy: Local so that traffic is routed only to nodes with Ambassador pods. This differs from the default DO Load Balancer behavior, externalTrafficPolicy: Cluster, where traffic gets routed to all nodes and from there hops to available pods.

Since the DO UI displays both load balancers as having all nodes in the cluster assigned, the one with externalTrafficPolicy: Local shows many failed nodes (expected if you understand the topic) and therefore suggests there is a problem to solve. I spent two hours trying to fix it before realizing that it is not a problem at all.

My proposal is to adjust the DO UI to list only the nodes that are actually part of the load balancer, perhaps indicating the traffic policy somewhere in the UI.

This would avoid displaying the LB as half broken in cases where it is working perfectly well.

thanks,
Martin
