
Using the same adapter for the Vertical Pod Autoscaler (VPA) #359

Open
andriyfomenko opened this issue Aug 6, 2021 · 1 comment

@andriyfomenko

Expected Behavior

The same kind of annotation-based "magic" should work for the VPA, in the same way it is exposed for the HPA today; see the sketch below.
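
For reference, this is roughly the kind of annotation-based HPA configuration I mean (a minimal sketch; the deployment name, metric name, Prometheus query, and target value are illustrative, and the annotation key follows the `metric-config.<metricType>.<metricName>.<collectorType>/<configKey>` pattern as I understand it from the adapter's README):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  annotations:
    # Illustrative: the adapter resolves this external metric by running the
    # Prometheus query named in the annotation key.
    metric-config.external.processed-events-per-second.prometheus/query: |
      scalar(sum(rate(events_processed_total{app="myapp"}[1m])))
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: processed-events-per-second
          selector:
            matchLabels:
              type: prometheus
        target:
          type: AverageValue
          averageValue: "10"
```

The expectation would be that the same style of annotations could drive the metrics used by a VPA object as well.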

Specifications

It's just an idea, no spec [yet]

@jonathanbeber
Contributor

Hello, @andriyfomenko, thanks for opening the discussion. However, I think the best place to discuss this is the VPA project. This project is just an implementation of a custom metrics API server.

I see this was recently part of a discussion in SIG Autoscaling, and we can follow this issue for that: kubernetes/autoscaler#4135

It might be worth explaining, in that issue, how you would make use of custom metrics to scale your workloads; that would be beneficial for the discussion that's starting. Mention what kinds of metrics you have in mind and how you would map them to vertical CPU and memory scaling.

jonathanbeber changed the title from "Using the same adapter for the Vertical Pod Autoscaler (VPA) ?" to "Using the same adapter for the Vertical Pod Autoscaler (VPA)" on Aug 9, 2021