Support container creation when resource quotas are hit #87

Open · Dutchy- opened this issue Apr 9, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@Dutchy-

Dutchy- commented Apr 9, 2024

Currently, when kubedock creates pods and you hit a resource quota, the request fails with this error:

{"message":"pods \"kubedock-a4a0dbae1bd2\" is forbidden: exceeded quota: tenant-quota, requested: limits.cpu=2, used: limits.cpu=31650m, limited: limits.cpu=32"}
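For reference, an error like this is produced by a namespace-scoped ResourceQuota along these lines (the name and the CPU limit are taken from the message above; everything else is illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
spec:
  hard:
    limits.cpu: "32"
```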

Context: we run kubedock as a sidecar container in Tekton to support Java Testcontainers.

Is it possible to support creating pods in this situation without having the request fail immediately?

Some solutions I considered:

  • Use a Deployment instead of a Pod to schedule the container
  • Use a retry mechanism

And maybe there are other options?

@joyrex2001
Owner

In earlier versions, kubedock used deployments and had the option to use jobs. Deployments were problematic when the orchestrated containers were actually one-off jobs (jobs did solve that).

A retry option might make sense and would probably increase the chance of successful orchestration, but tests might still fail because of time-outs in the tests themselves.

Something that might already work is setting lower requests and limits (via labels, global settings, or a global pod template).
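As a sketch of the pod-template option, assuming the template mechanism accepts a standard Pod spec whose resource settings are applied to the containers kubedock creates (the container name and values here are purely illustrative):

```yaml
# pod-template.yaml: illustrative defaults to keep created pods small
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: main
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi
```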

@joyrex2001 joyrex2001 added the enhancement New feature or request label Apr 11, 2024
@Dutchy-
Author

Dutchy- commented Apr 11, 2024

Yes, lowering requests and limits relieves the problem temporarily, and it's definitely what we're doing now as a workaround, but it does not solve the problem permanently: even with lower limits the pod can still hit the quota and be denied.
