
We need to always delete the kind cluster if we create a kind cluster with Prow jobs #14

Open
chizhg opened this issue Nov 4, 2020 · 1 comment

Comments


chizhg commented Nov 4, 2020

Currently, when we run test flows with kind, we run `kubetest2 kind --up --down`, which also deletes the kind cluster after the test flow finishes normally. But if the Prow job is interrupted in the middle, the cluster is not properly deleted and causes a memory leak, as described in kubernetes-sigs/kind#303 (comment).

We should add a trap command to make sure the cluster can still be cleaned up even when the Prow job is interrupted.
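
A minimal sketch of such a trap, assuming the default kind cluster name ("kind") and the kubetest2 invocation above:

```bash
#!/usr/bin/env bash
# Sketch of the proposed cleanup trap (assumes the default kind cluster
# name, "kind"). The trap runs on normal exit and on interruption, so the
# cluster is deleted even when `--down` never gets a chance to run.

set -o errexit -o nounset -o pipefail

cleanup() {
  # Ignore errors in case the cluster was already deleted by --down.
  kind delete cluster --name kind || true
}
trap cleanup EXIT SIGINT SIGTERM

# Normal test flow: bring the cluster up, run the tests, tear it down.
kubetest2 kind --up --down
```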

slinkydeveloper pushed a commit to slinkydeveloper/hack that referenced this issue on Nov 25, 2020 ("WIP: Apply proposal", "fix build").
BenTheElder commented

You also most likely need to set the prowjob timeout / grace period to something other than the default (15s?). bootstrap.py used to use 15m by default, so prow.k8s.io uses that as its default (it's configurable per Prow instance); otherwise your scripts don't get much time to clean up.
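
For reference, both knobs can be set per job through Prow's pod-utilities decoration config; the field names below are from `decoration_config`, and the values are only illustrative:

```yaml
# Illustrative Prow job snippet: give the test script enough wall-clock time
# and a generous grace period so the cleanup trap can delete the kind cluster
# before the pod is killed. Job name, image, and values are placeholders.
periodics:
- name: example-kind-e2e
  decorate: true
  decoration_config:
    timeout: 2h        # overall job timeout
    grace_period: 15m  # time the test process gets to clean up after interruption
  spec:
    containers:
    - image: gcr.io/example/kubekins-e2e:latest
      command:
      - runner.sh
      - ./test/e2e-kind.sh
```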

Unfortunately, nested Kubernetes brings quite a few problems; you probably want to look through #303 for anything else you may have missed (e.g. inotify).
