What happened?
kubectl get --raw /openapi/v3 takes > 30 seconds to respond. The behavior is inconsistent: sometimes the call is instantaneous, other times it takes far too long. This seems to be caused by the large number of CRDs on the cluster.
In turn, Terraform resources like helm_release end up timing out, causing errors like:
╷
│ Error: unable to build kubernetes object for pre-delete hook kyverno/templates/hooks/pre-delete.yaml: error validating "": error validating data: the server was unable to return a response in the time allotted, but may still be processing the request
│
│
╵
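For reference, the latency can be checked by timing the call repeatedly. A minimal sketch (the measure helper below is a name of my own, not part of kubectl or any tool mentioned here; it assumes GNU date and a kubectl on PATH pointed at the affected vcluster):

```shell
# measure: run a command N times and print each duration in milliseconds.
measure() {
  runs="${1:-5}"; shift
  i=1
  while [ "$i" -le "$runs" ]; do
    start=$(date +%s%N)          # GNU date: nanoseconds since epoch
    "$@" > /dev/null 2>&1        # run the command, discard its output
    end=$(date +%s%N)
    echo "run $i: $(( (end - start) / 1000000 )) ms"
    i=$((i + 1))
  done
}

measure 5 kubectl get --raw /openapi/v3
```

If the behavior described above is present, some runs should come back near-instantly while others take well over 30 seconds.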
What did you expect to happen?
This is a multipart issue; it could be resolved in part by fixing the Helm Terraform provider to allow larger timeouts.
I am creating this issue here because it seems to be at least partly specific to vcluster, as our EKS deployments are fine. (On the other hand, we aren't running k8s 1.27 on our EKS clusters, which means we are likely not leveraging /openapi/v3.)
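Since the slowdown seems tied to the number of CRDs, a quick way to gauge that count is a one-liner like the following (a sketch; count_crds is a name I'm introducing here, and the optional first argument exists only so the kubectl binary can be swapped out):

```shell
# count_crds: print how many CustomResourceDefinitions the cluster serves.
# The first argument optionally overrides the kubectl binary.
count_crds() {
  "${1:-kubectl}" get crds --no-headers 2>/dev/null | wc -l
}

count_crds
```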
How can we reproduce it (as minimally and precisely as possible)?
Install vcluster with lots of CRDs (monitoring stack, kyverno), size may matter, here are our CRDs:
run
kubectl get --raw /openapi/v3
a few times to see how long it takes to respond.
Anything else we need to know?
No response
Host cluster Kubernetes version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2
Host cluster Kubernetes distribution
vcluster version
vcluster version 0.19.4
Vcluster Kubernetes distribution (k3s (default), k8s, k0s)
k3s
OS and Arch
amd64