
Kubelet could not start up kubernetes apiserver and core components after docker upgrade to v25 #49

Open
krishnakc1 opened this issue Jan 25, 2024 · 1 comment
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@krishnakc1

Up to docker-ce v24.0.7, the Kubernetes image components are run successfully by kubelet. After upgrading docker-ce to v25.0, kubelet cannot start the main k8s components and fails with an error stating that the image ID and size are unknown:

```
kubelet[21094]: E0125 07:46:51.645034 21094 remote_image.go:94] ImageStatus failed: Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
kubelet[21094]: E0125 07:46:51.645064 21094 kuberuntime_image.go:85] ImageStatus for image {"k8s.gcr.io/kube-proxy:v1.17.12"} failed: Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
E0125 07:46:51.645109 21094 kuberuntime_manager.go:809] container start failed: ImageInspectError: Failed to inspect image "k8s.gcr.io/kube-proxy:v1.17.12": Id or size of image "k8s.gcr.io/kube-proxy:v1.17.12" is not set
Error syncing pod 3ed55839-d24d-482a-a2ea-5fa52af9a07a ("kube-proxy-r6kxl_kube-system(3ed55839-d24d-482a-a2ea-5fa52af9a07a)"), skipping: failed to "StartContainer" for "kube-proxy" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-proxy:v1.17.12\": Id or size of image \"k8s.gcr.io/kube-proxy:v1.17.12\" is not set"
```
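For context, the error text above comes from kubelet rejecting any image-inspect result whose ID or size is missing. The following is a minimal Python sketch of that kind of validation — illustrative only, not kubelet's actual Go dockershim code; the `Size`/`VirtualSize` field names are assumptions about the Docker inspect payload:

```python
def validate_inspect(image_ref: str, inspect: dict) -> None:
    """Reject an image-inspect result with no ID or size, mirroring (in
    spirit) the dockershim check behind
    'Id or size of image ... is not set'."""
    image_id = inspect.get("Id", "")
    # Depending on Docker version, size may be reported in "Size" or the
    # deprecated "VirtualSize" field (assumption for illustration).
    size = inspect.get("Size", 0) or inspect.get("VirtualSize", 0)
    if not image_id or not size:
        raise ValueError(f'Id or size of image "{image_ref}" is not set')

# A payload whose size field is zero triggers the same class of error:
try:
    validate_inspect("k8s.gcr.io/kube-proxy:v1.17.12",
                     {"Id": "sha256:abc", "Size": 0})
except ValueError as e:
    print(e)
```

The point is that a payload change on the Docker side (a field renamed, deprecated, or no longer populated) is enough to make this check fail even though the image itself is intact.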
Kubernetes version (older): 1.17, but I have seen this happen on the latest version of k8s (1.29) as well, and in k3s for some images.

The change that is causing this problem is moby/moby#45469
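One way to confirm this failure mode on an affected host is to capture `docker image inspect <image>` output and check the fields kubelet cares about. A small helper that does this on a saved payload — the sample JSON below is hypothetical, not captured from a real daemon:

```python
import json

# Hypothetical `docker image inspect` output; on a real host you would
# capture it with:  docker image inspect <image>
sample = json.dumps([{
    "Id": "sha256:0123456789abcdef",
    "RepoTags": ["k8s.gcr.io/kube-proxy:v1.17.12"],
    "Size": 0,  # a zero or absent size is what trips the kubelet check
}])

def missing_fields(inspect_json: str) -> list[str]:
    """Return the names of the ID/size fields that are empty or zero in
    the first inspect entry."""
    entry = json.loads(inspect_json)[0]
    missing = []
    if not entry.get("Id"):
        missing.append("Id")
    if not (entry.get("Size") or entry.get("VirtualSize")):
        missing.append("Size")
    return missing

print(missing_fields(sample))  # -> ['Size']
```

If the real inspect output on an affected host also reports a missing or zero size, that points at the linked moby change rather than at the image itself.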

```
Client: Docker Engine - Community
 Version:    25.0.1

Server: Docker Engine - Community
 Engine:
  Version:   25.0.1
```

Kubelet version: 1.17.12 (old, I know, but some report the problem on newer versions as well)

Downgrading Docker to 24.0.7 works, but that is not an option.

For various reasons I cannot upgrade Kubernetes to a later version, but if there is a way to remediate this problem on the latest Docker, that would help.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2024