
Allow to reuse podman images of the podman machine when using podman driver #17415

Open
benoitf opened this issue Oct 13, 2023 · 17 comments
Labels
co/podman-driver (podman driver issues), co/runtime/crio (CRIO related issues), kind/feature (Categorizes issue or PR as related to a new feature), triage/discuss (Items for discussion)

Comments

@benoitf

benoitf commented Oct 13, 2023

What Happened?

Minikube allows running a Kubernetes cluster on top of a container engine like Docker or Podman.

We can also specify a container runtime with minikube, like docker, containerd, or cri-o.

In the case of the cri-o runtime, it's nice that podman is also available inside the container.


So with minikube podman-env there is a way to use the podman engine inside the minikube container and then share images with the cri-o engine, since they share the /var/lib/containers path.
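For reference, a rough sketch of the existing podman-env flow from the pushing handbook linked below (whether the binary is podman --remote or podman-remote depends on the install):

# point the local podman client at the podman service inside the minikube container
eval $(minikube podman-env)

# build straight into the in-cluster storage, so cri-o sees the image without a push
podman --remote build -t my-image .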

Minikube explains different ways of pushing images to the cluster
https://minikube.sigs.k8s.io/docs/handbook/pushing/

I think there is room for another method. As a podman user, I would like to continue building with the default podman running inside the podman machine and, as the cherry on the cake, see the images I'm building in cri-o. So it's close to the podman-env method, but I shouldn't need to run podman inside podman, just use the podman of the podman machine/host.

So if we could bind the podman host's /var/lib/containers into the minikube container, and have cri-o in that container use this mounted folder, then as a user I could run

podman build -t my-image ., reference this image in a k8s yaml, and with kubectl apply -f mypod.yaml quickly spin up a pod.
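For illustration, a minimal mypod.yaml could look like the sketch below (the localhost/my-image name and the imagePullPolicy are assumptions; the point is that cri-o resolves the image from the shared storage instead of pulling it):

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: my-container
    image: localhost/my-image      # image built on the podman machine host
    imagePullPolicy: Never         # never pull; use the image already in the shared storage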


It requires adding to crio.conf the path of the folder mounted from the podman machine host, rather than defaulting to /var/lib.

so, to sum up:

start minikube with an extra mount path

something like

minikube start --driver=podman --container-runtime=cri-o --mount --mount-string=/var/lib/containers:/host-containers

and then add a way in the kicbase image to apply the extra crio configuration

[crio]

# Path to the "root directory". CRI-O stores all of its data, including
# containers images, in this directory.
# root = "/var/lib/containers/storage"
root = "/host-containers/storage"

Maybe the mount folder could be the rootless host path (so instead of mounting /var/lib/containers, mounting /var/home/core/.local/share/containers).
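For the rootless case that would look something like:

minikube start --driver=podman --container-runtime=cri-o --mount --mount-string=/var/home/core/.local/share/containers:/host-containers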

How do you see adding a toggle for changing/updating the crio configuration to use the underlying images folder?

On a side issue, could the cri-o runtime and the podman runtime be updated as well inside the kicbase image?

Attach the log file

N/A

Operating System

macOS (Default)

Driver

Podman

@benoitf benoitf changed the title from "Allow to reuse podman images of the host when using podman driver" to "Allow to reuse podman images of the podman machine when using podman driver" on Oct 13, 2023
@afbjorklund
Collaborator

afbjorklund commented Oct 15, 2023

On a side issue, could the cri-o runtime and the podman runtime be updated as well inside the kicbase image?

Historically, cri-o was kept within one version of Kubernetes, but the release schedule has changed.

Now it is released together with Kubernetes, so it should probably be updated even more often...

But this is outdated:

CRIO_VERSION="1.24"

The podman version could be updated; it was just using the available Kubic packages (and didn't need a newer one).

@afbjorklund afbjorklund added the co/podman-driver, co/runtime/crio, and kind/feature labels on Oct 15, 2023
@afbjorklund
Collaborator

I noticed that the lock files are in the runroot, so we should probably make both of them available?

# Path to the "root directory". CRI-O stores all of its data, including
# containers images, in this directory.
# root = "/var/lib/containers/storage"

# Path to the "run directory". CRI-O stores all of its state in this directory.
# runroot = "/run/containers/storage"

Something like:

/host-containers/var/storage
/host-containers/run/storage
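If both are mounted under /host-containers as suggested, the matching crio.conf overrides would presumably be:

[crio]
root = "/host-containers/var/storage"
runroot = "/host-containers/run/storage"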

@afbjorklund
Collaborator

afbjorklund commented Oct 15, 2023

How do you see adding a toggle for changing/updating the crio configuration to use the underlying images folder?

Some of the other runtime values are configurable, such as the image repository. Maybe root/runroot could be, too?

   --root value, -r value                                     The CRI-O root directory. (default: "/var/lib/containers/storage") [$CONTAINER_ROOT]
   --runroot value                                            The CRI-O state directory. (default: "/run/containers/storage") [$CONTAINER_RUNROOT]
      --data-root string                        Root directory of persistent Docker state (default "/var/lib/docker")
      --exec-root string                        Root directory for execution state files (default "/var/run/docker")
   --root value                 containerd root directory
   --state value                containerd state directory

Something rhyming with "root" and "state"
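As a sketch only (these are hypothetical flag names, not existing minikube options), the minikube side could then look something like:

minikube start --driver=podman --container-runtime=cri-o \
  --crio-root=/host-containers/var/storage \
  --crio-state=/host-containers/run/storage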


Note: for this feature to work with Docker, it requires a version that uses containerd for the image storage:

https://docs.docker.com/storage/containerd/

It separates the images by "namespace", but they would be sharing layers and so on.
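On the Docker side, the containerd image store is enabled via a feature flag in /etc/docker/daemon.json (per the linked page; still experimental):

{
  "features": {
    "containerd-snapshotter": true
  }
}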

@benoitf
Author

benoitf commented Oct 16, 2023

@afbjorklund yes, if we can have --root, --runroot, or --data-root, it would help to set up these things.

About cri-o's version, do you see that as a separate enhancement, or do we need to link all of them under the same umbrella?

@afbjorklund
Collaborator

I think the versions are a separate issue (or issues); this is more about being able to share the storage with the engine.

Especially since we might need to decouple the CR installation from the OS installation, in the future:

Previously you would just install something like "Docker", and it would work with all the Kubernetes versions.

But now you need a specific version of the CRI and sometimes even a special version of the runtime (CRI-O) for each Kubernetes version.

@afbjorklund
Collaborator

@benoitf : added a discussion topic for the "Podman Community Cabal" tomorrow, to get some input from the team

https://podman.io/community

@benoitf
Author

benoitf commented Oct 18, 2023

@afbjorklund ok thanks for the topic discussion 👍

On my side unfortunately I'll be in another call at that time. I'll watch the recording.

@afbjorklund
Collaborator

afbjorklund commented Oct 19, 2023

I was trying to start minikube in a podman machine, and it didn't work at all. It seemed to have issues both with the default "netavark" network backend and with the original "cni" network backend. And minikube doesn't support it on Linux.

The first issue with podman-remote-static v4.7.0 was that the machine defaulted to 1 vCPU and rootless, so I changed it to 2 CPUs and root.

⛔ Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 2 is greater than the available cpus of 1

Then minikube wouldn't recognize the response of the network query, so it wasn't able to determine the IP of the node.

❌ Exiting due to GUEST_PROVISION: error provisioning guest: ExcludeIP() requires IP or CIDR as a parameter


[core@localhost ~]$ minikube start --driver=podman --container-runtime=cri-o --mount --mount-string=/run/containers/storage:/run/containers/storage --mount-string=/var/lib/containers/storage:/var/lib/containers/storage --preload=false

Still no luck, even with CNI:

[core@localhost ~]$ sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube

[core@localhost ~]$ sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
,

Setting --static-ip gives an error:

plugin type="bridge" failed (add): cni plugin bridge failed: failed to allocate for range 0: requested IP address 192.168.49.2 is not available in range set 192.168.49.1-192.168.49.254

So in the end, it is a failure.

❌ Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125


It started up OK with 3.4.2, which still used cgroups v1.

EDIT: Recreated the VM, and everything seems fine. Maybe it was the new Fedora CoreOS, maybe it was a fluke.

Nope, it is actually the /var/lib/containers/storage volume mount that destroys the network, go figure.

@benoitf
Author

benoitf commented Nov 6, 2023

@afbjorklund If I use --mount and --mount-string, I'm not able to start minikube in podman rootless mode (but it's OK in rootful mode):

minikube start --driver=podman --container-runtime=cri-o --base-image=quay.io/fbenoit/kicbase:2023-10-12 --mount --mount-string "/var/lib/containers:/host-containers"

❌  Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125

Is it a known issue?

@afbjorklund
Collaborator

afbjorklund commented Nov 6, 2023

There are several known issues, with regards to running minikube on top of rootless podman:

https://github.com/kubernetes/minikube/issues?q=is%3Aopen+is%3Aissue+label%3Aco%2Fpodman-driver

But I am not sure if volume create is normally the one that fails, so we need the log to know why:

podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
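If it helps, the full output can be captured with minikube's log command, e.g.:

minikube logs --file=logs.txt

Re-running minikube start with --alsologtostderr also shows the failing podman command's output inline.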

@benoitf
Author

benoitf commented Nov 6, 2023

logs.txt

@afbjorklund afbjorklund added the triage/discuss label on Nov 19, 2023
@afbjorklund
Collaborator

The new Docker version has an option to use containerd for storage, which means it could also share images...

https://docs.docker.com/storage/containerd/

The containerd image store is an experimental feature of Docker Engine

We should discuss making this deployment "form factor" (shared storage) into a standard feature of minikube.

@afbjorklund
Collaborator

afbjorklund commented Nov 19, 2023

Legacy Docker

[diagram: legacy_docker]

Also including other hypervisors (instead of VirtualBox) and other runtimes (instead of Docker), with the same setup.

Docker / Podman

[diagram: minikube_storage]

Including both Docker Engine / Podman Engine (on Linux) and Docker Desktop / Podman Desktop (external VM).

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label on Feb 17, 2024
@benoitf
Author

benoitf commented Feb 17, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label on Feb 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label on May 17, 2024
@benoitf
Author

benoitf commented May 17, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label on May 17, 2024