podman build: rootless build takes up a lot of disk space #1570

Closed
eero-t opened this issue Apr 14, 2023 · 11 comments · Fixed by #1571


eero-t commented Apr 14, 2023

Issue Description

With default options, a rootless podman build consumes all available disk space, using roughly 100x more space than the equivalent docker build.

Upstream release notes for newer Podman versions do not mention a fix for this issue, so I'm assuming the bug is still present.

Steps to reproduce the issue

  1. Restore default state: sudo rm -rf ~/.config/containers/ ~/.local/share/containers/
  2. Build large container: podman build --format=docker --rm -t <tag> -f Dockerfile .

Describe the results you received

Podman uses hundreds of GB to build a container image of a few GB, even after the build has finished, because it defaults to the "vfs" driver although "fuse-overlayfs" (1.7.1) is installed.
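For reference (not part of the original report): the consumption can be checked directly against the rootless graph root shown in the podman info output below, e.g.:

$ du -sh ~/.local/share/containers/storage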

Describe the results you expected

Podman should have sane disk usage by defaulting to the "overlay" storage driver when "fuse-overlayfs" is present: containers/podman#1726

podman info output

$ podman info
host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.0.25+ds1-1.1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: unknown'
  cpuUtilization:
    idlePercent: 99.59
    systemPercent: 0.06
    userPercent: 0.34
  cpus: 64
  distribution:
    codename: jammy
    distribution: ubuntu
    version: "22.04"
  eventLogger: journald
  hostname: texel
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.15.0-60-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 36563644416
  memTotal: 134931664896
  networkBackend: netavark
  ociRuntime:
    name: runc
    package: runc_1.1.4-0ubuntu1~22.04.1_amd64
    path: /usr/sbin/runc
    version: |-
      runc version 1.1.4-0ubuntu1~22.04.1
      spec: 1.0.2-dev
      go: go1.18.1
      libseccomp: 2.5.3
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.0.1-2_amd64
    version: |-
      slirp4netns version 1.0.1
      commit: 6a7b16babc95b6a3056b33fb45b74a6f62262dd4
      libslirp: 4.6.1
  swapFree: 0
  swapTotal: 0
  uptime: 1240h 19m 46.00s (Approximately 51.67 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/user/.local/share/containers/storage
  graphRootAllocated: 786455846912
  graphRootUsed: 315135950848
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 48
  runRoot: /run/user/1001/containers
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan  1 02:00:00 1970
  GitCommit: ""
  GoVersion: go1.20.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

This is the Ubuntu 23.04 "podman" package installed on an Ubuntu 22.04 system.

Additional information

This was supposed to have been fixed before the Podman v1.0 release (containers/podman#1726), so I guess it's a regression?

A workaround for the issue was found here: containers/buildah#1040

$ sudo rm -r ~user/.local/share/containers/

$ mkdir -p ~user/.config/containers

$ cat > ~user/.config/containers/storage.conf
[storage]
driver = "overlay"
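To verify the workaround took effect (a hedged check, not part of the original workaround, assuming podman's Go-template support in "podman info"):

$ podman info --format '{{.Store.GraphDriverName}}'
overlay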
Member

Luap99 commented Apr 14, 2023

Can you test this with podman 4.4.4? Or better yet, podman v4.5-rc2 or the main branch.
It is possible that this is already fixed; I recall recent changes in storage that could affect it.

Author

eero-t commented Apr 14, 2023

I'd rather use distro packages, so that I can easily upgrade and downgrade the components and the dependencies are correct. Ubuntu 23.04 (to be released soon) is still at 4.3, as is Debian testing. Debian experimental seems to have 4.4, but I'd rather wait a few weeks for it to migrate to testing before trying that.

Btw, what changes were there to storage, and in which version?

The only comment referring to "storage" in the release notes after v3.4.1 is this one for v4.5.0-rc2:

Updated the containers/storage library to v1.46.1

Author

eero-t commented Apr 14, 2023

Note that it's very easy to test this bug.

Just make sure fuse-overlayfs is present:

$ dpkg -l fuse-overlayfs | grep ^ii
ii  fuse-overlayfs 1.7.1-1      amd64        implementation of overlay+shiftfs in FUSE for rootless containers
$ which fuse-overlayfs
/usr/bin/fuse-overlayfs

And that storage is not specified in your global config:

$ sudo grep -i -R storage /etc/containers/

And remove your local storage config:

$ mv .config/containers/storage.conf .config/containers/storage.bak

Then just do:

$ podman info

And if it says the following (like my podman 4.3.1 does):

ERRO[0000] User-selected graph driver "vfs" overwritten by graph driver "overlay" from database - delete libpod local files to resolve.  May prevent use of images created by other tools

then it's defaulting to "vfs" storage, i.e. the bug is present.
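As an aside (not part of the original comment): the "delete libpod local files" hinted at in that error message can be done with podman's built-in reset, which wipes all containers, images and the rootless storage directory for the current user:

$ podman system reset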

Member

Luap99 commented Apr 14, 2023

It seems like most distros set the default driver to overlay in /usr/share/containers/storage.conf.
Looks like #1460 at least allows some configuration; not sure why vfs is still the default when it was not set explicitly by the distro.
@rhatdan @giuseppe PTAL

Author

eero-t commented Apr 14, 2023

What are "most distros"?

Ubuntu (22.04) does not install any "storage.conf":

$ dpkg -S storage.conf
dpkg-query: no path found matching pattern *storage.conf*

nor does it have a "storage" section in the configs under /etc/containers/.
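For context (a rough sketch of the usual containers/storage lookup order, not something verified in this thread): rootless podman reads the first storage.conf it finds, roughly in this order, so when none of them exists the built-in default applies:

$ ls -l "${XDG_CONFIG_HOME:-$HOME/.config}/containers/storage.conf" \
    /etc/containers/storage.conf \
    /usr/share/containers/storage.conf 2>/dev/null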

@giuseppe
Member

I don't have an Ubuntu installation to try it out quickly; I'd need to set up a VM first. Can you please try the latest version of Podman and confirm the issue still exists there?

Member

Luap99 commented Apr 14, 2023

@giuseppe I tried on main: if you remove all storage.conf files, it will pick vfs.

Author

eero-t commented Apr 14, 2023

IMHO a bug causing disks to run out of space is pretty critical, especially now that Podman is getting wider usage (e.g. Ubuntu added it only in the latest LTS). This bug can make a single build easily fill a previously near-empty TB-sized disk.

While the fix is fairly trivial (reset podman and add a two-line config), finding it can take time.

@giuseppe
Member

@Luap99 thanks for checking that!

@umohnani8 would you like to take a look at this one?

@umohnani8
Member

@giuseppe I am happy to take a look at it, but I have limited availability the next two weeks. If this is time critical, I think it would be better for you to look into it instead.

giuseppe transferred this issue from containers/podman Apr 14, 2023
giuseppe added a commit to giuseppe/storage that referenced this issue Apr 14, 2023
if there are no configuration files present, attempt to use overlay
for rootless if fuse-overlayfs is installed or if the kernel is >= 5.13.

Closes: containers#1570

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
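(Illustrative only: a rough shell rendering of the selection logic described in the commit message above; the actual change lives in the containers/storage Go code and may differ in detail.)

# pick "overlay" for rootless only when no storage.conf exists and either
# fuse-overlayfs is available or the kernel is new enough for native rootless overlay
driver=vfs
if [ ! -e /etc/containers/storage.conf ] && \
   [ ! -e "${XDG_CONFIG_HOME:-$HOME/.config}/containers/storage.conf" ]; then
    kver=$(uname -r | cut -d- -f1)
    if command -v fuse-overlayfs >/dev/null 2>&1 || \
       [ "$(printf '5.13\n%s\n' "$kver" | sort -V | head -n1)" = "5.13" ]; then
        driver=overlay
    fi
fi
echo "selected storage driver: $driver"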
@giuseppe
Member

sure, opened a PR here: #1571
