
OCI runtime error: crun: writing file /sys/fs/cgroup/cgroup.subtree_control: Invalid argument #1322

Open
henrywang opened this issue Oct 10, 2023 · 5 comments

@henrywang commented Oct 10, 2023

Issue Description

Running a container fails on both Fedora 37 and Fedora 38 with the same error: Error: OCI runtime error: crun: writing file '/sys/fs/cgroup/cgroup.subtree_control': Invalid argument

Fedora 37 test log: https://github.com/virt-s1/rhel-edge/actions/runs/6463309998/job/17546300072#step:4:5427
Fedora 38 test log: https://github.com/virt-s1/rhel-edge/actions/runs/6463310904/job/17546300008#step:4:5577
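For orientation, here is a minimal diagnostic sketch (not part of the original test run) that inspects the cgroup v2 state crun writes to; the machine.slice path is taken from the "Setting Cgroups" line in the debug log below, and the standard /sys/fs/cgroup mount point is assumed:

# Hedged diagnostic sketch; assumes the standard cgroup v2 mount at /sys/fs/cgroup.
cat /sys/fs/cgroup/cgroup.controllers          # controllers available at the root
cat /sys/fs/cgroup/cgroup.subtree_control      # controllers delegated to child cgroups
# The container is placed under machine.slice (see the "Setting Cgroups" log line);
# the slice may only exist once a container or VM has been started:
cat /sys/fs/cgroup/machine.slice/cgroup.controllers
cat /sys/fs/cgroup/machine.slice/cgroup.type   # "domain" vs "threaded" affects which controllers can be enabled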

The debug info:

sudo podman run --log-level debug -d --name rhel-edge --network edge --ip 192.168.200.1 docker://registry.hub.docker.com/***/rhel-edge:8qi7
time="2023-10-10T01:09:09Z" level=info msg="podman filtering at log level debug"
time="2023-10-10T01:09:09Z" level=debug msg="Called run.PersistentPreRunE(podman run --log-level debug -d --name rhel-edge --network edge --ip 192.168.200.1 docker://registry.hub.docker.com/***/rhel-edge:8qi7)"
time="2023-10-10T01:09:09Z" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2023-10-10T01:09:09Z" level=debug msg="Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db"
time="2023-10-10T01:09:09Z" level=debug msg="Using graph driver overlay"
time="2023-10-10T01:09:09Z" level=debug msg="Using graph root /var/lib/containers/storage"
time="2023-10-10T01:09:09Z" level=debug msg="Using run root /run/containers/storage"
time="2023-10-10T01:09:09Z" level=debug msg="Using static dir /var/lib/containers/storage/libpod"
time="2023-10-10T01:09:09Z" level=debug msg="Using tmp dir /run/libpod"
time="2023-10-10T01:09:09Z" level=debug msg="Using volume path /var/lib/containers/storage/volumes"
time="2023-10-10T01:09:09Z" level=debug msg="Using transient store: false"
time="2023-10-10T01:09:09Z" level=debug msg="[graphdriver] trying provided driver \"overlay\""
time="2023-10-10T01:09:09Z" level=debug msg="Cached value indicated that overlay is supported"
time="2023-10-10T01:09:09Z" level=debug msg="Cached value indicated that overlay is supported"
time="2023-10-10T01:09:09Z" level=debug msg="Cached value indicated that metacopy is being used"
time="2023-10-10T01:09:09Z" level=debug msg="Cached value indicated that native-diff is not being used"
time="2023-10-10T01:09:09Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
time="2023-10-10T01:09:09Z" level=debug msg="backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true"
time="2023-10-10T01:09:09Z" level=debug msg="Initializing event backend journald"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2023-10-10T01:09:09Z" level=debug msg="Using OCI runtime \"/usr/bin/crun\""
time="2023-10-10T01:09:09Z" level=info msg="Setting parallel job count to 7"
time="2023-10-10T01:09:09Z" level=debug msg="Successfully loaded network edge: &{edge 4c089dae410af4ccc178770c4eaac958f1762252f4fbe1f8764b9937629f7dac bridge podman1 2023-10-10 00:58:10.216328359 +0000 UTC [{{{192.168.200.0 ffffff00}} 192.168.200.254 <nil>}] [] false false true [] map[] map[] map[driver:host-local]}"
time="2023-10-10T01:09:09Z" level=debug msg="Successfully loaded 2 networks"
time="2023-10-10T01:09:09Z" level=debug msg="Pulling image docker://registry.hub.docker.com/***/rhel-edge:8qi7 (policy: missing)"
time="2023-10-10T01:09:09Z" level=debug msg="Looking up image \"registry.hub.docker.com/***/rhel-edge:8qi7\" in local containers storage"
time="2023-10-10T01:09:09Z" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
time="2023-10-10T01:09:09Z" level=debug msg="Trying \"registry.hub.docker.com/***/rhel-edge:8qi7\" ..."
time="2023-10-10T01:09:09Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Found image \"registry.hub.docker.com/***/rhel-edge:8qi7\" as \"registry.hub.docker.com/***/rhel-edge:8qi7\" in local containers storage"
time="2023-10-10T01:09:09Z" level=debug msg="Found image \"registry.hub.docker.com/***/rhel-edge:8qi7\" as \"registry.hub.docker.com/***/rhel-edge:8qi7\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba)"
time="2023-10-10T01:09:09Z" level=debug msg="exporting opaque data as blob \"sha256:f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Looking up image \"registry.hub.docker.com/***/rhel-edge:8qi7\" in local containers storage"
time="2023-10-10T01:09:09Z" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
time="2023-10-10T01:09:09Z" level=debug msg="Trying \"registry.hub.docker.com/***/rhel-edge:8qi7\" ..."
time="2023-10-10T01:09:09Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Found image \"registry.hub.docker.com/***/rhel-edge:8qi7\" as \"registry.hub.docker.com/***/rhel-edge:8qi7\" in local containers storage"
time="2023-10-10T01:09:09Z" level=debug msg="Found image \"registry.hub.docker.com/***/rhel-edge:8qi7\" as \"registry.hub.docker.com/***/rhel-edge:8qi7\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba)"
time="2023-10-10T01:09:09Z" level=debug msg="exporting opaque data as blob \"sha256:f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Looking up image \"f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\" in local containers storage"
time="2023-10-10T01:09:09Z" level=debug msg="Trying \"f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\" ..."
time="2023-10-10T01:09:09Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Found image \"f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\" as \"f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\" in local containers storage"
time="2023-10-10T01:09:09Z" level=debug msg="Found image \"f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\" as \"f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba)"
time="2023-10-10T01:09:09Z" level=debug msg="Inspecting image f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba"
time="2023-10-10T01:09:09Z" level=debug msg="exporting opaque data as blob \"sha256:f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="exporting opaque data as blob \"sha256:f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Inspecting image f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba"
time="2023-10-10T01:09:09Z" level=debug msg="Inspecting image f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba"
time="2023-10-10T01:09:09Z" level=debug msg="Inspecting image f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba"
time="2023-10-10T01:09:09Z" level=debug msg="using systemd mode: false"
time="2023-10-10T01:09:09Z" level=debug msg="setting container name rhel-edge"
time="2023-10-10T01:09:09Z" level=debug msg="No hostname set; container's hostname will default to runtime default"
time="2023-10-10T01:09:09Z" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
time="2023-10-10T01:09:09Z" level=debug msg="Allocated lock 0 for container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd"
time="2023-10-10T01:09:09Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="exporting opaque data as blob \"sha256:f2ed9840375470c7b08b6e94140a0357474ace2cb2e20b589f6b671d3620aaba\""
time="2023-10-10T01:09:09Z" level=debug msg="Cached value indicated that idmapped mounts for overlay are supported"
time="2023-10-10T01:09:09Z" level=debug msg="Created container \"296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd\""
time="2023-10-10T01:09:09Z" level=debug msg="Container \"296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd\" has work directory \"/var/lib/containers/storage/overlay-containers/296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd/userdata\""
time="2023-10-10T01:09:09Z" level=debug msg="Container \"296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd\" has run directory \"/run/containers/storage/overlay-containers/296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd/userdata\""
time="2023-10-10T01:09:09Z" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/UJAIZBSZTFNFINOS5BZ75MCXN6,upperdir=/var/lib/containers/storage/overlay/db1ff570f5be0c840aeb1a7df06b33e65a715b4a9f53d059ff1829e22a4284a3/diff,workdir=/var/lib/containers/storage/overlay/db1ff570f5be0c840aeb1a7df06b33e65a715b4a9f53d059ff1829e22a4284a3/work,nodev,metacopy=on,context=\"system_u:object_r:container_file_t:s0:c793,c968\""
time="2023-10-10T01:09:09Z" level=debug msg="Mounted container \"296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd\" at \"/var/lib/containers/storage/overlay/db1ff570f5be0c840aeb1a7df06b33e65a715b4a9f53d059ff1829e22a4284a3/merged\""
time="2023-10-10T01:09:09Z" level=debug msg="Made network namespace at /run/netns/netns-7391994e-99f9-9946-9317-e5fd8c07a5a1 for container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd"
time="2023-10-10T01:09:09Z" level=debug msg="Created root filesystem for container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd at /var/lib/containers/storage/overlay/db1ff570f5be0c840aeb1a7df06b33e65a715b4a9f53d059ff1829e22a4284a3/merged"
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network edge
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [192.168.200.1/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [192.168.200.254/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 192.168.200.254, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-6C8D437DC1276 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 192.168.200.0/24 -j ACCEPT created on table nat and chain NETAVARK-6C8D437DC1276
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-6C8D437DC1276
[DEBUG netavark::firewall::varktables::helpers] rule -s 192.168.200.0/24 -j NETAVARK-6C8D437DC1276 created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -p udp -s 192.168.200.0/24 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -m conntrack --ctstate INVALID -j DROP created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 192.168.200.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 192.168.200.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::iptables] Adding firewalld rules for network 192.168.200.0/24
[DEBUG netavark::firewall::firewalld] Adding subnet 192.168.200.0/24 to zone trusted as source
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
[DEBUG netavark::dns::aardvark] Spawning aardvark server
[DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "/usr/libexec/podman/aardvark-dns", "--config", "/run/containers/networks/aardvark-dns", "-p", "53", "run"]
[DEBUG netavark::commands::setup] {
        "edge": StatusBlock {
            dns_search_domains: Some(
                [
                    "dns.podman",
                ],
            ),
            dns_server_ips: Some(
                [
                    192.168.200.254,
                ],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "f6:14:ce:f8:d8:9f",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        192.168.200.254,
                                    ),
                                    ipnet: 192.168.200.1/24,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
time="2023-10-10T01:09:09Z" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
time="2023-10-10T01:09:09Z" level=debug msg="Setting Cgroups for container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd to machine.slice:libpod:296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd"
time="2023-10-10T01:09:09Z" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2023-10-10T01:09:09Z" level=debug msg="Workdir \"/\" resolved to host path \"/var/lib/containers/storage/overlay/db1ff570f5be0c840aeb1a7df06b33e65a715b4a9f53d059ff1829e22a4284a3/merged\""
time="2023-10-10T01:09:09Z" level=debug msg="Created OCI spec for container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd at /var/lib/containers/storage/overlay-containers/296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd/userdata/config.json"
time="2023-10-10T01:09:09Z" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
time="2023-10-10T01:09:09Z" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd -u 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd/userdata -p /run/containers/storage/overlay-containers/296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd/userdata/pidfile -n rhel-edge --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd]"
time="2023-10-10T01:09:09Z" level=info msg="Running conmon under slice machine.slice and unitName libpod-conmon-296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd.scope"
time="2023-10-10T01:09:10Z" level=debug msg="Received: -1"
time="2023-10-10T01:09:10Z" level=debug msg="Cleaning up container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd"
time="2023-10-10T01:09:10Z" level=debug msg="Tearing down network namespace at /run/netns/netns-7391994e-99f9-9946-9317-e5fd8c07a5a1 for container 296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd"
[DEBUG netavark::commands::teardown] "Tearing down.."
[INFO  netavark::firewall] Using iptables firewall driver
[INFO  netavark::network::bridge] removing bridge podman1
[DEBUG netavark::firewall::iptables] Removing firewalld rules for IPs 192.168.200.0/24
[DEBUG netavark::commands::teardown] "Teardown complete"
time="2023-10-10T01:09:13Z" level=debug msg="Unmounted container \"296c10cf2ce142c7d67816179d49379e5066fb39aba482d04afeaba7e047d5fd\""
time="2023-10-10T01:09:13Z" level=debug msg="ExitCode msg: \"crun: writing file `/sys/fs/cgroup/cgroup.subtree_control`: invalid argument: oci runtime error\""

Steps to reproduce the issue

  1. Deploy a VM from openstack
  2. git clone https://github.com/virt-s1/rhel-edge.git
  3. cd rhel-edge
  4. DOCKERHUB_USERNAME= DOCKERHUB_PASSWORD= DOWNLOAD_NODE= ./ostree-raw-image.sh

Describe the results you received

Running the container with podman failed.

Describe the results you expected

The container runs without error.

podman info output

sudo podman info
host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 49.25
    systemPercent: 9.09
    userPercent: 41.66
  cpus: 2
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: cloud
    version: "38"
  eventLogger: journald
  freeLocks: 2048
  hostname: runner-gcp-fedora-38-medium-7712.c.virt-qe.internal
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.2.9-300.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 4686262272
  memTotal: 8309764096
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.9.2-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.9.2
      commit: 35274d346d2e9ffeacb22cc11590b0266a23d634
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20230908.g05627dc-1.fc38.x86_64
    version: |
      pasta 0^20230908.g05627dc-1.fc38.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8308387840
  swapTotal: 8308912128
  uptime: 0h 15m 57.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 84738572288
  graphRootUsed: 5946957824
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.7.0
  Built: 1695839078
  BuiltTime: Wed Sep 27 18:24:38 2023
  GitCommit: ""
  GoVersion: go1.20.8
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.0

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

Additional information

aardvark-dns-1.8.0-1.fc38.x86_64
container-selinux-2:2.222.0-1.fc38.noarch
containers-common-4:1-89.fc38.noarch
containers-common-extra-4:1-89.fc38.noarch
crun-1.9.2-1.fc38.x86_64
podman-5:4.7.0-1.fc38.x86_64
podman-plugins-5:4.7.0-1.fc38.x86_64

@chuanchang
Contributor

Please refer to #704

@flouthoc
Collaborator

Moving this to crun; please report back here if the previously linked issue resolves this for you.

@flouthoc transferred this issue from containers/podman on Oct 10, 2023
henrywang added a commit to virt-s1/kite-action that referenced this issue Oct 11, 2023
henrywang added a commit to virt-s1/kite-action that referenced this issue Oct 11, 2023
@maxbrunet

I am facing this issue on GitHub-hosted runners. I run podman inside a Node.js process (a CLI tool wrapped in a GitHub Action), and when it was recently upgraded from Node v16 to v20, the container release builds started failing. My current workaround has been to downgrade that dependency (maxbrunet/prometheus-elasticache-sd#522). It would be nice to have a way forward: if it cannot be fixed in crun, maybe I could configure something in the Node runtime, but I do not know what to look for.
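As a starting point, a minimal sketch (an assumption about the runner environment, untested there) of how to check which cgroup the Node.js step process runs in and what its parent delegates, since writes to cgroup.subtree_control depend on that state:

# Hedged sketch; assumes a pure cgroup v2 host, which is an assumption about the runner.
cat /proc/self/cgroup                                   # e.g. "0::/some/path"
CGPATH="/sys/fs/cgroup$(cut -d: -f3- /proc/self/cgroup)"
cat "$CGPATH/cgroup.type"                               # domain vs threaded
cat "$(dirname "$CGPATH")/cgroup.subtree_control"       # controllers the parent delegates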

@flouthoc
Collaborator

@maxbrunet Could you share the logs here? I am not sure whether the issue you are facing is directly related to crun, but logs would help.

@maxbrunet

maxbrunet commented Nov 5, 2023

Hey @flouthoc, here is a minimal reproduction: https://github.com/maxbrunet/crun-node20-subtree-control

Please feel free to fork it and tinker with it if needed.

The logs: https://github.com/maxbrunet/crun-node20-subtree-control/actions/runs/6763428127/job/18380539732#step:7:95

Thank you in advance for your help.
