
Unable to run a process in the background #7640

Open
vedujoshi opened this issue Dec 28, 2023 · 12 comments
Assignees
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@vedujoshi

What happened?

Using kubectl exec or crictl exec on CRI-O based containers, I am trying to run a command in the background, but I am unable to do so.
Example:

 /tmp# date; crictl exec -i 4821 /bin/bash -c "nohup sleep 10 & " ; date
Thu Dec 28 09:40:11 UTC 2023
Thu Dec 28 09:40:21 UTC 2023
/tmp#

What did you expect to happen?

From the above command, as with docker exec, the nohup invocation should have returned immediately, but it does not. I get control back only after the sleep command has run for 10 seconds.
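A plausible explanation (my assumption; not verified against the CRI-O source) is that the backgrounded sleep inherits the exec session's stdout, and when that stdout is a pipe rather than a tty, the reader only sees EOF once every writer, sleep included, has exited. The blocking behaviour can be reproduced locally without any container runtime, using `| cat` as a stand-in for the exec stream reader:

```shell
#!/bin/sh
# "| cat" plays the role of the exec stream reader: it returns only
# when the pipe reaches EOF, i.e. when every writer has closed it.

start=$(date +%s)
sh -c 'nohup sleep 2 &' | cat                  # sleep inherits the pipe: blocks ~2s
blocked=$(( $(date +%s) - start ))

start=$(date +%s)
sh -c 'nohup sleep 2 >/dev/null 2>&1 &' | cat  # stdio redirected: returns at once
freed=$(( $(date +%s) - start ))

echo "blocked=${blocked}s freed=${freed}s"
```

If this is indeed the mechanism, the wait comes from ordinary fd-inheritance semantics rather than anything CRI-O-specific, although docker exec evidently handles the same case differently.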

How can we reproduce it (as minimally and precisely as possible)?

Run the crictl exec command above against any running container.
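For anyone reproducing this, the usual detach pattern (redirect the background process's stdio away from the session, optionally with setsid to start a new session) returns immediately in a plain shell; assuming the same fd-inheritance mechanism applies to the exec stream, it should help inside a container as well. A local sketch, no runtime needed (setsid here is the util-linux tool, an assumption about the environment):

```shell
#!/bin/sh
# Two detach styles; "| cat" stands in for the exec session's output pipe.

start=$(date +%s)
sh -c 'nohup sleep 2 >/dev/null 2>&1 &' | cat   # stdio pointed away from the pipe
redirected=$(( $(date +%s) - start ))

start=$(date +%s)
sh -c 'setsid sleep 2 >/dev/null 2>&1 &' | cat  # new session as well
sessioned=$(( $(date +%s) - start ))

echo "redirected=${redirected}s setsid=${sessioned}s"
```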

Anything else we need to know?

No response

CRI-O and Kubernetes version

$ crio --version
crio version 1.24.4
Version:          1.24.4
GitCommit:        69945ab622a70ab01f4b0df10b107bc48e38dc9e
GitTreeState:     clean
BuildDate:        2023-06-14T16:22:54Z
GoVersion:        go1.19.9
Compiler:         gc
Platform:         linux/amd64
Linkmode:         dynamic
BuildTags:        exclude_graphdriver_devicemapper, seccomp, selinux
SeccompEnabled:   true
AppArmorEnabled:  false
$ kubectl version --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.13-ves",
    "gitCommit": "7bebfe5dd7222ed749dfa9f1c47f41b45dbea0d7",
    "gitTreeState": "clean",
    "buildDate": "2023-05-29T14:45:00Z",
    "goVersion": "go1.19.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4",
  "serverVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.13",
    "gitCommit": "49433308be5b958856b6949df02b716e0a7cf0a3",
    "gitTreeState": "clean",
    "buildDate": "2023-04-12T12:08:36Z",
    "goVersion": "go1.19.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

OS version

# On Linux:
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7.2009.46 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7.2009.46 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
OSTREE_VERSION=7.2009.46
$ uname -a
Linux master-0 4.18.0-240.10.1.ves3.el7.x86_64 #1 SMP Wed Oct 4 19:00:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Additional environment details (AWS, VirtualBox, physical, etc.)

@vedujoshi vedujoshi added the kind/bug Categorizes issue or PR as related to a bug. label Dec 28, 2023
@haircommander haircommander added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jan 2, 2024
@haircommander
Member

interesting, I find myself wondering if containerd behaves the same or if we differ from it as well?

@btwshivam

hey, I would like to work on this, please assign me

@haircommander
Member

assigned! thanks for your interest :)

@kwilczynski
Member

@shivam200446 , checking-in to see how you are getting on. Do you need any help?

@prakrit55

hey @kwilczynski, I would like to give it a try

@PiyushRaj927

interesting, I find myself wondering if containerd behaves the same or if we differ from it as well?

To find that out, I ran tests on both cri-o and containerd using crictl, and here are my findings:

time sudo crictl  exec -i 27b70a /bin/bash -c "nohup sleep 15 & "
sudo crictl exec -i 27b70a /bin/bash -c "nohup sleep 15 & "  0.00s user 0.02s system 0% cpu 15.178 total

➜ time sudo crictl --runtime-endpoint="unix:///run/containerd/containerd.sock" exec -i 83 /bin/bash -c "nohup sleep 15 & " 
sudo crictl --runtime-endpoint="unix:///run/containerd/containerd.sock" exec   0.01s user 0.01s system 0% cpu 15.665 total

➜ time docker exec -i e8b540b0df58 /bin/bash -c "nohup sleep 10 & "  
docker exec -i e8b540b0df58 /bin/bash -c "nohup sleep 10 & "  0.02s user 0.02s system 1% cpu 2.170 total

It appears that both the CRI-O and containerd runtimes wait the full 15 seconds instead of returning instantly. Additionally, when I streamed events from containerd, I noticed that the exec tasks exit immediately, as expected. However, the crictl command returns only after the sleep command finishes.

➜ sudo ctr --debug events 
2024-02-12 14:17:50.202554306 +0000 UTC moby /tasks/exec-added {"container_id":"e8b540b0df586582659aa8b78d395c9c5d466fee559e4854ceeb33c8977c248c","exec_id":"db7633c1c987d17ab101294cd44e614d44936ebd033a446eaec6799be3ca8c94"}
2024-02-12 14:17:50.341529587 +0000 UTC moby /tasks/exec-started {"container_id":"e8b540b0df586582659aa8b78d395c9c5d466fee559e4854ceeb33c8977c248c","exec_id":"db7633c1c987d17ab101294cd44e614d44936ebd033a446eaec6799be3ca8c94","pid":14735}
2024-02-12 14:17:50.341874955 +0000 UTC moby /tasks/exit {"container_id":"e8b540b0df586582659aa8b78d395c9c5d466fee559e4854ceeb33c8977c248c","id":"db7633c1c987d17ab101294cd44e614d44936ebd033a446eaec6799be3ca8c94","pid":14735,"exited_at":"2024-02-12T14:17:50.340274073Z"}
2024-02-12 14:18:09.18425279 +0000 UTC k8s.io /tasks/exec-added {"container_id":"83f91711772455acbaccdc473063ee47c1e6106ea09ecfa92c360cd44fb81a77","exec_id":"747066dbfd6a5faa3f7f07f22dda56b82efa26b85f44c6446c4c06e50fd87f26"}
2024-02-12 14:18:09.291344753 +0000 UTC k8s.io /tasks/exec-started {"container_id":"83f91711772455acbaccdc473063ee47c1e6106ea09ecfa92c360cd44fb81a77","exec_id":"747066dbfd6a5faa3f7f07f22dda56b82efa26b85f44c6446c4c06e50fd87f26","pid":14806}
2024-02-12 14:18:09.29464263 +0000 UTC k8s.io /tasks/exit {"container_id":"83f91711772455acbaccdc473063ee47c1e6106ea09ecfa92c360cd44fb81a77","id":"747066dbfd6a5faa3f7f07f22dda56b82efa26b85f44c6446c4c06e50fd87f26","pid":14806,"exited_at":"2024-02-12T14:18:09.293761675Z"}

I believe there might be an issue with crictl. I'd like to confirm by streaming events from CRI-O; any guidance on how to do this would be appreciated.
@haircommander and @kwilczynski, can you share your thoughts on this?

@kwilczynski
Member

@PiyushRaj927, these events look like API access logs, perhaps? I don't know what these are precisely.

For CRI-O, you can enable the debug log level (or even "trace", which is very verbose) and start the CRI-O process in the foreground to get the output directly in your terminal, which is helpful for troubleshooting.

Would this help?

@PiyushRaj927

@PiyushRaj927, these events look like API access logs, perhaps? I don't know what these are precisely.

Yes, they are logs from the containerd runtime. I want to draw your attention to the timestamps, which indicate that the exec tasks exited immediately in both the docker and crictl cases. However, with crictl, the prompt was returned only after the sleep command finished.

Also, when I used the -t flag in crictl, the expected behavior was observed.

time sudo crictl  exec -i 27b70a /bin/bash -c "nohup sleep 15 &" 
sudo crictl exec -i 27b70a /bin/bash -c "nohup sleep 15 &"  0.01s user 0.01s system 0% cpu 15.175 total
➜ time sudo crictl  exec -it 27b70a /bin/bash -c "nohup sleep 15 &" 
sudo crictl exec -it 27b70a /bin/bash -c "nohup sleep 15 &"  0.01s user 0.01s system 8% cpu 0.170 total
➜ time sudo crictl --runtime-endpoint="unix:///run/containerd/containerd.sock" exec -it 83 /bin/bash -c "nohup sleep 15 & "
sudo crictl --runtime-endpoint="unix:///run/containerd/containerd.sock" exec   0.01s user 0.01s system 2% cpu 0.485 total

Could the issue be related to not having a tty attached? I'm uncertain about how crictl handles stdin and the tty. I will try to debug crictl next.
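One way to check the non-tty hypothesis locally (a sketch of my assumption about the mechanism, not of crictl's actual internals): when stdout is a pipe, the backgrounded child's fd 1 is that very pipe, which is what keeps the stream from reaching EOF. On Linux this is directly visible via /proc:

```shell
#!/bin/sh
# Background a child behind a pipe and inspect its fd 1: the link
# target is "pipe:[<inode>]" on Linux, i.e. the child holds the pipe
# open, so the reader cannot see EOF until the child exits.
out=$(sh -c 'sleep 3 & echo $!' | { read -r pid; readlink "/proc/$pid/fd/1"; })
echo "$out"
```

With a tty attached, fd 1 would instead point at a pty slave, and the server can tear the session down without waiting for pipe EOF, which would be consistent with the `-t` behaviour observed above.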

@kwilczynski Appreciate the help with cri-o, though the containerd logs were more informative about exec exits. Maybe someone else can interpret them better. Logs attached below.

DEBU[2024-02-13 13:44:59.208098143+05:30] Request: &ExecRequest{ContainerId:27b70a,Cmd:[/bin/bash -c nohup sleep 15 &],Tty:false,Stdin:true,Stdout:true,Stderr:true,}  file="otel-collector/interceptors.go:62" id=49c6eae4-f106-490e-8e6a-ae3d0b116719 name=/runtime.v1.RuntimeService/Exec
DEBU[2024-02-13 13:44:59.208306023+05:30] Response: &ExecResponse{Url:http://127.0.0.1:46607/exec/9avkp9s1,}  file="otel-collector/interceptors.go:74" id=49c6eae4-f106-490e-8e6a-ae3d0b116719 name=/runtime.v1.RuntimeService/Exec
DEBU[2024-02-13 13:45:23.897244896+05:30] Request: &VersionRequest{Version:,}           file="otel-collector/interceptors.go:62" id=ca688d2b-e376-4008-8716-9806acc32ef1 name=/runtime.v1.RuntimeService/Version
DEBU[2024-02-13 13:45:23.897868126+05:30] Using /sbin/apparmor_parser binary            file="supported/supported.go:61"
DEBU[2024-02-13 13:45:23.898012938+05:30] Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.30.0,RuntimeApiVersion:v1,}  file="otel-collector/interceptors.go:74" id=ca688d2b-e376-4008-8716-9806acc32ef1 name=/runtime.v1.RuntimeService/Version
DEBU[2024-02-13 13:45:23.901699640+05:30] Request: &ExecRequest{ContainerId:27b70a,Cmd:[/bin/bash -c nohup sleep 15 &],Tty:true,Stdin:true,Stdout:true,Stderr:false,}  file="otel-collector/interceptors.go:62" id=73382c7a-d58b-4ceb-aea2-4dc33924ead6 name=/runtime.v1.RuntimeService/Exec
DEBU[2024-02-13 13:45:23.902185302+05:30] Response: &ExecResponse{Url:http://127.0.0.1:46607/exec/ff75epAA,}  file="otel-collector/interceptors.go:74" id=73382c7a-d58b-4ceb-aea2-4dc33924ead6 name=/runtime.v1.RuntimeService/Exec
WARN[2024-02-13 13:45:24.002563262+05:30] Stdout copy error: read /dev/ptmx: input/output error  file="oci/oci_unix.go:63"
Full crio debug output
➜ sudo crio -l DEBUG
INFO[2024-02-13 13:44:52.294925693+05:30] Starting CRI-O, version: 1.30.0, git: b4bd55d02e9be1196284997347e0a3ca20e27589(clean) 
INFO[2024-02-13 13:44:52.301782364+05:30] Node configuration value for hugetlb cgroup is true 
INFO[2024-02-13 13:44:52.302007807+05:30] Node configuration value for pid cgroup is true 
INFO[2024-02-13 13:44:52.302293964+05:30] Node configuration value for memoryswap cgroup is true 
INFO[2024-02-13 13:44:52.302423668+05:30] Node configuration value for cgroup v2 is true 
INFO[2024-02-13 13:44:52.313414693+05:30] Node configuration value for systemd AllowedCPUs is true 
DEBU[2024-02-13 13:44:52.313849959+05:30] Cached value indicated that overlay is supported  file="overlay/overlay.go:247"
DEBU[2024-02-13 13:44:52.314050866+05:30] Cached value indicated that overlay is supported  file="overlay/overlay.go:247"
DEBU[2024-02-13 13:44:52.314181092+05:30] Cached value indicated that metacopy is not being used  file="overlay/overlay.go:408"
DEBU[2024-02-13 13:44:52.314406404+05:30] Cached value indicated that native-diff is usable  file="overlay/overlay.go:798"
DEBU[2024-02-13 13:44:52.314531619+05:30] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false  file="overlay/overlay.go:467"
INFO[2024-02-13 13:44:52.314647266+05:30] [graphdriver] using prior storage driver: overlay  file="drivers/driver.go:406"
INFO[2024-02-13 13:44:52.318125474+05:30] Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL  file="capabilities/capabilities_linux.go:38"
DEBU[2024-02-13 13:44:52.318445776+05:30] Found valid runtime "runc" for runtime_path "/usr/bin/crio-runc"  file="config/config.go:1541"
DEBU[2024-02-13 13:44:52.319752301+05:30] Allowed annotations for runtime: []           file="config/config.go:1576"
DEBU[2024-02-13 13:44:52.319908825+05:30] Found valid runtime "crun" for runtime_path "/usr/bin/crio-crun"  file="config/config.go:1541"
DEBU[2024-02-13 13:44:52.319958768+05:30] Allowed annotations for runtime: [io.containers.trace-syscall]  file="config/config.go:1576"
DEBU[2024-02-13 13:44:52.374378073+05:30] Loading registries configuration "/etc/containers/registries.conf"  file="sysregistriesv2/system_registries_v2.go:926"
DEBU[2024-02-13 13:44:52.374746283+05:30] Loading registries configuration "/etc/containers/registries.conf.d/crio.conf"  file="sysregistriesv2/system_registries_v2.go:926"
DEBU[2024-02-13 13:44:52.375008976+05:30] Using hooks directory: /usr/share/containers/oci/hooks.d  file="config/config.go:1142"
DEBU[2024-02-13 13:44:52.375222939+05:30] Using pinns from $PATH: /usr/bin/pinns        file="config/config.go:1408"
INFO[2024-02-13 13:44:52.375455054+05:30] Checkpoint/restore support disabled           file="config/config.go:1167"
INFO[2024-02-13 13:44:52.375574098+05:30] Using seccomp default profile when unspecified: true  file="seccomp/seccomp.go:101"
INFO[2024-02-13 13:44:52.375695996+05:30] Using the internal default seccomp profile    file="seccomp/seccomp.go:154"
INFO[2024-02-13 13:44:52.375851097+05:30] Installing default AppArmor profile: crio-default  file="apparmor/apparmor_linux.go:46"
DEBU[2024-02-13 13:44:52.376002311+05:30] Using /sbin/apparmor_parser binary            file="supported/supported.go:61"
INFO[2024-02-13 13:44:52.422917455+05:30] No blockio config file specified, blockio not configured  file="blockio/blockio.go:74"
INFO[2024-02-13 13:44:52.423212518+05:30] RDT not available in the host system          file="rdt/rdt.go:56"
INFO[2024-02-13 13:44:52.423271018+05:30] Using conmon executable: /usr/bin/crio-conmon  file="config/config.go:1414"
INFO[2024-02-13 13:44:52.425243070+05:30] Conmon does support the --sync option         file="conmonmgr/conmonmgr.go:85"
INFO[2024-02-13 13:44:52.425368876+05:30] Conmon does support the --log-global-size-max option  file="conmonmgr/conmonmgr.go:71"
INFO[2024-02-13 13:44:52.425403040+05:30] Using conmon executable: /usr/bin/crio-conmon  file="config/config.go:1414"
INFO[2024-02-13 13:44:52.426954323+05:30] Conmon does support the --sync option         file="conmonmgr/conmonmgr.go:85"
INFO[2024-02-13 13:44:52.427056244+05:30] Conmon does support the --log-global-size-max option  file="conmonmgr/conmonmgr.go:71"
INFO[2024-02-13 13:44:52.431819767+05:30] Found CNI network crio (type=bridge) at /etc/cni/net.d/11-crio-ipv4-bridge.conflist  file="ocicni/ocicni.go:343"
INFO[2024-02-13 13:44:52.432040150+05:30] Updated default CNI network name to crio      file="ocicni/ocicni.go:375"
DEBU[2024-02-13 13:44:52.432566978+05:30] reading hooks from /usr/share/containers/oci/hooks.d  file="hooks/read.go:65"
INFO[2024-02-13 13:44:52.432832267+05:30] Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus  file="server/server.go:413"
INFO[2024-02-13 13:44:52.432903120+05:30] Restore irqbalance config: failed to get current CPU ban list, ignoring  file="runtimehandlerhooks/high_performance_hooks_linux.go:913"
DEBU[2024-02-13 13:44:52.438293190+05:30] Golang's threads limit set to 20790           file="server/server_linux.go:139"
INFO[2024-02-13 13:44:52.501985442+05:30] Got pod network &{Name:nginx-sandbox-containred Namespace:default ID:9e5bf3e00f32a5d38deea3e53859ab1b9e1d03de0c17f973d59546638a597a84 UID:hdishd83djaidwnduwk28bcsb NetNS:/var/run/netns/5f2d6756-a61c-4044-a9c9-fbd5828eae1c Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000440538}] Aliases:map[]}  file="ocicni/ocicni.go:795"
INFO[2024-02-13 13:44:52.502555761+05:30] Checking pod default_nginx-sandbox-containred for CNI network crio (type=bridge)  file="ocicni/ocicni.go:695"
INFO[2024-02-13 13:44:52.526457966+05:30] Checking CNI network crio (config version=1.0.0)  file="ocicni/ocicni.go:732"
DEBU[2024-02-13 13:44:52.527106983+05:30] Sandboxes: [0xc000310380]                     file="server/server.go:531"
INFO[2024-02-13 13:44:52.529831279+05:30] Registered SIGHUP reload watcher              file="server/server.go:583"
DEBU[2024-02-13 13:44:52.531874304+05:30] Metrics are disabled                          file="server/server.go:541"
INFO[2024-02-13 13:44:52.531924989+05:30] Starting seccomp notifier watcher             file="server/server_linux.go:22"
INFO[2024-02-13 13:44:52.532002554+05:30] Create NRI interface                          file="nri/nri.go:94"
INFO[2024-02-13 13:44:52.532130825+05:30] NRI interface is disabled in the configuration.  file="nri/nri.go:101"
DEBU[2024-02-13 13:44:52.535659479+05:30] monitoring "/usr/share/containers/oci/hooks.d" for hooks  file="hooks/monitor.go:43"
DEBU[2024-02-13 13:44:59.204070965+05:30] Request: &VersionRequest{Version:,}           file="otel-collector/interceptors.go:62" id=a7d0c6f1-c907-4452-9ea0-4a76dd027997 name=/runtime.v1.RuntimeService/Version
DEBU[2024-02-13 13:44:59.204272663+05:30] Using /sbin/apparmor_parser binary            file="supported/supported.go:61"
DEBU[2024-02-13 13:44:59.204886456+05:30] Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.30.0,RuntimeApiVersion:v1,}  file="otel-collector/interceptors.go:74" id=a7d0c6f1-c907-4452-9ea0-4a76dd027997 name=/runtime.v1.RuntimeService/Version
DEBU[2024-02-13 13:44:59.208098143+05:30] Request: &ExecRequest{ContainerId:27b70a,Cmd:[/bin/bash -c nohup sleep 15 &],Tty:false,Stdin:true,Stdout:true,Stderr:true,}  file="otel-collector/interceptors.go:62" id=49c6eae4-f106-490e-8e6a-ae3d0b116719 name=/runtime.v1.RuntimeService/Exec
DEBU[2024-02-13 13:44:59.208306023+05:30] Response: &ExecResponse{Url:http://127.0.0.1:46607/exec/9avkp9s1,}  file="otel-collector/interceptors.go:74" id=49c6eae4-f106-490e-8e6a-ae3d0b116719 name=/runtime.v1.RuntimeService/Exec
DEBU[2024-02-13 13:45:23.897244896+05:30] Request: &VersionRequest{Version:,}           file="otel-collector/interceptors.go:62" id=ca688d2b-e376-4008-8716-9806acc32ef1 name=/runtime.v1.RuntimeService/Version
DEBU[2024-02-13 13:45:23.897868126+05:30] Using /sbin/apparmor_parser binary            file="supported/supported.go:61"
DEBU[2024-02-13 13:45:23.898012938+05:30] Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.30.0,RuntimeApiVersion:v1,}  file="otel-collector/interceptors.go:74" id=ca688d2b-e376-4008-8716-9806acc32ef1 name=/runtime.v1.RuntimeService/Version
DEBU[2024-02-13 13:45:23.901699640+05:30] Request: &ExecRequest{ContainerId:27b70a,Cmd:[/bin/bash -c nohup sleep 15 &],Tty:true,Stdin:true,Stdout:true,Stderr:false,}  file="otel-collector/interceptors.go:62" id=73382c7a-d58b-4ceb-aea2-4dc33924ead6 name=/runtime.v1.RuntimeService/Exec
DEBU[2024-02-13 13:45:23.902185302+05:30] Response: &ExecResponse{Url:http://127.0.0.1:46607/exec/ff75epAA,}  file="otel-collector/interceptors.go:74" id=73382c7a-d58b-4ceb-aea2-4dc33924ead6 name=/runtime.v1.RuntimeService/Exec
WARN[2024-02-13 13:45:24.002563262+05:30] Stdout copy error: read /dev/ptmx: input/output error  file="oci/oci_unix.go:63"
^CDEBU[2024-02-13 13:45:38.825989084+05:30] received signal                               file="crio/main.go:57" signal=interrupt
DEBU[2024-02-13 13:45:38.826233404+05:30] Caught SIGINT                                 file="crio/main.go:68"
DEBU[2024-02-13 13:45:38.827250208+05:30] Closed http server                            file="crio/main.go:396"
DEBU[2024-02-13 13:45:38.833623299+05:30] Closing exit monitor...                       file="server/server.go:776"
DEBU[2024-02-13 13:45:38.842381529+05:30] hook monitoring canceled: context canceled    file="hooks/monitor.go:60"
DEBU[2024-02-13 13:45:39.003075320+05:30] Closed stream server                          file="crio/main.go:426"
DEBU[2024-02-13 13:45:39.003376206+05:30] Closed monitors                               file="crio/main.go:428"
DEBU[2024-02-13 13:45:39.003711707+05:30] Closed hook monitor                           file="crio/main.go:431"
DEBU[2024-02-13 13:45:39.004315690+05:30] Closed main server                            file="crio/main.go:436"


A friendly reminder that this issue had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 15, 2024
@kwilczynski
Member

/remove-lifecycle stale

@openshift-ci openshift-ci bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 2, 2024
@kwilczynski
Member

/assign haircommander
/assign kwilczynski


github-actions bot commented Jun 2, 2024

A friendly reminder that this issue had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 2, 2024
Projects
None yet
Development

No branches or pull requests

6 participants