
kubelet restart: failed to initialize top level QOS containers: root container [kubepods] doesn't exist #7701

Open
zhurkin opened this issue Jan 19, 2024 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@zhurkin

zhurkin commented Jan 19, 2024

What happened?

The kubelet does not start automatically after the machine is powered off (cold shutdown) and booted again.

What did you expect to happen?

The kubelet starts automatically after a cold boot.

How can we reproduce it (as minimally and precisely as possible)?

Debian 12.2 / 12.4
kernel - 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30) x86_64 GNU/Linux
kubernetes v1.27.6 / v1.27.10 / v1.28.6
crio version 1.27.3 / 1.28.3
crun 1.9.2 / 1.13
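
With one of these setups, the failure shows up right after a cold power-off and boot; a quick way to see it (assuming a standard kubeadm/systemd layout, so the unit is simply kubelet.service) is:

$ systemctl status kubelet
$ journalctl -u kubelet -b --no-pager | grep -i "top level QOS"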

Anything else we need to know?

I was unable to reproduce the behavior on CentOS 8.x or similar CentOS-based operating systems.

While searching for information on the error, I found:
https://github.com/kubernetes/kubernetes/issues/122681
#4035
k3s-io/k3s#6318

CRI-O and Kubernetes version

$ crio --version
crio version 1.28.3
Version:        1.28.3
GitCommit:      94c0704abe46255b169a8dae736ff7f86836ec74
GitCommitDate:  2024-01-11T21:36:06Z
GitTreeState:   clean
BuildDate:      1970-01-01T00:00:00Z
GoVersion:      go1.20.10
Compiler:       gc
Platform:       linux/amd64
Linkmode:       static
BuildTags:
  static
  netgo
  osusergo
  exclude_graphdriver_btrfs
  exclude_graphdriver_devicemapper
  seccomp
  apparmor
  selinux
LDFlags:          unknown
SeccompEnabled:   true
AppArmorEnabled:  true
$ kubectl version --output=json
{
  "clientVersion": {
    "major": "1",
    "minor": "28",
    "gitVersion": "v1.28.6",
    "gitCommit": "be3af46a4654bdf05b4838fe94e95ec8c165660c",
    "gitTreeState": "clean",
    "buildDate": "2024-01-17T13:49:03Z",
    "goVersion": "go1.20.13",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v5.0.4-0.20230601165947-6ce0bf390ce3",
  "serverVersion": {
    "major": "1",
    "minor": "28",
    "gitVersion": "v1.28.6",
    "gitCommit": "be3af46a4654bdf05b4838fe94e95ec8c165660c",
    "gitTreeState": "clean",
    "buildDate": "2024-01-17T13:39:00Z",
    "goVersion": "go1.20.13",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

OS version

# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30) x86_64 GNU/Linux

Additional environment details (AWS, VirtualBox, physical, etc.)

Physical hardware, Hyper-V, and QEMU.
@zhurkin zhurkin added the kind/bug Categorizes issue or PR as related to a bug. label Jan 19, 2024
@zhurkin
Author

zhurkin commented Jan 19, 2024

I forgot to attach the logs.
Kubelet log after a cold restart of the machine:

kubelet[539]: I0119 13:38:02.257657     539 server.go:895] "Client rotation is on, will bootstrap in background"
kubelet[539]: I0119 13:38:02.259201     539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
kubelet[539]: I0119 13:38:02.266154     539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
kubelet[539]: I0119 13:38:02.293486     539 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
kubelet[539]: I0119 13:38:02.294335     539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
kubelet[539]: I0119 13:38:02.294606     539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","Kub>
kubelet[539]: I0119 13:38:02.295079     539 topology_manager.go:138] "Creating topology manager with none policy"
kubelet[539]: I0119 13:38:02.295094     539 container_manager_linux.go:301] "Creating device plugin manager"
kubelet[539]: I0119 13:38:02.295480     539 state_mem.go:36] "Initialized new in-memory state store"
kubelet[539]: I0119 13:38:02.299174     539 kubelet.go:393] "Attempting to sync node with API server"
kubelet[539]: I0119 13:38:02.299741     539 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
kubelet[539]: I0119 13:38:02.300108     539 kubelet.go:309] "Adding apiserver pod source"
kubelet[539]: I0119 13:38:02.300209     539 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
kubelet[539]: W0119 13:38:02.300558     539 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://kub-lb-api-icce.fltd.loc:6443/api/v1/nodes?fieldSelector=metadata.name>
kubelet[539]: E0119 13:38:02.300706     539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://kub-lb-api-icce.fltd.loc:6443/api/v1/nodes?f>
kubelet[539]: W0119 13:38:02.305555     539 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://kub-lb-api-icce.fltd.loc:6443/api/v1/services?limit=500&resourceVer>
kubelet[539]: E0119 13:38:02.305628     539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://kub-lb-api-icce.fltd.loc:6443/api/v1/s>
kubelet[539]: I0119 13:38:02.305787     539 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="cri-o" version="1.28.3" apiVersion="v1"
kubelet[539]: I0119 13:38:02.310785     539 server.go:1232] "Started kubelet"
kubelet[539]: I0119 13:38:02.313468     539 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
kubelet[539]: I0119 13:38:02.313484     539 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
kubelet[539]: I0119 13:38:02.313988     539 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
kubelet[539]: E0119 13:38:02.314362     539 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"icce-k8s-ctrl01.17abb95a847b632f", GenerateName:"",>
kubelet[539]: I0119 13:38:02.315760     539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
kubelet[539]: I0119 13:38:02.315949     539 server.go:462] "Adding debug handlers to kubelet server"
kubelet[539]: E0119 13:38:02.320361     539 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
kubelet[539]: E0119 13:38:02.323479     539 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"icce-k8s-ctrl01\" not found"
kubelet[539]: I0119 13:38:02.323504     539 volume_manager.go:291] "Starting Kubelet Volume Manager"
kubelet[539]: I0119 13:38:02.323612     539 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
kubelet[539]: W0119 13:38:02.328092     539 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://kub-lb-api-icce.fltd.loc:6443/apis/storage.k8s.io/v1/csidrivers?l>
kubelet[539]: E0119 13:38:02.328152     539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://kub-lb-api-icce.fltd.loc:6443/apis>
kubelet[539]: E0119 13:38:02.328212     539 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://kub-lb-api-icce.fltd.loc:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ic>
kubelet[539]: I0119 13:38:02.334176     539 reconciler_new.go:29] "Reconciler: start to sync state"
kubelet[539]: I0119 13:38:02.346037     539 cpu_manager.go:214] "Starting CPU manager" policy="none"
kubelet[539]: I0119 13:38:02.346052     539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
kubelet[539]: I0119 13:38:02.346067     539 state_mem.go:36] "Initialized new in-memory state store"
kubelet[539]: I0119 13:38:02.346384     539 state_mem.go:88] "Updated default CPUSet" cpuSet=""
kubelet[539]: I0119 13:38:02.346403     539 state_mem.go:96] "Updated CPUSet assignments" assignments={}
kubelet[539]: I0119 13:38:02.346410     539 policy_none.go:49] "None policy: Start"
kubelet[539]: I0119 13:38:02.346830     539 memory_manager.go:169] "Starting memorymanager" policy="None"
kubelet[539]: I0119 13:38:02.346875     539 state_mem.go:35] "Initializing new in-memory state store"
kubelet[539]: I0119 13:38:02.347170     539 state_mem.go:75] "Updated machine memory state"
kubelet[539]: E0119 13:38:02.387527     539 kubelet.go:1511] "Failed to start ContainerManager" err="failed to initialize top level QOS containers: root container [kubepods] doesn't exist"

Kubelet log after starting it by hand:

kubelet[604]: I0119 13:42:03.333832     604 volume_manager.go:291] "Starting Kubelet Volume Manager"
kubelet[604]: I0119 13:42:03.333975     604 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
kubelet[604]: I0119 13:42:03.334858     604 reconciler_new.go:29] "Reconciler: start to sync state"
kubelet[604]: W0119 13:42:03.335234     604 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://kub-lb-api-loc:6443/apis/storage.k8s.io/v1/csidrivers?l>
kubelet[604]: E0119 13:42:03.335337     604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://kub-lb-api-loc:6443/apis>
kubelet[604]: E0119 13:42:03.336302     604 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://kub-lb-api-loc:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ic>
kubelet[604]: I0119 13:42:03.339311     604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
kubelet[604]: I0119 13:42:03.340204     604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
kubelet[604]: I0119 13:42:03.340224     604 status_manager.go:217] "Starting to sync pod status with apiserver"
kubelet[604]: I0119 13:42:03.340236     604 kubelet.go:2303] "Starting kubelet main sync loop"
kubelet[604]: E0119 13:42:03.340275     604 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
kubelet[604]: W0119 13:42:03.345696     604 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://kub-lb-api-loc:6443/apis/node.k8s.io/v1/runtimeclass>
kubelet[604]: E0119 13:42:03.345726     604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://kub-lb-api-loc:644>
kubelet[604]: I0119 13:42:03.366008     604 cpu_manager.go:214] "Starting CPU manager" policy="none"
kubelet[604]: I0119 13:42:03.366023     604 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
kubelet[604]: I0119 13:42:03.366035     604 state_mem.go:36] "Initialized new in-memory state store"
kubelet[604]: I0119 13:42:03.366156     604 state_mem.go:88] "Updated default CPUSet" cpuSet=""
kubelet[604]: I0119 13:42:03.366173     604 state_mem.go:96] "Updated CPUSet assignments" assignments={}
kubelet[604]: I0119 13:42:03.366179     604 policy_none.go:49] "None policy: Start"
kubelet[604]: I0119 13:42:03.366589     604 memory_manager.go:169] "Starting memorymanager" policy="None"
kubelet[604]: I0119 13:42:03.366602     604 state_mem.go:35] "Initializing new in-memory state store"
kubelet[604]: I0119 13:42:03.366782     604 state_mem.go:75] "Updated machine memory state"
kubelet[604]: I0119 13:42:03.418832     604 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
kubelet[604]: E0119 13:42:03.419335     604 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k8s-ctrl01\" not found"
kubelet[604]: I0119 13:42:03.420083     604 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
kubelet[604]: I0119 13:42:03.434567     604 kubelet_node_status.go:70] "Attempting to register node" node="k8s-ctrl01"
kubelet[604]: E0119 13:42:03.434823     604 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://kub-lb-api-loc:6443/api/v1/nodes\": dial tcp 172.17.100.12:6443: connect: con>
kubelet[604]: I0119 13:42:03.441302     604 topology_manager.go:215] "Topology Admit Handler" podUID="24d1bedeb51fad5af2d555295ef8c203" podNamespace="kube-system" podName="etcd-k8s-ctrl01"
kubelet[604]: I0119 13:42:03.441810     604 topology_manager.go:215] "Topology Admit Handler" podUID="d9e600a65f178daeb0a9a7cb85fc3cf9" podNamespace="kube-system" podName="kube-apiserver-k8s-ctrl01"
kubelet[604]: I0119 13:42:03.442218     604 topology_manager.go:215] "Topology Admit Handler" podUID="37b578e358818ddec2c649de9de2fc4c" podNamespace="kube-system" podName="kube-controller-manager-k8s-ctrl01"
kubelet[604]: I0119 13:42:03.442921     604 topology_manager.go:215] "Topology Admit Handler" podUID="692e90e189e0f4773fac788d44280aa9" podNamespace="kube-system" podName="kube-scheduler-k8s-ctrl01"
kubelet[604]: E0119 13:42:03.537295     604 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://kub-lb-api-loc:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ic>
kubelet[604]: I0119 13:42:03.545193     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37b578e35881>
kubelet[604]: I0119 13:42:03.545304     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/692e90e189e0f4773fac788d442>
kubelet[604]: I0119 13:42:03.545384     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/24d1bedeb51fad5af2d555295ef>
kubelet[604]: I0119 13:42:03.545458     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9e600a65f17>
kubelet[604]: I0119 13:42:03.545527     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/37b578e358818ddec2c649d>
kubelet[604]: I0119 13:42:03.545613     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37b578>
kubelet[604]: I0119 13:42:03.545678     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9e600>
kubelet[604]: I0119 13:42:03.545738     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9e600a65f178daeb0a9a7cb85fc>
kubelet[604]: I0119 13:42:03.545811     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37b578e358818ddec2c649de9de2f>
kubelet[604]: I0119 13:42:03.545900     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37b578e358818ddec2c649de9de>
kubelet[604]: I0119 13:42:03.545961     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/24d1bedeb51fad5af2d555295ef8>
kubelet[604]: I0119 13:42:03.546027     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9e600a65f178daeb0a9a7cb85fc3>
kubelet[604]: I0119 13:42:03.546095     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37b578e358818ddec2>
kubelet[604]: I0119 13:42:03.546189     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37b578e358818ddec2c649de9de2>
kubelet[604]: I0119 13:42:03.546256     604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9e600a65f178daeb0>
kubelet[604]: I0119 13:42:03.635773     604 kubelet_node_status.go:70] "Attempting to register node" node="k8s-ctrl01"
kubelet[604]: E0119 13:42:03.636161     604 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://kub-lb-api-loc:6443/api/v1/nodes\": dial tcp 172.17.100.12:6443: connect: con>
kubelet[604]: E0119 13:42:03.938154     604 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://kub-lb-api-loc:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ic>
kubelet[604]: I0119 13:42:04.037097     604 kubelet_node_status.go:70] "Attempting to register node" node="k8s-ctrl01"
kubelet[604]: E0119 13:42:04.037307     604 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://kub-lb-api-loc:6443/api/v1/nodes\": dial tcp 172.17.100.12:6443: connect: con>
kubelet[604]: W0119 13:42:04.278384     604 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://kub-lb-api-loc:6443/apis/storage.k8s.io/v1/csidrivers?l>
kubelet[604]: E0119 13:42:04.278425     604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://kub-lb-api-loc:6443/apis>
kubelet[604]: W0119 13:42:04.420218     604 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://kub-lb-api-loc:6443/apis/node.k8s.io/v1/runtimeclass>
kubelet[604]: E0119 13:42:04.420643     604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://kub-lb-api-loc:644>
kubelet[604]: I0119 13:42:04.838526     604 kubelet_node_status.go:70] "Attempting to register node" node="k8s-ctrl01"
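
For what it's worth, when the kubelet fails with the "root container [kubepods] doesn't exist" error, a rough check of the cgroup state on a cgroup v2 node (paths assume cgroupDriver: systemd, so kubepods lives at /sys/fs/cgroup/kubepods.slice) could look like:

$ ls -d /sys/fs/cgroup/kubepods.slice 2>/dev/null || echo "kubepods.slice is missing"
$ cat /sys/fs/cgroup/cgroup.controllers        # controllers available at the root
$ cat /sys/fs/cgroup/cgroup.subtree_control    # controllers enabled for child cgroups
$ systemctl status kubepods.slice --no-pager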

@haircommander
Member

I seem to remember this has something to do with cgroupv2. I also wonder if it's an error coming from libcontainer. @kolyshkin does anything come to mind immediately?

@harche
Contributor

harche commented Feb 8, 2024

I seem to remember this has something to do with cgroupv2

Another instance where this was observed while using cgroup v2 - kubernetes/kubernetes#122955 (comment)
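
For reference, a quick generic check (not from the linked issue) to confirm that an affected node is actually on cgroup v2:

$ stat -fc %T /sys/fs/cgroup
cgroup2fs    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" usually indicates the legacy v1 layout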

@harche
Contributor

harche commented Feb 8, 2024

/assign

@harche
Contributor

harche commented Feb 12, 2024

I was able to observe this on the latest OpenShift as well:

$ oc get clusterversion 
NAME      VERSION                         AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.16.0-0.ci-2024-02-08-031624   True        False         143m    Cluster version is 4.16.0-0.ci-2024-02-08-031624


A friendly reminder that this issue had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2024
@saschagrunert saschagrunert removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 15, 2024
@zhurkin
Author

zhurkin commented Apr 17, 2024

I seem to remember this has something to do with cgroupv2. I also wonder if it's an error coming from libcontainer. @kolyshkin does anything come to mind immediately?

https://github.com/openshift/machine-config-operator/pull/4217/files
That solves the problem. It is not entirely clear why this does not reproduce on all distributions.

@haircommander
Member

haircommander commented Apr 19, 2024

It does, but it wasn't merged because I don't think we should delegate: openshift/machine-config-operator#4217 (comment)

Basically: the problem is that when the kubelet creates the kubepods slice, it doesn't specify AllowedCPUs (which would enable the cpuset controller at the top level), so systemd doesn't create it, and then the kubelet errors out.

Though the fix won't be that simple, I'm afraid.
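
For anyone needing a stopgap, a rough sketch of the kind of mitigation this points at (not a merged or endorsed fix; the drop-in path and CPU range are assumptions, and whether the kubelet-created kubepods.slice actually picks up drop-ins depends on the systemd version and how the slice is created) is to give kubepods.slice an explicit AllowedCPUs= so systemd enables the cpuset controller for it:

$ sudo mkdir -p /etc/systemd/system/kubepods.slice.d
$ printf '[Slice]\nAllowedCPUs=0-%d\n' "$(( $(nproc) - 1 ))" | \
    sudo tee /etc/systemd/system/kubepods.slice.d/99-cpuset.conf
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet

Writing "+cpuset" into /sys/fs/cgroup/cgroup.subtree_control is another thing people try, but systemd can rewrite that file, so treat both as diagnostics/workarounds rather than a real fix.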


A friendly reminder that this issue had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 20, 2024