What is the bug?
The operator sometimes fails to correctly bootstrap/initialize a new cluster; instead it settles into a yellow state with shards stuck in the unassigned and initializing states.
How can one reproduce the bug?
Note that this doesn't always happen, so you might have to try multiple times; however, it happens for me more often than not:
Apply the minimal example below (sketched after these steps). It's basically the first example from the docs, with the now-mandatory TLS configuration added and Dashboards removed. Wait until the bootstrapping finishes.
When the setup process finishes, the bootstrap pod is removed. Around this time the operator sometimes decides to log the event "Starting to rolling restart" and recreates the first node (pod). If this happens, the cluster sometimes ends up in a yellow state that the operator does not resolve. If at this point I manually delete the cluster_manager pod (usually the second node), it is recreated and the issue seems to resolve itself.
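For reference, the manifest is along these lines; a sketch only, assuming the current CRD fields (the version, namespace, disk and resource sizes here are illustrative, not my exact values):
```
kubectl apply -f - <<EOF
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-first-cluster
  namespace: default
spec:
  general:
    serviceName: my-first-cluster
    version: 2.14.0        # illustrative; any recent 2.x
  security:
    tls:                   # the now-mandatory TLS, with generated certs
      transport:
        generate: true
      http:
        generate: true
  nodePools:
    - component: nodes     # matches the my-first-cluster-nodes-* pods
      replicas: 3
      diskSize: "5Gi"      # illustrative
      resources:
        requests:
          memory: "2Gi"
          cpu: "500m"
        limits:
          memory: "2Gi"
          cpu: "500m"
      roles:
        - cluster_manager
        - data
EOF
```
When the cluster gets stuck, the manual workaround is just `kubectl delete pod my-first-cluster-nodes-1` (or whichever pod holds the cluster_manager role at the time).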
What is the expected behavior?
A cluster in a green state after setup, preferably without unnecessary restarts.
What is your host/environment?
minikube v1.33.0 on openSUSE Tumbleweed 20240511 with the docker driver.
I'm currently evaluating the operator locally. That might be part of the problem, as it forces me to run three nodes on a single machine. (It does, however, have sufficient resources to accommodate the nodes; the issue was also reproduced on a MacBook, albeit also with minikube.)
Do you have any additional context?
See the attached files. Some logs are probably missing since a pod was recreated.
kubectl_describe.txt
operator.log
node-2.log
node-1.log
node-0.log
allocation_explain.json
cat_shards.txt
cat_nodes.txt
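Roughly how the attachments were captured, as a sketch; it assumes a port-forward to the cluster service, the demo admin credentials, and the default operator deployment name, all of which may differ in your setup:
```
# Approximate capture commands; names and credentials are assumptions.
kubectl describe opensearchcluster my-first-cluster > kubectl_describe.txt
kubectl logs my-first-cluster-nodes-0 > node-0.log
kubectl logs deploy/opensearch-operator-controller-manager > operator.log  # deployment name may differ
kubectl port-forward svc/my-first-cluster 9200:9200 &
curl -sk -u admin:admin "https://localhost:9200/_cluster/allocation/explain?pretty" > allocation_explain.json
curl -sk -u admin:admin "https://localhost:9200/_cat/shards?v" > cat_shards.txt
curl -sk -u admin:admin "https://localhost:9200/_cat/nodes?v" > cat_nodes.txt
```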
Going to be honest here: not having enough resources to host the cluster is probably where you are running into issues. OpenSearch gets really unstable when there is not enough memory; I've personally experienced this as well when running OpenSearch in Docker containers.
These logs are concerning, but it's hard to say they are unrelated to OOM-type issues.
Node 0 Log:
```
[2024-05-13T17:35:44,103][WARN ][o.o.s.SecurityAnalyticsPlugin] [my-first-cluster-nodes-0] Failed to initialize LogType config index and builtin log types
```
Node 1 Log:
```
[2024-05-13T17:35:42,014][INFO ][o.o.i.i.MetadataService ] [my-first-cluster-nodes-1] ISM config index not exist, so we cancel the metadata migration job.
[2024-05-13T17:35:43,296][ERROR][o.o.s.l.LogTypeService ] [my-first-cluster-nodes-1] Custom LogType Bulk Index had failures:
[2024-05-13T17:35:43,296][ERROR][o.o.s.l.LogTypeService ] [my-first-cluster-nodes-1] Custom LogType Bulk Index had failures:
```
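If you want to rule that out, try giving each node more memory and an explicit heap. A rough sketch of the relevant nodePool fragment, with field names per the operator's nodePool spec and sizes that are guesses (heap at about half the container memory):
```
nodePools:
  - component: nodes
    replicas: 3
    jvm: -Xmx1024M -Xms1024M   # explicit heap, ~half the memory request
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"
```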
Now I can confirm that this issue happens on an actual production Kubernetes cluster with plenty of resources too. The operator erroneously decides to do a rolling restart and fails to complete it, leaving the cluster in a yellow state. It seems like a concurrency issue, as it doesn't always happen.