Hi everyone! I'm currently trying to deploy Kafka on a Kubernetes cluster with 3 master and 3 worker nodes, using ArgoCD for the deployment. Unfortunately, when I bootstrap Kafka in KRaft mode, the brokers fail to come up with the following error:
Liveness probe failed: dial tcp 10.233.68.92:9092: connect: connection refused
If I use ZooKeeper mode, the brokers start successfully. Where might the problem be? Below is the configuration I used.
[2024-05-10 12:44:22,524] INFO [BrokerLifecycleManager id=100] Unable to register the broker because the RPC got timed out before it could be sent. (kafka.server.BrokerLifecycleManager)
[2024-05-10 12:44:22,559] INFO [MetadataLoader id=100] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-05-10 12:44:22,659] INFO [MetadataLoader id=100] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-05-10 12:44:22,722] INFO Terminating process due to signal SIGTERM (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-05-10 12:44:22,731] INFO App info kafka.server for 100 unregistered (org.apache.kafka.common.utils.AppInfoParser)
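The SIGTERM arriving right after the "Unable to register the broker" timeout is consistent with the liveness probe killing the pod before registration with the controller quorum completes. If that is the cause, relaxing the broker probe timings may help; a minimal sketch of the relevant values, assuming the Bitnami chart's usual probe key layout (the exact keys should be checked against the 28.0.0 values.yaml):

```yaml
# values.yaml (sketch, key names are assumptions): give the broker more
# time before the liveness probe starts dialing port 9092
broker:
  startupProbe:
    enabled: true             # let the startup probe gate the liveness probe
    periodSeconds: 10
    failureThreshold: 30      # up to ~5 minutes to finish KRaft registration
  livenessProbe:
    enabled: true
    initialDelaySeconds: 60
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
```

With a startup probe enabled, the liveness probe does not fire until the broker has come up once, which avoids the restart loop seen in the logs above.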
Sorry for the delay in getting back to you. Did you try deploying the chart without ArgoCD? I'd like to know whether the problem lies with the chart itself, with the values you are using, or with the way you are deploying it.
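To rule ArgoCD out, the chart can be installed directly from the Bitnami OCI registry; a sketch, assuming the same values file that the ArgoCD Application points at:

```shell
# Install the chart directly with Helm, bypassing ArgoCD entirely
helm install kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --version 28.0.0 \
  --namespace kafka --create-namespace \
  -f values.yaml

# Then watch whether the broker pods pass their probes
kubectl get pods -n kafka -w
```

If the brokers come up this way, the issue is likely in how ArgoCD renders or syncs the manifests (e.g. sync waves restarting pods) rather than in the chart or values.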
Name and Version
charts.bitnami.com/bitnami 28.0.0
What architecture are you using?
amd64
What steps will reproduce the bug?
Hi everyone! I'm currently trying to deploy Kafka on a Kubernetes cluster with 3 master and 3 worker nodes, using ArgoCD for the deployment. Unfortunately, when I bootstrap Kafka in KRaft mode, the brokers fail to come up with the following errors:
Liveness probe failed: dial tcp 10.233.68.92:9092: connect: connection refused
Liveness probe failed: dial tcp 10.233.68.92:9092: connect: connection refused
If I use ZooKeeper mode, the brokers start successfully. Where might the problem be? Below is the configuration I used.
Pod's logs:
Are you using any custom parameters or values?
What is the expected behavior?
The Kafka controllers and brokers are up and running in KRaft mode.
What do you see instead?
Only the controllers come up in KRaft mode; the brokers fail.
Additional information
Helm v3.14.3
Kubernetes v1.29.3
Argo CD v2.10.7
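Since ZooKeeper mode works but KRaft does not, it is worth confirming that the brokers can actually reach the controller quorum on the controller listener. A diagnostic sketch using Kafka's own tooling (pod, service, and port names here are assumptions based on the chart's defaults and should be adjusted to the actual release):

```shell
# From a controller pod, check KRaft quorum health
# (kafka-metadata-quorum.sh ships with Kafka; --bootstrap-controller
# requires Kafka 3.7+, which chart 28.x deploys)
kubectl exec -n kafka kafka-controller-0 -- \
  kafka-metadata-quorum.sh \
  --bootstrap-controller kafka-controller-0.kafka-controller-headless:9093 \
  describe --status

# Inspect why the broker pod is being restarted; probe failures and
# kill events show up under Events
kubectl describe pod -n kafka kafka-broker-0
```

A healthy quorum with brokers still failing to register points at listener/advertised-listener configuration or network policy between broker and controller pods.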