This can be due to the fact that ClickHouse won't spawn more threads than there are logical cores, but there is also a thread limit in ClickHouse enforced by several other settings, which all default to 16.
The most common way to bypass the limitation seems to be setting kafka_disable_num_consumers_limit to 1.
It would be worth documenting this and providing a way to turn it on, as the default can be quite limiting.
I think the proper way to do this is to mount an extra configuration file in the Docker Compose setup: ../config/clickhouse/kafka_tweaks.xml:/etc/clickhouse-server/config.d/kafka_tweaks.xml:ro
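For reference, a minimal kafka_tweaks.xml might look like the sketch below. The setting names match recent ClickHouse releases, but the pool-size value of 32 is an illustrative assumption, not a recommendation; size it to the number of consumers you actually want:

```xml
<!-- kafka_tweaks.xml: a sketch, mounted into /etc/clickhouse-server/config.d/ -->
<clickhouse>
    <!-- Lift the hard cap of 16 Kafka consumers per table -->
    <kafka_disable_num_consumers_limit>1</kafka_disable_num_consumers_limit>
    <!-- Background pool that runs message broker (Kafka) consumers; defaults to 16.
         Raise it above the consumer count you plan to use (32 here is illustrative). -->
    <background_message_broker_schedule_pool_size>32</background_message_broker_schedule_pool_size>
</clickhouse>
```

Files in config.d/ are merged into the main server configuration, so this avoids editing config.xml directly.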
As per the title of the issue, by default ClickHouse limits you to 16 Kafka consumers per table, which isn't enough for heavy deployments. Also, on a VM it caps the consumer count at half the vCPUs. kafka_disable_num_consumers_limit turns that cap off and allows going past 16 in general, so long as the other settings that default to 16 are raised above 16 as well.
I haven't had the chance to fully test this on a heavily loaded Akvorado instance, but the main bottleneck I've seen has been the inability to scale out the consumers, which this should resolve.
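To illustrate where the per-table consumer count comes in: it is set via kafka_num_consumers on the Kafka engine table. The sketch below uses made-up table, topic, and group names (not Akvorado's actual schema, which Akvorado manages itself):

```sql
-- Sketch only: illustrative names, not Akvorado's real schema.
CREATE TABLE example_queue (raw String)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'kafka:9092',
         kafka_topic_list = 'example-topic',
         kafka_group_name = 'clickhouse-example',
         kafka_format = 'RawBLOB',
         -- With kafka_disable_num_consumers_limit enabled (and the background
         -- pool sized accordingly), this can exceed the default cap of 16.
         kafka_num_consumers = 32;
```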