This repository has been archived by the owner on May 27, 2022. It is now read-only.

Endless Kafka stacktraces when bad producer config #60

Open
gquintana opened this issue Jan 28, 2018 · 2 comments

Comments

@gquintana (Contributor)

When the producer config is wrong, createProducer raises an exception.
Since this method is called by the LazyInitializer when a log event arrives, every log event tries to create a producer, fails, and logs a stack trace.
Prior to 0.2-RC1, the producer config check may have mitigated this issue.

Some solutions I can see:

  • Create the producer when the KafkaAppender starts (eager initialization); this would give fail-fast behaviour but may slow down startup.
  • Add circuit-breaker behaviour to the LazyInitializer and allow only one producer-creation attempt.
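The second option could be sketched roughly as below. This is a hypothetical, simplified stand-in (the class name GuardedProducerFactory, the Object placeholder for KafkaProducer, and the failure message are all made up for illustration); the idea is a one-shot circuit breaker that records the first creation failure and skips all later attempts:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: guard producer creation so a bad config is
// reported once instead of on every log event.
class GuardedProducerFactory {
    final AtomicBoolean creationFailed = new AtomicBoolean(false);
    private volatile Object producer; // stands in for KafkaProducer

    Object getOrCreate() {
        if (creationFailed.get()) {
            return null; // circuit open: never retry creation
        }
        if (producer == null) {
            synchronized (this) {
                if (producer == null && !creationFailed.get()) {
                    try {
                        producer = createProducer();
                    } catch (RuntimeException e) {
                        // Open the circuit; log the stack trace once here.
                        creationFailed.set(true);
                    }
                }
            }
        }
        return producer;
    }

    // Stand-in for the real createProducer; a bad config throws here.
    Object createProducer() {
        throw new RuntimeException("bad producer config");
    }
}
```

Events arriving after the first failure would then fall through to a fallback appender instead of triggering a fresh stack trace.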
@danielwegener (Owner)

Eager initialization: I am not sure if I remember correctly, but eager initialization happens either really early (on the logger-creation thread, which may be the classloader thread that initializes private static Logger LOG fields) OR on the first logging call (which again might be a user request thread).

But I agree, the config check should fail as soon as possible and only once. To fix this particular issue I'd however prefer the second option until we find a good abstraction for the producer lifecycle issues (see my comment on your PR).

@gsreddy99

I also see endless stack traces with a bad producer config. Is there a way to restrict the number of retries?
I tried the config below, but it had no effect on the endless stack traces:
reconnect.backoff.ms=4000
request.timeout.ms=4000
retry.backoff.ms=1000
I did add a fallback appender.
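For reference, the fallback appender in this project is wired by referencing another appender from inside the KafkaAppender configuration. A minimal sketch (the appender and topic names here are assumptions, not from this thread):

```xml
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <encoder>
    <pattern>%d %-5level %logger{36} - %msg%n</pattern>
  </encoder>
  <topic>logs</topic>
  <!-- events that cannot be delivered to Kafka go to this appender -->
  <appender-ref ref="STDOUT" />
</appender>
```

Note that the retry settings above (reconnect.backoff.ms, retry.backoff.ms, etc.) govern an already-created producer's delivery behaviour; they do not limit the repeated producer-creation attempts described in this issue, which is why they had no effect.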
