
Logs in the queue that were not sent to Kafka will be lost #112

Open
ppyong opened this issue Jan 11, 2022 · 0 comments

Comments


ppyong commented Jan 11, 2022

Hi, I'm using the Kafka appender.
I have a question about an issue I ran into while using it.

Looking at the KafkaAppender currently in use, the following code is executed when the appender is stopped:

@Override
public void stop() {
    super.stop();
    if (lazyProducer != null && lazyProducer.isInitialized()) {
        try {
            lazyProducer.get().close();
        } catch (KafkaException e) {
            this.addWarn("Failed to shut down kafka producer: " + e.getMessage(), e);
        }
        lazyProducer = null;
    }
}

Reading the code above, it looks as if the producer should block until everything has been transmitted to Kafka, but records still sitting in the producer's queue, not yet transmitted, can be lost, right?
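For reference, here is a minimal sketch of what a stop() that flushes explicitly before closing might look like, assuming the loss comes from records still buffered inside the KafkaProducer. flush() and the no-timeout close() are standard kafka-clients calls that block until buffered sends complete or fail; this is only an illustration, not the library's actual code.

@Override
public void stop() {
    super.stop();
    if (lazyProducer != null && lazyProducer.isInitialized()) {
        try {
            // flush() blocks until every record in the producer's internal
            // buffer has been sent or has failed; close() then waits for any
            // remaining in-flight requests and releases resources.
            lazyProducer.get().flush();
            lazyProducer.get().close();
        } catch (KafkaException e) {
            this.addWarn("Failed to shut down kafka producer: " + e.getMessage(), e);
        }
        lazyProducer = null;
    }
}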

Currently, several logs are randomly lost when the application terminates. The loss does not occur if the application is terminated after a short delay via Thread.sleep().
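As a point of comparison, the Thread.sleep() workaround could also be replaced by stopping the logback context explicitly before the JVM exits, which runs stop() on every configured appender. This is a hedged sketch using the standard slf4j/logback API; ShutdownExample is a hypothetical class name used only for illustration.

import ch.qos.logback.classic.LoggerContext;
import org.slf4j.LoggerFactory;

public class ShutdownExample { // hypothetical name, for illustration only
    public static void main(String[] args) {
        // ... application work and logging ...

        // Stopping the LoggerContext runs stop() on every configured
        // appender, giving the Kafka producer a chance to deliver any
        // buffered records before the JVM exits.
        ((LoggerContext) LoggerFactory.getILoggerFactory()).stop();
    }
}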

The version currently in use is logback-kafka-appender-0.1.0.jar, but the newer versions do not appear to have changed this code much.

How can I solve this problem? Can the appender be made to wait until all the logs accumulated in the queue have been consumed?
