Deadlock when used with lumberjack from multiple goroutines #1243

Open
dtdyq opened this issue Mar 7, 2023 · 3 comments

Comments

@dtdyq

dtdyq commented Mar 7, 2023

Describe the bug
goroutine 6 [semacquire, 11 minutes]:
internal/poll.runtime_Semacquire(0xc000096840?)
/root/gows/go/src/runtime/sema.go:67 +0x27
internal/poll.(*fdMutex).rwlock(0xc0000960c0, 0x0?)
/root/gows/go/src/internal/poll/fd_mutex.go:154 +0xd2
internal/poll.(*FD).writeLock(...)
/root/gows/go/src/internal/poll/fd_mutex.go:239
internal/poll.(*FD).Write(0xc0000960c0, {0xc0000c2800, 0x71, 0x400})
/root/gows/go/src/internal/poll/fd_unix.go:370 +0x72
os.(*File).write(...)
/root/gows/go/src/os/file_posix.go:48
os.(*File).Write(0xc000014018, {0xc0000c2800?, 0x71, 0x0?})
/root/gows/go/src/os/file.go:175 +0x65
go.uber.org/zap/zapcore.multiWriteSyncer.Write({0xc000088860?, 0x2, 0xd57aa0?}, {0xc0000c2800, 0x71, 0x400})
/root/go/pkg/mod/go.uber.org/zap@v1.21.0/zapcore/write_syncer.go:105 +0xae
go.uber.org/zap/zapcore.(*ioCore).Write(0xc0001b0510, {0x0, {0xc0f9947d6b131bac, 0x98a9a112d2, 0xdaffa0}, {0x0, 0x0}, {0xc0005e4f00, 0x20}, {0x1, ...}, ...}, ...)
/root/go/pkg/mod/go.uber.org/zap@v1.21.0/zapcore/core.go:90 +0xb5
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0003100c0, {0x0, 0x0, 0x0})
/root/go/pkg/mod/go.uber.org/zap@v1.21.0/zapcore/entry.go:220 +0x1d9
go.uber.org/zap.(*SugaredLogger).log(0xc000014128, 0x0, {0x9b4487?, 0x11?}, {0xc000076f68?, 0x1?, 0x1?}, {0x0, 0x0, 0x0})
/root/go/pkg/mod/go.uber.org/zap@v1.21.0/sugar.go:227 +0xee
go.uber.org/zap.(*SugaredLogger).Infof(0xc0005d8ba0?, {0x9b4487?, 0x0?}, {0xc000076f68?, 0x0?, 0x0?})
/root/go/pkg/mod/go.uber.org/zap@v1.21.0/sugar.go:138 +0x45
To Reproduce
Use zap with lumberjack from multiple goroutines; an illustrative sketch of such a setup is included below.
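
The report does not include code, so the following is only an illustrative sketch of the kind of setup described: a zap SugaredLogger backed by lumberjack, written to concurrently from many goroutines (the stack trace also shows a multiWriteSyncer, so the real configuration likely has a second sink). The file path, rotation settings, and loop counts are made up.

package main

import (
	"sync"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	// zap core writing through lumberjack for log rotation
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(&lumberjack.Logger{
			Filename: "/tmp/myapp.log", // illustrative path
			MaxSize:  10,               // megabytes
		}),
		zap.InfoLevel,
	)
	sugar := zap.New(core).Sugar()

	// log concurrently from many goroutines, as described in the report
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				sugar.Infof("goroutine %d message %d", n, j)
			}
		}(i)
	}
	wg.Wait()
	_ = sugar.Sync()
}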

@ozfive

ozfive commented Mar 10, 2023

A solution for this issue is to wrap the lumberjack logger with a mutex so that only one goroutine can write to the log file at a time. This can be done by creating a custom logger that embeds the lumberjack logger and a mutex, and using the custom logger in place of the lumberjack logger in the Zap configuration.

Example:

package main

import (
	"sync"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"gopkg.in/natefinch/lumberjack.v2"
)

// MutexLumberjack serializes writes to the embedded lumberjack.Logger.
type MutexLumberjack struct {
	sync.Mutex
	lumberjack.Logger
}

func (ml *MutexLumberjack) Write(p []byte) (n int, err error) {
	ml.Lock()
	defer ml.Unlock()
	return ml.Logger.Write(p)
}

func main() {
	ml := &MutexLumberjack{
		Logger: lumberjack.Logger{
			Filename:   "/var/log/myapp.log",
			MaxSize:    10, // megabytes
			MaxBackups: 3,
			MaxAge:     28, // days
			Compress:   false,
		},
	}

	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(ml),
		zap.InfoLevel,
	)

	logger := zap.New(core)
	defer logger.Sync() // flush any buffered log entries before exit

	logger.Info("Hello, world!")
}

Another solution to the deadlock issue when using lumberjack in a multi-goroutine environment is to use a channel to queue log entries and have a single goroutine handle writing to the file. Each goroutine sends log entries to the channel, and the writer goroutine reads from the channel and writes them to the file, so writes to the file are serialized and deadlocks are avoided.

To implement this, create a channel of type []byte with enough capacity to hold a reasonable number of log entries. When a goroutine needs to write a log entry, it builds a byte slice of the log message and sends it to the channel. The writer goroutine loops, reading from the channel and writing the messages to the file, ensuring that only one write operation happens at a time.

Example:

// create a buffered channel to hold log entries
logChan := make(chan []byte, 1000)

// start the writer goroutine; it is the only goroutine that touches the file
// (writer stands for the underlying sink, e.g. the lumberjack logger)
go func() {
    for logEntry := range logChan {
        // write the log entry to the file
        if _, err := writer.Write(logEntry); err != nil {
            // handle the error
        }
    }
}()

// in each goroutine that needs to write a log entry:
// create a byte slice of the log message
logBytes := []byte("example log message")

// send the log entry to the channel
logChan <- logBytes
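
To route zap's output through this channel, one option (a sketch, not part of the comment above) is a small io.Writer adapter whose Write copies each encoded entry onto the channel; the chanWriter name and the placeholder drain loop below are made up, and the single writer goroutine from the fragment above would do the actual file write.

package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// chanWriter is a hypothetical adapter: zap calls Write with each encoded
// log entry, and Write hands a copy of it to the channel.
type chanWriter struct {
	ch chan []byte
}

func (w chanWriter) Write(p []byte) (int, error) {
	// copy p because zap may reuse the buffer after Write returns
	entry := make([]byte, len(p))
	copy(entry, p)
	w.ch <- entry
	return len(p), nil
}

func main() {
	logChan := make(chan []byte, 1000)

	// single writer goroutine draining the channel; the file write from the
	// fragment above would go here
	go func() {
		for entry := range logChan {
			_ = entry // e.g. writer.Write(entry)
		}
	}()

	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.AddSync(chanWriter{ch: logChan}),
		zap.InfoLevel,
	)
	logger := zap.New(core)
	logger.Info("routed through the channel")
}

One trade-off of this design: Sync() no longer guarantees an entry has reached disk, and anything still queued in the channel is lost if the process exits before the writer goroutine drains it.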

@tchung1118

Could you provide a specific repro for this issue?

@de1acr0ix

As I understand it, lumberjack itself already guards its Write() method with a mutex: https://github.com/natefinch/lumberjack/blob/4cb27fcfbb0f35cb48c542c5ea80b7c1d18933d0/lumberjack.go#L136.

In the stack trace posted in the issue, I can see that multiWriteSyncer.Write() calls File.Write() directly, which isn't really related to lumberjack.
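
The second sink in that multiWriteSyncer isn't shown in the issue, so the following is only a sketch under the assumption that it is a bare *os.File such as os.Stdout. zap's own zapcore.Lock wraps a WriteSyncer in a mutex, which is the usual way to make such a sink safe for concurrent use alongside the (already internally locked) lumberjack sink, without a custom wrapper type.

package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	// lumberjack already serializes its own Write() calls internally
	fileSink := zapcore.AddSync(&lumberjack.Logger{
		Filename: "/var/log/myapp.log", // illustrative path
		MaxSize:  10,                   // megabytes
	})

	// assumption: the second sink is a bare *os.File; zapcore.Lock guards it
	consoleSink := zapcore.Lock(os.Stdout)

	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		zapcore.NewMultiWriteSyncer(fileSink, consoleSink),
		zap.InfoLevel,
	)
	logger := zap.New(core)
	defer logger.Sync()

	logger.Info("both sinks are now safe for concurrent writers")
}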
