All threads blocked #786
Comments
I also encountered this issue; it goes away after restarting. |
Well, restarting does not resolve this problem :( |
@koszta5 Under which JDK version is this occurring? |
openjdk version "11.0.19" 2023-04-18 |
Can you take a look at the problem in this link? https://jira.qos.ch/browse/LOGBACK-1406 It's very similar to this issue; I've also encountered this problem. Only one thread (Thread 507 below) is BLOCKED:
Thread 507: (state = BLOCKED)
Thread 402: (state = BLOCKED)
|
Sorry, but no. If you examine, e.g., my very recent thread dump here, you will find that all my threads are simply WAITING for a lock that is now lost (no other thread is holding it). Lock 0x0000000701dc5ff0 is completely lost to the JVM :( causing the whole app to get stuck. |
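To make the symptom concrete, here is a minimal, hypothetical demo (not logback code) showing how a thread that calls `lock()` on a `ReentrantLock` held by another thread ends up parked in WAITING state, which is the same "parking to wait for ... ReentrantLock$NonfairSync" pattern seen in the dump:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical demo: a thread blocking on a ReentrantLock that its owner
// never releases parks in WAITING state, matching the attached thread dump.
public class LostLockDemo {

    // Starts a holder thread that takes the lock and never releases it,
    // then a waiter thread that blocks trying to acquire it.
    // Returns the waiter's state once it has parked.
    static Thread.State demo() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(false); // non-fair, as in the dump
        CountDownLatch held = new CountDownLatch(1);

        Thread holder = new Thread(() -> {
            lock.lock();       // acquired and never released ("lost" from the waiter's view)
            held.countDown();
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) {
            }
        });
        holder.setDaemon(true);
        holder.start();
        held.await();

        Thread waiter = new Thread(lock::lock); // parks forever, like the http-nio threads
        waiter.setDaemon(true);
        waiter.start();

        Thread.sleep(200); // give the waiter time to reach Unsafe.park
        return waiter.getState();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // WAITING
    }
}
```

In a thread dump of this demo, the waiter shows up exactly like the production threads: `WAITING (parking)` on a `ReentrantLock$NonfairSync`, with no deadlock reported because the owner is merely sleeping, not waiting on anything.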
Logback version: 1.2.11
Spring Boot version: 2.7.4
Apparently, in some cases all locks are lost and logback is just "stuck"; only a restart helps. A thread dump is attached.
StuckLogback.threads.txt
All threads remain in this state WAITING for lock endlessly
"http-nio-8184-exec-1" #139 daemon prio=5 os_prio=0 cpu=53544.34ms elapsed=232090.85s tid=0x00002af5d1b77800 nid=0x15fe7 waiting on condition [0x00002af676207000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.19/Native Method)
	- parking to wait for <0x00000000e0c37af0> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(java.base@11.0.19/LockSupport.java:194)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11.0.19/AbstractQueuedSynchronizer.java:885)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.base@11.0.19/AbstractQueuedSynchronizer.java:917)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@11.0.19/AbstractQueuedSynchronizer.java:1240)
	at java.util.concurrent.locks.ReentrantLock.lock(java.base@11.0.19/ReentrantLock.java:267)
	at ch.qos.logback.core.OutputStreamAppender.writeBytes(OutputStreamAppender.java:197)
	at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:231)
	at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102)
	at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
	at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
	at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
	at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
	at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
	at ch.qos.logback.classic.Logger.filterAndLog_1(Logger.java:398)
	at ch.qos.logback.classic.Logger.debug(Logger.java:486)
	at com.ourcomapany.integration.distribution.v2.delivery.TokenService.addToCache(TokenService.java:233)
This bug is very closely related to the issue originally reported here: https://jira.qos.ch/projects/LOGBACK/issues/LOGBACK-1406?filter=allopenissues (same thing, really). It is also similar to #767.