Replies: 3 comments 4 replies
-
Hello, I also see the described problem (Keycloak 20.0.1). Steps to reproduce: load-test the token endpoint with the same refresh token. After a while, latency starts to increase and warning messages appear:
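For anyone trying to reproduce this: a minimal sketch of such a load test, assuming a standard OIDC `refresh_token` grant against the token endpoint. Quarkus-based distributions serve it under `/realms/...`; the legacy WildFly distribution prefixes `/auth`. The base URL, realm, client, and token values below are placeholders, and error handling is omitted.

```python
# Hedged sketch: hammer Keycloak's token endpoint with the same refresh
# token and print per-request latency. All concrete values are placeholders.
import time
import urllib.parse
import urllib.request


def build_refresh_request(base_url, realm, client_id, refresh_token):
    """Build the URL and form body for an OIDC refresh_token grant."""
    url = f"{base_url}/realms/{realm}/protocol/openid-connect/token"
    data = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "refresh_token": refresh_token,
    }).encode()
    return url, data


def hammer(base_url, realm, client_id, refresh_token, iterations=1000):
    """Refresh the same token repeatedly and report each request's latency."""
    url, data = build_refresh_request(base_url, realm, client_id, refresh_token)
    for i in range(iterations):
        start = time.monotonic()
        with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
            resp.read()
        print(f"request {i}: {time.monotonic() - start:.3f}s")
```

With a real server you would call `hammer("https://keycloak.example.com", "myrealm", "my-client", "<token>")` and watch the printed latencies grow as the warnings start appearing.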
-
Is there any solution to this error? I'm facing the same issue with Keycloak 21.1.1. My test logs in and refreshes the token to check the value registered in the cache (I'm using an external cache so I can inspect it), and I see that warning log every time.
-
Hello,
We are running Keycloak 16.1.0 on JBoss WildFly, in HA mode. We also have some extensions with custom business logic (mostly custom required user actions and some REST APIs).
Since the end of December, the instances have begun to freeze after maybe 2-3 days of uptime: requests are not served for anywhere from 2 s to 10 s, sometimes more. Eventually the pods end up being killed by Kubernetes, causing a service interruption.
The issue is always there; it just starts quietly, with a small number of freezes and warning logs once in a while, until it eventually becomes too slow for the liveness probe, at which point Kubernetes kills the pods.
These are the kinds of warnings in the logs every time it happens:
Sometimes, errors with the 'clientSessions' cache are mixed in too:
I don't know whether this is relevant to the issue, but when it happens I trace the sessions using the REST API, and I always end up with the same 3 users, even after a restart. These users have existed for a long time, though, and don't look unusual at first glance.
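In case it helps anyone else trace sessions the same way, a minimal sketch of that lookup via the Keycloak Admin REST API, assuming you already have an admin bearer token. The base URL, realm, and user ID are placeholders; on the legacy WildFly distribution (like 16.1.0 here) the API sits under the `/auth` prefix.

```python
# Hedged sketch: list a user's active sessions through the Admin REST API.
# BASE_URL, REALM, USER_ID, and the admin token are placeholders.
import json
import urllib.request


def user_sessions_url(base_url, realm, user_id):
    """Admin endpoint listing a user's active sessions."""
    return f"{base_url}/admin/realms/{realm}/users/{user_id}/sessions"


def fetch_user_sessions(base_url, realm, user_id, admin_token):
    """GET the session list, authenticating with a bearer admin token."""
    req = urllib.request.Request(
        user_sessions_url(base_url, realm, user_id),
        headers={"Authorization": f"Bearer {admin_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch_user_sessions("https://keycloak.example.com/auth", "myrealm", "<user-id>", "<token>")` returns the session representations for that user, which is how I keep ending up at the same 3 users.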
So my questions would be:
Thank you.