Consistent Memory Increase in Webflux Application #3154
Comments
Please clarify: do you create a new WebClient for every request? If so, please check this: https://stackoverflow.com/questions/77715508/httpclient-recomendations
@aspOEDev The mentioned versions are quite old; please upgrade to the latest supported versions.
@violetagg thanks for the suggestions.
Yes, I am creating a new WebClient and builder instance per request. In our initial iteration we used a common autowired builder instance, which resulted in API and request content getting mixed up and the wrong calls being fired, so we went with the safest approach of creating a new instance per request. I understand we can cache a client instance per host and reduce the memory footprint to some extent. I will also upgrade the versions and validate. Let me give it a try and get back to you.
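The per-host caching mentioned above can be sketched with a `ConcurrentHashMap`. This is a library-agnostic illustration, not the project's actual code: the `ClientCache` class and its factory function are hypothetical, and in the WebFlux case the factory would be something like `url -> WebClient.builder().baseUrl(url).build()`, relying on the fact that a *built* WebClient is safe to share across threads.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical per-host client cache: each base URL gets exactly one client,
// built lazily on first use and reused for every later request to that host.
public class ClientCache<C> {
    private final Map<String, C> clients = new ConcurrentHashMap<>();
    private final Function<String, C> factory;

    public ClientCache(Function<String, C> factory) {
        this.factory = factory;
    }

    public C forBaseUrl(String baseUrl) {
        // computeIfAbsent builds the client at most once per host
        return clients.computeIfAbsent(baseUrl, factory);
    }
}
```

With one client per downstream host instead of one per request, the connection pool and its buffers are created once rather than on every call, which is where the memory-footprint reduction comes from.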
If you would like us to look at this issue, please provide the requested information. If it is not provided within the next 7 days, this issue will be closed.
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open. |
I am relatively new to the Reactor framework. I created a new BFF-layer service for our application that integrates with 7 different downstream systems using WebFlux, but we are observing a gradual increase in our pods' memory consumption. When there are timeouts or downstream failures, memory starts spiking and does not come back to normal until the pod is restarted.
Below are the versions we have used:
Below is how I have initialized our WebClient in a generic client service.
Initially I was using the autowired WebClient.Builder instance to initialize the client, but as load increased I observed calls from the same downstream client going to the wrong APIs, resulting in mixed-up requests. So I changed the approach to call WebClient.builder() to create a new builder instance every time, as suggested on some blogs, and that solved the wrong-call issue. After heap dump analysis we also reduced logging to lower memory consumption, but that has only made the service take longer to crash.
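The mixed-up calls described above are consistent with sharing one *mutable* builder between concurrent callers: any mutation is visible to every holder of the same builder instance. The sketch below is a library-agnostic illustration of that failure mode (the `UrlClientBuilder` class is hypothetical, not Spring's API); the object produced by `build()` stands in for an immutable client, which is why sharing built clients is safe while sharing the builder is not.

```java
// Hypothetical mutable builder, standing in for a shared WebClient.Builder.
class UrlClientBuilder {
    private String baseUrl;

    UrlClientBuilder baseUrl(String url) {
        this.baseUrl = url;   // mutates state shared by all callers
        return this;
    }

    // Stands in for an immutable client snapshotting the builder's state.
    String build() {
        return baseUrl;
    }
}
```

Usage: if caller A sets `baseUrl("https://service-a.example")`, caller B interleaves with `baseUrl("https://service-b.example")`, and only then does A call `build()`, A's client targets service B, which matches the wrong-API symptom. Building one immutable client per downstream at startup avoids both the mixing and the per-request allocation.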
This is how the memory trend looks during service startup; the curve then becomes relatively flat, but there is always a gradual increase until the service crashes:
We are running this with Tomcat in Kubernetes.
Below are the current heap dump dominator tree screenshots
The primary suspect as per heap dump analysis is the following class:
Stacktrace of issue causing thread in heap dump
I tried reproducing this on a local setup but have not been able to. Any suggestions or guidance on where I can improve the application's performance would be really helpful.