Memory leak issue with attachments #1602
Comments
Thank you for the thorough report. I was able to verify the issue. Unfortunately, given how Nodemailer is structured internally, it is not an easy problem to fix. Nodemailer uses a series of piped streams, and the issue is that Nodemailer initiates the HTTP request and pipes it into the processing stream, but the processing stream is never informed that sending was cancelled and thus does not know to abort the already-started web request. Since no one is reading the final output stream anymore, the stream in the processing pipeline gets stuck waiting for a reader, and the buffered data chunks are never garbage collected.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Thank you! Commenting to prevent issue closure. Left unchecked, this issue will eventually crash any server that hits it with a memory leak; I will see if I can do anything to fix or mitigate it.
Strictly speaking, it's not a pure memory leak. One of my servers (a Docker container host) started randomly crashing at varying intervals under incredibly high load, which required further investigation. Curiously, I found an unusual error message in
dmesg
. Weird - I increased my TCP socket count to mitigate the issue for my users, and then went hunting. This led me to the following blog post - https://hechao.li/2022/09/30/a-tcp-timeout-investigation/ - which helped me create a bash script and a Python script to identify the container responsible for crashing the server.
(I piped the output of the above script into skmem_all.txt and then ran...)
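The original scripts weren't included here, but a hypothetical sketch of the idea (the filename and the sort heuristic are my assumptions), following the skmem approach from the blog post, would dump per-socket memory with `ss` and surface the heaviest buffers:

```shell
# Hypothetical sketch -- the original bash script was not attached.
# Dump per-socket memory stats (skmem) along with the owning processes.
ss -ntmp > skmem_all.txt 2>/dev/null || true
# Pull out the receive-buffer allocations (rb<bytes>) and list the
# largest ones; the runaway process -- and hence container -- floats
# to the top.
grep -oE 'rb[0-9]+' skmem_all.txt | sed 's/rb//' | sort -rn | head
```

Mapping the offending PID back to a container is then a matter of checking its cgroup (e.g. `/proc/<pid>/cgroup`).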
Which revealed it was one of my custom containers running my own code! It was running an Express server that periodically sends out emails through Nodemailer.
I diligently attached my debugger and captured a few heap dumps, which revealed that the reason I was running out of sockets was an endless stream of ArrayBuffers associated with Nodemailer.
Specifically, it was trying to send one (just one!) email, with an attachment (linked to via a URL in the href field of the attachments array), to an invalid email address. Every time this ran, the sendMail function produced an error (which was caught and logged so as not to disrupt the flow of the server), but the socket to the attachment URL remained open. Forever. And every time it periodically retried sending this email to the invalid address, it opened more and more sockets, until the server ran out of memory.
I created a small proof-of-concept:
index.js:
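(The original index.js was not preserved in this capture; a minimal sketch consistent with the description above might look like the following, where the SMTP host, sender, recipient, and attachment URL are all placeholders.)

```javascript
const nodemailer = require('nodemailer');

// Placeholder transport -- any reachable SMTP server reproduces it.
const transporter = nodemailer.createTransport({
  host: 'smtp.example.com',
  port: 587,
});

// Every second, send a mail with a URL-backed attachment to an address
// that will be rejected. sendMail rejects, the error is caught and
// logged, but the socket fetching the attachment URL is never closed,
// so sockets and ArrayBuffers accumulate run after run.
setInterval(() => {
  transporter
    .sendMail({
      from: 'sender@example.com',
      to: 'not-a-valid-address', // guaranteed to fail
      subject: 'leak repro',
      text: 'hello',
      attachments: [
        { filename: 'file.bin', href: 'https://example.com/large-file.bin' },
      ],
    })
    .catch((err) => console.error('sendMail failed:', err.message));
}, 1000);
```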
Then run the code with
node --inspect index.js
and attach the Chrome DevTools Node debugger. You will see a trace as follows after a few seconds:
After a few more seconds...
Note the growth in ArrayBuffer objects over that time: each time the sendMail function runs, more ArrayBuffers are allocated but never released, even though releasing them is the expected behaviour once sendMail has resulted in an error.