
MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter] #63

Closed
ahmedsamir-dev opened this issue Apr 5, 2023 · 51 comments · Fixed by #66
Labels: bug (Something isn't working)

@ahmedsamir-dev

Using the connection's emitter for undici's requests causes an EventEmitter memory-leak warning: for each request an abort handler is attached to the emitter and never removed, and since the default limit is 10 listeners per event, the process prints a warning like the one below.

example:

MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
    at _addListener (node:events:587:17)
    at EventEmitter.addListener (node:events:605:10)
    at addSignal (/app/server-transpiled/node_modules/undici/lib/api/abort-signal.js:35:19)
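
For reference, a minimal reproduction sketch (the index name and query are hypothetical): any batch of more than 10 concurrent requests over the same connection triggers the warning.

const { Client } = require('@elastic/elasticsearch');

const client = new Client({ node: 'http://localhost:9200' });

async function main() {
  // Fire 20 concurrent searches; each one registers an abort listener
  // on the connection's emitter, exceeding the default limit of 10.
  await Promise.all(
    Array.from({ length: 20 }, () =>
      client.search({ index: 'my-index', query: { match_all: {} } })
    )
  );
}

main().catch(console.error);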

There are already unresolved issues related to the same problem:

elastic/elasticsearch-js#1741
elastic/elasticsearch-js#1733
elastic/elasticsearch-js#1716

There is a suggested fix for it, but it was closed unmerged:
#55

@ahmedsamir-dev
Author

Hi Tomas, we hope this gets your team's attention 🙏
@delvedor

@delvedor
Member

delvedor commented Apr 5, 2023

Heya, I'm no longer actively working on this project; @JoshMock is the new maintainer.

@JoshMock JoshMock added the bug Something isn't working label Apr 6, 2023
@JoshMock JoshMock self-assigned this Apr 6, 2023
@madispuk

Hi Josh, thanks for working on it!

@arciisine

Just spent a moment digging into this, and it looks like it's tied to the EventEmitter in the Undici connection class. I verified that by introducing this fix (9012ddc), the errors all disappear.

@JoshMock
Member

Thanks for taking a look @arciisine. 🖤 I'm just onboarding as the new maintainer of this library, so I'll dig into this soon. Your solution is straightforward; I'll play with it to make sure it has no undesired side effects.

@ftejeria

Hey guys, I removed the warning by setting the signal field in TransportRequestOptions to a new AbortController().signal. Here's an example:

client.search(
  { yourQuery },
  { signal: new AbortController().signal }
)

Is there any problem with this workaround for removing the warning?

@JoshMock
Member

As a note while I'm working to prioritize a possible fix for this, former maintainer @delvedor mentioned on the Kibana repo that this is just a warning about a possible memory leak for behavior we know is, in fact, just a large number of concurrent requests in progress. So, it's mostly just noise, which you can configure your project to ignore based on the concurrent request limits you'd expect to see in your app.

Hello! If you are reusing the abort signal multiple times, the client is cleaning up listeners as soon as they are no longer needed, but if you are sending more than 10 concurrent requests with the same abort signal, then you will hit that "issue" again. I'm using quotes around issue because that's just a warning. Node.js has no way to know if having dozens of listeners is ok, so it's proactive and lets you know because sometimes it can mean a memory leak.

If you expect to share the same abort signal among multiple concurrent requests, you can change the max listeners limit with:
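
One way to do that (a minimal sketch, not necessarily the exact snippet from the original comment; client and queries are assumed to already exist, the code runs inside an async function, and 50 is an arbitrary limit chosen to match the expected concurrency):

const { setMaxListeners } = require('node:events');

const controller = new AbortController();

// Raise the listener limit on this one signal; the default warning threshold is 10.
setMaxListeners(50, controller.signal);

// Share the same signal across many concurrent requests.
const results = await Promise.all(
  queries.map((q) => client.search(q, { signal: controller.signal }))
);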

@pocesar

pocesar commented Apr 27, 2023

@JoshMock we are seeing a steady increase in memory until the containers die out of memory, over the course of 8-10 hours, happens literally every day. I don't think it's just noise, our baseline goes from 64MB of memory to 8GB...

@JoshMock
Member

JoshMock commented May 1, 2023

@JoshMock we are seeing a steady increase in memory until the containers die out of memory, over the course of 8-10 hours, happens literally every day. I don't think it's just noise, our baseline goes from 64MB of memory to 8GB...

That's good to know. And you've narrowed it down to memory being consumed by Elasticsearch client activity? Would love any traces, usage examples, versions used (Elasticsearch and client) or other relevant details.

@JoshMock
Member

JoshMock commented Jun 2, 2023

I ran a simple test where I indexed 100 documents every 10 seconds for 8 hours, and then another where I ran 100 searches every 10 seconds for 8 hours, and was not able to reproduce a memory leak. The results show typical Node.js garbage collection behavior. (charts below)

I'd still like to see a usage example where a memory leak can be seen and traced back to the client.

As has been mentioned before, the Possible EventEmitter memory leak detected warning is just a warning, and increasing max event listeners to reflect how many concurrent Elasticsearch requests your application expects is an appropriate way to prevent that warning message.

[chart: memory usage, indexing 100 docs every 10 seconds for 8 hours]

[chart: memory usage, running 100 searches every 10 seconds for 8 hours]

@JoshMock
Member

JoshMock commented Jun 2, 2023

Running another 8-hour test now that indexes 10k documents every second to push the client a bit harder. Also running another 8-hour test that is identical, but with the EventEmitter fix proposed in #55. Will report back next week.

@JoshMock
Member

JoshMock commented Jun 5, 2023

A more intense test (index 10k docs/second for 8 hours) yielded more interesting results! See charts below. Unfortunately, the proposed fix in #55 did not have any notable positive impact, other than not getting the MaxListenersExceededWarning. Will need to do a deeper analysis of memory usage.

[chart: index 10000 docs per second for 8 hours]

[chart: index 10000 docs per second for 8 hours, with transport fix]

@JoshMock
Member

JoshMock commented Jun 6, 2023

Ran another long-running test, this time with a memory allocation timeline profiler attached, and identified a potential leak in Undici that may be fixed in a version newer than 5.5.1, which this library currently uses. Rerunning the test with Undici updated to the latest (5.22.1) to see if the results differ.

JoshMock added a commit that referenced this issue Jun 7, 2023
The upgrade fixes some bugs that were the cause of a slow memory
leak in the Elasticsearch JS client. See #63.
@JoshMock
Member

JoshMock commented Jun 7, 2023

Upgrading Undici appears to solve the issue. Working on rolling out a change for the next patch release of the Elasticsearch client.

[chart: memory usage with Undici upgraded to 5.22.1]

@phil-nelson-bt

phil-nelson-bt commented Jun 26, 2023

I just tried this upgrade and there is no change to the warning message. This is with ES client 8.8.1, transport version 8.3.2, and undici version 5.22.1.

I've also been unable to find any place where setting the global defaultMaxListeners, as described in the Node docs, has any effect. The code doesn't fail, but it doesn't do anything either.

Did this update aim to remove the warning, or did you focus more on the memory consumption discussed later in the thread?

(node:33355) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
    at _addListener (node:events:587:17)
    at EventEmitter.addListener (node:events:605:10)
    at addSignal (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/abort-signal.js:35:19)
    at new RequestHandler (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/api-request.js:68:5)
    at Pool.request (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/api-request.js:170:25)
    at /Users/xxxxxxxx/Projects/xxxxxxxxxx/node_modules/undici/lib/api/api-request.js:163:15
    at new Promise (<anonymous>)
    at Pool.request (/Users/philnelson/Projects/productivitysuiteservice/node_modules/undici/lib/api/api-request.js:162:12)
    at Connection.request (/Users/xxxxxxx/Projects/xxxxxxxxxxxnode_modules/@elastic/transport/lib/connection/UndiciConnection.js:143:41)

@JoshMock
Member

JoshMock commented Jun 26, 2023

@phil-nelson-bt In my tests, upgrading Undici also had the effect of eliminating the max listeners warning. I'll reopen this and run some more tests soon to verify that. In the meantime, updating defaultMaxListeners will still work on a case-by-case basis.
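
A minimal sketch of that approach (50 is an arbitrary value; this raises the limit for EventEmitter instances that have not set their own, but note it does not apply to AbortSignal/EventTarget listeners, which use events.setMaxListeners instead):

const { EventEmitter } = require('node:events');

// Raise the default limit for all EventEmitter instances that have not
// called emitter.setMaxListeners() themselves.
EventEmitter.defaultMaxListeners = 50;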

@JoshMock JoshMock reopened this Jun 26, 2023
@JoshMock
Member

Thanks for your patience, everyone. I was out on leave and am just getting back to work.

@breno-alves put together a PR that solves for the warnings being logged. Thanks for that! I'll test and merge soon.

However, that PR does not address any underlying memory leak issues that might be related to abort listeners. I'd like to dig into that more before fully closing this. If anyone has any isolated code that can reproduce a leak, I'd love to see it.

@JoshMock
Member

It would also be helpful to know what versions of Node.js you're experiencing the issue on. According to a couple comments, this is less likely to happen on Node.js versions ^18.16.0 and ^19.8.0 (and, presumably, ^20.0.0 as well).

@pChausseC

Same issue on Node v18.19.1, @elastic/elasticsearch ^8.12.2.

@cipher450

cipher450 commented Mar 10, 2024

Same issue here
Node: v18.18.0
@elastic/elasticsearch: ^8.11.0
Memory keeps increasing slowly over time, from 414 MB to 918.32 MB in 7 days.

@JoshMock
Member

Anyone here who is on Node 18+ and experiencing memory leaks: please try out @elastic/transport 8.5.0 (using an override). We upgraded Undici from 5.22.1 to 6.7.0, which includes a MaxListenersExceededWarning fix via nodejs/undici#2823. If this is an underlying cause of memory leaks, your situation may be improved. At the very least, it looks like it will help resolve all the warning messages.
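
For anyone unfamiliar with overrides, a minimal package.json sketch (npm 8.3+; Yarn projects would use a resolutions field instead; the version ranges here are illustrative):

{
  "dependencies": {
    "@elastic/elasticsearch": "^8.13.1"
  },
  "overrides": {
    "@elastic/transport": "^8.5.0"
  }
}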

@yukha-dw

Upgrading to 8.5.0 or 8.5.1 seems to fix the issue. I did a 10-minute run and don't see MaxListenersExceededWarning anymore; it usually shows up right away at 1000+ RPS.

Node: v18.16.1
@elastic/elasticsearch: 8.13.1
@elastic/transport: 8.5.1

@daveyarwood

With @elastic/transport 8.5.1, I'm still seeing the warnings, and my application is having memory issues. See nodejs/undici#3131

@JoshMock
Member

Thanks @daveyarwood. I'll keep investigating soon.

@alexey-sh

Any chance of this getting fixed within two years?

@JoshMock
Member

I've continued to address different aspects of this issue over the past year as I've had time. Unfortunately there seem to be a few possible things going on rather than one single root cause.

Also, the clearer the code examples I can get that consistently reproduce a memory leak traceable to EventEmitter or abort listeners, the easier it will be to track down. I haven't received any yet, and the test scenarios I've come up with myself have been solved by fixes that have already been merged.

@sibelius

When will a new release come out?

@mmcDevops

mmcDevops commented Apr 23, 2024

@JoshMock I've tried every possible combination of @elastic/transport and undici version upgrades, but haven't found a fix.

@mmcDevops

@JoshMock
I have tried replicating this in a local environment to reproduce the above-mentioned error. I wrote a simple script that uses @elastic/elasticsearch 8.13.1 to create indices and then fetch data from those same indices. When I raise the concurrency above 10, I get the warning. I had also tried upgrading the undici package from 5.5.1 to 5.22.1, but that didn't fix it.

Script file:

const { Client } = require('@elastic/elasticsearch');

const client = new Client({
  node: 'https://localhost:9200',
  auth: {
    username: '*****', // redacted
    password: '*****', // redacted
  },
  tls: {
    rejectUnauthorized: false,
  },
});
async function createIndex(indexName) {
   try {
       await client.indices.create({
           index: indexName
       });
       console.log(`Index '${indexName}' created successfully.`);
   } catch (error) {
       console.error(`Error creating index '${indexName}':`, error);
   }
}
async function getIndexInfo(indexName) {
   try {
       const response = await client.cat.indices({ index: indexName });
        console.log(`Index '${indexName}' info:`, response);
   } catch (error) {
       console.error(`Error retrieving info for index '${indexName}':`, error);
   }
}
async function main() {
   const indexTasks = [];
   for (let i = 0; i < 100; i++) {
       const indexName = `index_${i}`;
       indexTasks.push(createIndex(indexName));
   }
   await Promise.all(indexTasks);
   const retrievalTasks = [];
   for (let i = 0; i < 100; i++) {
       const indexName = `index_${i}`;
       retrievalTasks.push(getIndexInfo(indexName));
   }
   await Promise.all(retrievalTasks);
   console.log('All tasks completed.');
}
main().catch(console.error);

Output:

kunalkumargiri@Kunals-MacBook-Air mmc-server % npm run dev

> mmc-server@1.0.0 dev
> nodemon --trace-warnings --max-http-header-size=16384 --max-old-space-size=4096 server/index.js

[nodemon] 3.1.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,cjs,json
[nodemon] starting `node --trace-warnings --max-http-header-size=16384 --max-old-space-size=4096 server/index.js`
(node:74817) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 abort listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
    at _addListener (node:events:587:17)
    at EventEmitter.addListener (node:events:605:10)
    at addAbortListener (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/core/util.js:449:10)
    at addSignal (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/abort-signal.js:33:3)
    at new RequestHandler (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:68:5)
    at Pool.request (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:169:25)
    at /Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:162:15
    at new Promise (<anonymous>)
    at Pool.request (/Users/kunalkumargiri/Desktop/mmc/mmc-server/node_modules/@elastic/elasticsearch/node_modules/undici/lib/api/api-request.js:161:12)
Index 'index_2' created successfully.
Index 'index_7' created successfully.
Index 'index_0' created successfully.
Index 'index_11' created successfully.
Index 'index_1' created successfully.
Index 'index_3' created successfully.
Index 'index_6' created successfully.
Index 'index_4' created successfully.
Index 'index_5' created successfully.
Index 'index_10' created successfully.
Index 'index_8' created successfully.
Index 'index_9' created successfully.


@sibelius

We are getting a lot of request TimeoutErrors:

/usr/src/app/node_modules/@elastic/transport/lib/Transport.js in SniffingTransport.request at line 540:31

@mmcDevops

@JoshMock Is there any way to suppress these warnings until a solution is available?

@cipher450

@JoshMock Is there any way to suppress these warnings until a solution is available?

#63 (comment)

JoshMock added a commit that referenced this issue May 1, 2024
A potential fix for
#63, largely
inspired by a community member's PR that was never merged:
#55

According to an Undici core committer in this comment
elastic/elasticsearch-js#1716 (comment)
the issue that triggers the MaxListenersExceededWarning, and possibly a
memory leak in some cases, is caused by attaching an EventEmitter to
each request by default when a per-request timeout is set, rather than
attaching an AbortSignal.

My assumption is that an EventEmitter was used because AbortSignal and
AbortController were not added to Node.js until v14.17.0, so we couldn't
guarantee v14 users would have it. I'm not certain why using
EventEmitters makes a difference memory-wise, but it does get rid of the
MaxListenersExceededWarning.
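
To illustrate the difference described above (a rough sketch only, not the transport's actual code; undici's request() accepts an AbortSignal via its signal option):

const { request } = require('undici');

// Per-request timeout driven by an AbortController: the abort listener that
// undici attaches is tied to this request's own signal, so nothing
// accumulates on a long-lived shared EventEmitter.
async function requestWithTimeout(url, timeoutMs) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await request(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
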
@JoshMock
Member

JoshMock commented May 1, 2024

I wasn't able to reproduce a memory leak using the code snippet from @mmcDevops. When observing memory while running it, heap usage does spike pretty high as connections are opened, but everything is properly garbage-collected once Promise.all(indexTasks) is resolved:

[chart: heap usage while running the reproduction script]

However, I rediscovered a PR that fell by the wayside that attempted to solve this problem, addressing @ronag's primary concern. I've opened #96 to reproduce that work, with a few minor tweaks. This does get rid of the MaxListenersExceededWarning, which is great! But it doesn't have any significant impact on memory usage:

[chart: memory usage with the #96 fix applied]

In any case, if anyone would like to pull down the repo and test on that PR's code with their use cases, that'd help a ton! I'll try to reproduce a leak a few more times, and look to merge the change closer to the release of Elasticsearch 8.14 in mid-May.

JoshMock added a commit that referenced this issue May 21, 2024
@JoshMock
Member

#96 has been merged and deployed in v8.5.2. I'll keep this open for a couple weeks, or until someone can upgrade their transport and verify that it resolves the issue. Whichever comes first. 🖤

@KerliK

KerliK commented May 27, 2024

#96 has been merged and deployed in v8.5.2. I'll keep this open for a couple weeks, or until someone can upgrade their transport and verify that it resolves the issue. Whichever comes first. 🖤

For me, the upgrade resolved the issue.

@daveyarwood

I can also confirm that this fixed my issue. Thanks @JoshMock! 🙌

@JoshMock JoshMock closed this as completed Jun 3, 2024
@alexey-sh

We all have to say a big thank you to @JoshMock.
Long story short: the issue is fixed.
