Memory leaks in Jest when running tests serially with nApi enabled #8989
Comments
Thanks for the bug report @driimus - we are looking into it. Just to make sure I understood correctly: in your understanding, our (Node-API library) logging implementation is leaking memory?

By the way: is there a feature request issue for this, or for the overall "Limitations" section? We really value feedback from the community and active users like you, so I would be very interested in having this spelled out a bit more. You are probably not the only one with this problem, so this might very well be interesting for us to implement or improve support for.
I will come back to this issue tomorrow, but some quick observations that I'd like to get clear before we move forward: https://github.com/driimus/napi-jest/blob/master/tests/logged.test.ts#L14 spawns a new client for every test. This means we now have N channels that stream logs to the same event loop: Rust underneath, with the fastest logger on earth spamming that one Node event loop with logs. Of course the Node loop is a bottleneck and can't process them fast enough, so we keep them in memory on the Rust side. Could you try the example reusing the same client in every test? What I also think I can try tomorrow is reducing the channel size for the logger callback, meaning that if the Node process doesn't process logs fast enough, we either block that thread or just throw the log line away. Sorry if I misunderstood something here, I've had a very long day already :)
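The reuse suggested above can be sketched as a memoized factory that every test file imports instead of constructing its own client. This is a generic illustration, not Prisma's API: `createClient` is a hypothetical stand-in for `new PrismaClient()`.

```js
// Hypothetical stand-in for `new PrismaClient()` -- only here so the
// sketch is self-contained.
function createClient() {
  return { id: Symbol("client"), $disconnect: async () => {} };
}

let shared = null;

// Tests call getClient() instead of `new PrismaClient()`, so only one
// client (and one set of log channels) exists for the whole run.
function getClient() {
  if (shared === null) {
    shared = createClient();
  }
  return shared;
}

console.log(getClient() === getClient()); // true: same instance every time
```

In a Jest setup, a module like this would typically live next to the tests, with a single disconnect in a global teardown step.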
Not as of yet. I'm waiting on #4703 to explore test parallelization in more detail, since re-applying migrations plays a big part in using separate database schemas per test suite.
It's a fairly big assumption on my part, but yes, I think the logging component is what causes the leaks. My understanding of Node-API and the query engine is very limited, but this line in particular was the main reason I started looking at it, as that seemed like something that the weak-napi package used by @pimeys might detect. I've played around a fair bit with different ways of instantiating the client, but none of those have made a difference as far as the leak is concerned. More importantly: I've made two more changes to the minimal reproduction repo, one for importing a client, one for removing explicit disconnects. Hopefully the additional scenarios provide more insight.
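For reference, leak checkers built on weak references (like the weak-napi package mentioned above) boil down to the following idea, shown here with JavaScript's built-in `WeakRef` (Node ≥ 14.6) rather than weak-napi's actual API:

```js
// Hold a weak reference to the object under suspicion; if it can still
// be dereferenced after every strong reference should be gone, it leaks.
let engine = { name: "library-engine" };
const ref = new WeakRef(engine);

// While a strong reference exists, deref() returns the object.
const heldWhileReferenced = ref.deref() === engine;
console.log(heldWhileReferenced); // true

// Drop the strong reference. On some future GC cycle deref() may start
// returning undefined; observing that deterministically requires forcing
// a collection (e.g. `node --expose-gc` plus global.gc()), so it is not
// asserted here.
engine = null;
```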
Yeah, how we handle logs is kind of painful, but we should find a solution for this problem now. I'm running a few tests to see what we can do. I'd also be interested to know if @Brooooooklyn knows something I don't here...
@pimeys Thanks for the mention, I will dig into this problem this week.
I'm trying to dig up as much information as I can. I'm still not sure whether it's on our side or in the Node-API layer. What it looks like to me is that it's in how we use the threadsafe callback for logging: when running a client in parallel from Jest, the logs are not cleaned from the heap.
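The "reduce the channel size for the logger callback" idea from earlier in the thread can be illustrated with a plain JavaScript bounded buffer. The real channel lives on the Rust side, so the class name and the drop-oldest policy here are illustrative assumptions, not the actual implementation:

```js
// Illustrative bounded log buffer: when the consumer (the Node event
// loop) is too slow, the oldest entries are dropped instead of letting
// the backlog grow without bound.
class BoundedLogBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.entries = [];
    this.dropped = 0;
  }

  push(line) {
    if (this.entries.length >= this.capacity) {
      this.entries.shift(); // drop the oldest entry
      this.dropped += 1;
    }
    this.entries.push(line);
  }
}

const buf = new BoundedLogBuffer(3);
for (let i = 0; i < 10; i++) buf.push(`log line ${i}`);
console.log(buf.entries.length); // 3
console.log(buf.dropped); // 7
```

The alternative mentioned in the thread, blocking the producer, trades the memory growth for backpressure on the logging thread instead.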
Ok, one day of research behind me, and coming from the perspective of a person who rarely does JavaScript: Jest does something really weird here. Then again, I can do this:

```js
const { PrismaClient } = require("@prisma/client");

async function main() {
  var prisma = new PrismaClient();
  while (true) {
    await prisma.user.deleteMany();
    const user = await prisma.user.create({
      data: {
        email: "test",
      },
    });
    console.log(user);
    await prisma.$disconnect();
  }
}

main();
```

And I see no growth in memory. I can also do this:

```js
const { PrismaClient } = require("@prisma/client");

async function main() {
  while (true) {
    var prisma = new PrismaClient();
    await prisma.user.deleteMany();
    const user = await prisma.user.create({
      data: {
        email: "test",
      },
    });
    console.log(user);
    await prisma.$disconnect();
  }
}

main();
```

And the queries slow down quite a bit, but it still doesn't give similar RES growth compared to Jest.
Thanks for the further responses @driimus. I also spent some time with this today, and have a bunch more follow-up questions:
Ok, while rereading this issue and experimenting a bit, I noticed we never saw the 100MB+ anywhere. Does this refer to the memory increase after running a large test suite? That would of course make sense then. The memory increase I mentioned above (in the reproduction without Jest, in 2), and reproduced in https://github.com/janpio/prisma-node-api-memory-investigation/actions/runs/1187121305) only concerns the rss, not the heap, which is nicely kept in check and cleaned regularly. So the heap usage hovers around 100MB.

That led me to create a fork of your reproduction repo and expand it a bit:

https://github.com/janpio/prisma-leaks/runs/3488782051?check_suite_focus=true#step:9:1
https://github.com/janpio/prisma-leaks/runs/3488782120?check_suite_focus=true#step:9:1

The only difference between the two runs is the engine being used. I quickly graphed the results. While this is certainly not optimal, in my understanding of what I am seeing, the Node-API library is only slightly worse than the binary, but the general behavior is the same. Do you agree with that?
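The rss-versus-heap distinction above is easy to observe with Node's built-in `process.memoryUsage()`: `heapUsed` is what V8's garbage collector keeps in check (and what `--logHeapUsage`-style tooling reports), while `rss` is what the OS accounts against the process, including native allocations outside the V8 heap:

```js
// rss = total resident set (what the OS and CI memory limits see);
// heapUsed = live V8 heap (what GC keeps in check). A leak in native
// code shows up in rss but not in heapUsed.
const { rss, heapTotal, heapUsed, external } = process.memoryUsage();

const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1) + " MB";
console.log("rss:      ", toMB(rss));
console.log("heapUsed: ", toMB(heapUsed));
console.log("heapTotal:", toMB(heapTotal));
console.log("external: ", toMB(external));
```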
After some investigation, I believe the behavior in https://github.com/driimus/prisma-leaks is as expected.
I can confirm my company's Jest CI tests started leaking like crazy after upgrading to Prisma 3.x. This was the only variable that changed, so there's definitely something happening. I am trying to reproduce it with a simpler setup.
We are having memory leaks in our pipeline too.
Same here. The RAM usage keeps rising indefinitely after the update.
I downgraded a sandbox from 3.x to 2.30.x and it resolved the issue immediately. When I then added the napi preview feature, the problem reappeared. |
We have two ways to resolve this issue in the Jest testing scenario. (Note: this issue should not appear in a production application, since there are only a few connections in a production app.)
I like that idea.
We are experiencing very high memory usage too, actually 1.4GB after just a single test suite (around 60 integration tests).
@garrensmith Thanks for pointing out where the problem was. This issue was blocking our adoption of Prisma. I created a PR for a fix; it refactors the way that the engine is handled. There are failing tests in the PR. @driimus I'm pretty sure that the real leak is from here: there are lingering references to the engines.
@Ustice thanks for looking into it a bit. The Prisma client team has been doing some deep diving into this, and we have found a few things. Using the engine or mocking it out doesn't affect the heap size that much. We have tried a few things around loading and removing the engine, and it made no difference. I also created another test sample that only loaded the engine without the TypeScript side, and then there was no heap memory increase. So we are looking in a few places to solve this. We have some ideas and are doing a fair amount of experimenting with different approaches to fix this.
Any updates? I currently suffer from memory leaks on my Nest.js server when I run tests; GitHub Actions is crashing.
Thanks, guys, for your hard work on this issue.
I'm running isolated tests in Prisma using Vitest and https://www.npmjs.com/package/@chax-at/transactional-prisma-testing.
I am still experiencing OOM errors when using `--runInBand`, even with the #14174 fix. I can avoid the OOM by using multiple workers (`-w 4`), but then Jest gets stuck on the last 1 or 2 tests and never exits, despite using `--forceExit`. If I rerun only the tests that it gets stuck on, those tests complete just fine. This is making it impossible to run our test suite in our CI pipeline. Please keep this issue open.
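For runs that use worker processes (not `--runInBand`), Jest 29 added a `workerIdleMemoryLimit` option that recycles a worker once its memory exceeds a threshold. This only papers over a leak rather than fixing it, but it can keep a CI run alive; a minimal config sketch:

```js
// jest.config.js -- workaround sketch, requires Jest >= 29 and worker
// processes (the limit is enforced per worker, so it does not help
// in-band runs).
module.exports = {
  maxWorkers: 4,
  // Accepts bytes, a percentage, or a string such as "512MB"; the worker
  // is restarted once it exceeds this after finishing a test file.
  workerIdleMemoryLimit: "512MB",
};
```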
@ethancadoo ouch. Is there any chance you could provide us with a reproduction or more details? What is your Node.js version? Which other CLI flags are you passing?
@ethancadoo which options are you running jest with?
Also, which OOM do you refer to? Does Node.js terminate itself with an "Allocation failed - JavaScript heap out of memory" error, or is the process killed by the OS because it is out of physical memory?
I haven't tried any real v4 tests yet, but here's a test run for v4.1, which pertains to this specific issue: https://github.com/driimus/prisma-leaks/actions/runs/2699611305 (take your pick of Node version, schema size, test file count). Unless you have a significant number of test files, the problem might lie somewhere else in your tests, as the loss now seems to be insignificant at ~2-3MB per file.
We are experiencing memory leaks too. When we inspect our heap, it's littered with incrementally expanding Prisma strings. This crashes our CI pipeline.
Please open a new issue and provide additional information. Your leak is 99.9% not the same one we fixed via this issue a year ago. Thanks.
@andrewmclagan did you create a new issue? We're seeing the same thing.
Bug description

Background

Currently trying to migrate to the library engine for an existing project (>1kloc Prisma schema). After enabling the `nApi` flag, I've noticed that Jest will periodically crash while running test suites serially (hundreds of tests spread across almost 30 test suites). After taking a closer look using Jest's `--logHeapUsage` flag, I've found that memory usage goes up by 100MB+ per test suite. About two-thirds of the way into a test run, Jest would be eating up over 2GB of memory and crash soon after.

Limitations

Unfortunately, I've found no successful mechanism that would allow running tests in parallel in an isolated environment when using Prisma. I've tried setting up a test environment that creates temporary schemas (since Prisma doesn't seem to allow the usage of `pg_temp`) and applies migrations for each suite, but haven't achieved desirable results.

The problem

Instantiating a library engine leads to memory leaks when using Jest (barebones example), which is noticeable when running tests using the `--runInBand` flag. The issue also gets picked up when using `--detectLeaks`. I've also tested a version of the library engine with logging disabled (repo) and did not see the issue: neither on a simple instantiation (barebones example), nor when using it in a generated Prisma client (by manually replacing the path in `node_modules/@prisma/client/runtime/index.js`).

How to reproduce

Minimal reproduction repo: https://github.com/driimus/prisma-leaks - see the action runs for logs

Steps:
1. Enable the `nApi` preview feature
2. Run the tests serially with the `--runInBand` or `-w 1` flag and monitor memory usage (e.g. by also using the `--logHeapUsage` flag)

Expected behavior

No response

Prisma information

Schema: https://github.com/driimus/prisma-leaks/blob/main/prisma/schema.prisma

Environment & setup

- OS: `macOS`, `debian`
- Database: `PostgreSQL`
- Node.js version: `v16.7.0`, `LTS`

Prisma Version