Memory leaks in Jest when running tests serially with nApi enabled #8989

Closed
Tracked by #12339
driimus opened this issue Aug 29, 2021 · 43 comments · Fixed by #14174
Labels
bug/0-unknown Bug is new, does not have information for reproduction or reproduction could not be confirmed. kind/bug A reported bug. team/client Issue for team Client. tech/engines Issue for tech Engines. tech/typescript Issue for tech TypeScript. topic: node-api formerly `nApi` topic: performance/memory

Comments

@driimus

driimus commented Aug 29, 2021

Bug description

Background

I'm currently trying to migrate an existing project (>1kloc prisma schema) to the library engine. After enabling the nApi flag, I've noticed that Jest will periodically crash while running test suites serially (hundreds of tests spread across almost 30 test suites).

After taking a closer look using jest's --logHeapUsage flag, I've found that memory usage goes up by 100MB+ per test suite. About a third of the way into a test run, jest would be eating up over 2GB of memory and would crash soon after.

Limitations

Unfortunately, I've found no reliable mechanism for running tests in parallel in isolated environments when using Prisma. I've tried setting up a test environment that creates a temporary schema (since Prisma doesn't seem to allow the usage of pg_temp) and applies migrations for each suite, but haven't achieved desirable results.
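For reference, a rough sketch of that kind of setup (illustrative only: the helper is made up, it assumes DATABASE_URL has no query string yet, and a true per-suite version would need a custom Jest test environment rather than a single globalSetup):

// jest globalSetup (sketch): point the run at a throwaway Postgres schema
// and apply migrations into it.
const { execSync } = require("node:child_process");
const { randomBytes } = require("node:crypto");

module.exports = async () => {
  const schema = `test_${randomBytes(8).toString("hex")}`;

  // Prisma reads the target schema from the `schema` parameter
  // of the PostgreSQL connection string.
  process.env.DATABASE_URL = `${process.env.DATABASE_URL}?schema=${schema}`;

  // `migrate deploy` creates the schema if needed and applies migrations there.
  execSync("npx prisma migrate deploy", { env: process.env, stdio: "inherit" });
};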

The problem

Instantiating a library engine leads to memory leaks when using jest (barebones example), which is noticeable when running tests using the --runInBand flag. The issue also gets picked up when using --detectLeaks.
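The failing pattern is essentially this (a sketch of the kind of test that --detectLeaks flags, assuming a generated client):

const { PrismaClient } = require("@prisma/client");

// Run with: jest --runInBand --logHeapUsage --detectLeaks
it("instantiates the library engine", async () => {
  const prisma = new PrismaClient();
  await prisma.$connect(); // loads the Node-API library engine in-process
  await prisma.$disconnect(); // heap usage reported per file keeps climbing anyway
});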

I've also tested a version of the library engine with logging disabled (repo) and did not see the issue: neither on a simple instantiation (barebones example), nor when using it in a generated prisma client (by manually replacing the path in node_modules/@prisma/client/runtime/index.js).

How to reproduce

Minimal reproduction repo

https://github.com/driimus/prisma-leaks - see the action runs for logs

Steps:

  1. Enable the nApi feature
  2. Run some Jest test suites using the --runInBand or -w 1 flag and monitor memory usage (e.g. by also using the --logHeapUsage flag)
  3. Note the memory usage going up.
  4. With large test suites, the runner may eventually crash:
<--- Last few GCs --->

[594:0x5d8b870]   327150 ms: Scavenge (reduce) 1912.3 (2075.9) -> 1912.0 (2076.4) MB, 3.8 / 0.0 ms  (average mu = 0.394, current mu = 0.400) allocation failure 
[594:0x5d8b870]   327155 ms: Scavenge (reduce) 1912.7 (2076.4) -> 1912.3 (2076.4) MB, 3.0 / 0.0 ms  (average mu = 0.394, current mu = 0.400) allocation failure 
[594:0x5d8b870]   327162 ms: Scavenge (reduce) 1913.0 (2076.4) -> 1912.6 (2076.9) MB, 3.0 / 0.0 ms  (average mu = 0.394, current mu = 0.400) allocation failure 


<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb02cd0 node::Abort() [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 2: 0xa1812d node::FatalError(char const*, char const*) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 3: 0xceb72e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 4: 0xcebaa7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 5: 0xeb5485  [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 6: 0xeb5f74  [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 7: 0xec43e7 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 8: 0xec779c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
 9: 0xe89d25 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
10: 0xe82934 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
11: 0xe84630 v8::internal::FactoryBase<v8::internal::Factory>::NewRawOneByteString(int, v8::internal::AllocationType) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
12: 0x110da42 v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
13: 0x1097e77 v8::internal::JSRegExp::Initialize(v8::internal::Handle<v8::internal::JSRegExp>, v8::internal::Handle<v8::internal::String>, v8::base::Flags<v8::internal::JSRegExp::Flag, int>, unsigned int) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
14: 0x10987ff v8::internal::JSRegExp::Initialize(v8::internal::Handle<v8::internal::JSRegExp>, v8::internal::Handle<v8::internal::String>, v8::internal::Handle<v8::internal::String>) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
15: 0x120b798 v8::internal::Runtime_RegExpInitializeAndCompile(int, unsigned long*, v8::internal::Isolate*) [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
16: 0x15cddf9  [/home/driimus/.nvm/versions/node/v16.7.0/bin/node]
Aborted

Expected behavior

No response

Prisma information

Schema: https://github.com/driimus/prisma-leaks/blob/main/prisma/schema.prisma

Environment & setup

  • OS: macOS, debian
  • Database: PostgreSQL
  • Node.js version: v16.7.0, LTS

Prisma Version

prisma                : 2.30.0
@prisma/client        : 2.30.0
Current platform      : debian-openssl-1.1.x
Query Engine (Binary) : query-engine 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/query-engine-debian-openssl-1.1.x)
Migration Engine      : migration-engine-cli 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/migration-engine-debian-openssl-1.1.x)
Introspection Engine  : introspection-core 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/introspection-engine-debian-openssl-1.1.x)
Format Binary         : prisma-fmt 60b19f4a1de4fe95741da371b4c44a92f4d1adcb (at node_modules/@prisma/engines/prisma-fmt-debian-openssl-1.1.x)
Default Engines Hash  : 60b19f4a1de4fe95741da371b4c44a92f4d1adcb
Studio                : 0.422.0
Preview Features      : nApi
@driimus driimus added the kind/bug A reported bug. label Aug 29, 2021
@janpio janpio added topic: node-api formerly `nApi` bug/0-unknown Bug is new, does not have information for reproduction or reproduction could not be confirmed. labels Aug 30, 2021
@janpio
Member

janpio commented Aug 30, 2021

Thanks for the bug report @driimus - we are looking into it.

Just to make sure I understood correctly: in your understanding, our (Node-API library) logging implementation is leaking memory?

By the way:

(since Prisma doesn't seem to allow the usage of pg_temp)

Is there a feature request issue for this, or for the overall "Limitations" section? We really value feedback from the community and active users like you, so I would be very interested in having this spelled out a bit more. You are probably not the only one with that problem, so this might very well be something interesting for us to implement or improve support for.

@pimeys
Contributor

pimeys commented Aug 30, 2021

I will come back to this issue tomorrow, but some quick observations that I'd like to get clear before we move forward:

https://github.com/driimus/napi-jest/blob/master/tests/logged.test.ts#L14

This one spawns a new client for every test. This means we now have N channels that stream logs to the same event loop: Rust underneath, with the fastest logger on earth, spamming that one node event loop with logs. Of course the node loop is a bottleneck and can't process them fast enough, so we keep them in memory on the Rust side.

Could you try the example by reusing the same client in every test?
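Something along these lines (a sketch; `user` is the model from the reproduction schema):

const { PrismaClient } = require("@prisma/client");

// One client (and therefore one log channel) per test file instead of one per test.
const prisma = new PrismaClient();

afterAll(() => prisma.$disconnect());

test("create", async () => {
  const user = await prisma.user.create({ data: { email: "test" } });
  expect(user.email).toBe("test");
});

test("clean up", async () => {
  await prisma.user.deleteMany();
  expect(await prisma.user.count()).toBe(0);
});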

What I also think I can try tomorrow is reducing the channel size for the logger callback, meaning if the node process doesn't process logs fast enough, we either block that thread, or just throw the log line away.

Sorry if I misunderstood something in this, I've had a very long day already :)

@driimus
Author

driimus commented Aug 30, 2021

@janpio

Is there a feature request issue for this, or the overall "Limitations" section?

Not as of yet. I'm waiting on #4703 to explore test parallelization in more detail, since re-applying migrations plays a big part in using separate database schemas per test suite.

In your understanding our (node-api library) logging implementation is leaking memory?

It's a fairly big assumption on my part, but yes, I think the logging component causes leaks in jest test suites.

My understanding of Node-API and the query engine is very limited, but this line in particular was the main reason I started looking at how jest reacts to getting rid of the logging callback. This was entirely based on the documentation for napi_unref_threadsafe_function, which states:

the event loop running on the main thread may exit before func is destroyed

That seemed like something that the weak-napi package used by jest might detect.


@pimeys I've played around a fair bit with instantiating a QueryEngine when I was testing changes to my stubbed out version in that repo:

  • instantiating inside test closures (with a single test inside the suite)
  • instantiating at the top-level of the entire test suite

None of those have made a difference as far as the --detectLeaks flag is concerned.

More importantly:

  • the project that I'm trying to migrate makes use of a singleton Prisma client (without any custom settings)
  • the minimal reproduction repo instantiates a single client per suite, and this step runs a single test.

I've made two more changes to the minimal reproduction repo, one for importing a client, one for removing explicit disconnects. Hopefully the additional scenarios provide more insight.

@pimeys
Contributor

pimeys commented Aug 31, 2021

Yeah, how we handle logs is kind of painful, but we should find a solution for this problem now. I'm running a few tests to see what we can do. I'd also be interested if @Brooooooklyn knows something I don't in here...

@Brooooooklyn

@pimeys Thanks for the mention, I will dig into this problem this week.

@pimeys
Contributor

pimeys commented Aug 31, 2021

I'm trying to dig up as much information as I can. I'm still not sure whether it's on our side or in the Node-API layer. What it looks like to me is that it's related to how we use the threadsafe callback for logging: when running a client in parallel from Jest, the logs are not cleaned from the heap.

@pimeys
Contributor

pimeys commented Aug 31, 2021

Ok, one day of research behind me, and coming from the perspective of a person who rarely does javascript: jest does something really weird here. -w 1 should run the tests in sequence, right? I see quite a few node/jest processes in htop, and the memory certainly goes up a lot. And when running in binary mode, I see quite a few query engines up and running when using jest (taking that memory in separate processes).

Then again, I can do this:

const { PrismaClient } = require("@prisma/client");

async function main() {
  var prisma = new PrismaClient();

  while (true) {
    await prisma.user.deleteMany();
    const user = await prisma.user.create({
      data: {
        email: "test",
      },
    });

    console.log(user);

    await prisma.$disconnect();
  }
}

main();

And I see no growth in memory. I can also do this:

const { PrismaClient } = require("@prisma/client");

async function main() {
  while (true) {
    var prisma = new PrismaClient();
    await prisma.user.deleteMany();
    const user = await prisma.user.create({
      data: {
        email: "test",
      },
    });

    console.log(user);

    await prisma.$disconnect();
  }
}

main();

And the queries slow down quite a bit, but it still doesn't give similar RES growth compared to jest.

@janpio
Member

janpio commented Aug 31, 2021

Thanks for the further responses @driimus. I also spent some time on this today, and have a bunch more follow-up questions:

  1. About your customized engine:

    I've also tested a version of the library engine with logging disabled (repo) and did not see the issue: neither on a simple instantiation(barebones example), nor when using it in a generated prisma client (by manually replacing the path in node_modules/@prisma/client/runtime/index.js).

    I see you have 2 commits in there: https://github.com/driimus/prisma-engines/commits/test/jest-memory-leak Were both of them necessary to fix the problem you are experiencing when running this in Jest?

  2. We can see a slightly bigger increase in memory usage of a script using PrismaClient in a loop (https://github.com/janpio/prisma-node-api-memory-investigation/blob/c724b4f29f2b9b2daab28f00824805bbdc40ddc1/script.js) when using Node-API vs. when not: https://github.com/janpio/prisma-node-api-memory-investigation/actions/runs/1187121305
    So there definitely is some leak, but it is pretty minor and we are investigating. This should definitely not cause any crashes when running tests in a common project like yours. (We have the theory that our current implementation of the Node-API library does not really get rid of the instance after $disconnect and keeps it around for future usage - which in high numbers leads to a leak [Update: Issue created: Node-API (Library Engine): $disconnect does not free up memory / kill engine #9044]) Do you agree with that statement, or am I missing something?

  3. Are all your test runner crashes with the same error message FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory?

  4. Any idea why the memory growth is going so crazy only in the context of Jest?

@driimus
Author

driimus commented Aug 31, 2021

  1. The first commit only helped get past --detectLeaks errors when instantiating a query engine. The second one got the tests in https://github.com/driimus/prisma-leaks to pass without an increase in heap usage.

  2. I haven't gotten as far as monitoring nApi behavior in a live environment. I'd consider minor leaks for long-running clients to be a separate issue, unless there's a correlation between them and the amount of data being transferred in and out.
    As for explicitly calling $disconnect: if that's the intended method of marking the engine for garbage collection, I could see that being a problem for Jest users (see 4).

  3. Pretty much.

  4. As far as I know, Jest resets imported modules between test files (there's also an even more aggressive sandboxing mechanism available through resetModules - though I haven't checked whether that could impact the issue's severity). Even if an instance of the Prisma Client is imported, there will be a separate copy per test file.
    I'm still unsure of what the ~100mb of leaked memory consists of, for the larger project. I'll try to set up a more realistic example to see if I can get a definitive answer.
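To illustrate the module registry point: even a shared "singleton" module ends up producing one client (and one engine) per test file (a sketch):

// prisma.js: meant to be an app-wide singleton
const { PrismaClient } = require("@prisma/client");
module.exports = new PrismaClient();

// a.test.js and b.test.js both start with:
//   const prisma = require("./prisma");
// but each test file gets its own module registry, so Jest evaluates
// prisma.js once per file: two files means two clients and two library
// engines, regardless of the singleton pattern in application code.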

@janpio
Member

janpio commented Sep 1, 2021

4. I'm still unsure of what the ~100mb of leaked memory consists of, for the larger project.

Ok, while rereading this issue and experimenting a bit, I noticed we never saw the 100MB+ anywhere. Does this refer to the memory increase after running a large test suite? That would of course make sense then.

The memory increase I mentioned above (in the reproduction without Jest, in 2) (and reproduced in https://github.com/janpio/prisma-node-api-memory-investigation/actions/runs/1187121305) only concerns the rss, not the heap, which is kept nicely in check and cleaned regularly. So the heap usage hovers around 100MB.
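(For reference, the rss/heap lines quoted below are just process.memoryUsage() formatted to megabytes, roughly like this:)

const toMb = (bytes) => `${(bytes / 1024 / 1024).toFixed(2)} MB`;

const { rss, heapTotal, heapUsed, external, arrayBuffers } = process.memoryUsage();
console.log(
  `rss ${toMb(rss)}  |  heapTotal ${toMb(heapTotal)}  |  heapUsed ${toMb(heapUsed)}  |  external ${toMb(external)}  |  arrayBuffers ${toMb(arrayBuffers)}  |`
);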

That led me to create a fork of your reproduction repo and expand it a bit: add the $disconnect() again to stop the engines, test both binary and library, and also log the rss/heap. The results are interesting/confusing:

library has the same behavior as your test setup, of course. The rss is also increasing similarly to our other reproduction:

      rss 127.82 MB  |  heapTotal 65.04 MB  |  heapUsed 40.85 MB  |  external 2.77 MB  |  arrayBuffers 1.25 MB  |
      rss 137.2 MB  |  heapTotal 72.57 MB  |  heapUsed 48.96 MB  |  external 2.77 MB  |  arrayBuffers 1.25 MB  |
      rss 145.07 MB  |  heapTotal 79.35 MB  |  heapUsed 55.93 MB  |  external 2.78 MB  |  arrayBuffers 1.26 MB  |
      rss 152.55 MB  |  heapTotal 86.13 MB  |  heapUsed 63.08 MB  |  external 2.79 MB  |  arrayBuffers 1.27 MB  |
      rss 155.96 MB  |  heapTotal 88.64 MB  |  heapUsed 42.17 MB  |  external 1.55 MB  |  arrayBuffers 0.04 MB  |
      rss 159.83 MB  |  heapTotal 91.18 MB  |  heapUsed 49.83 MB  |  external 1.56 MB  |  arrayBuffers 0.04 MB  |
      rss 162.5 MB  |  heapTotal 93.71 MB  |  heapUsed 56.88 MB  |  external 1.57 MB  |  arrayBuffers 0.05 MB  |
      rss 166.93 MB  |  heapTotal 98.24 MB  |  heapUsed 64 MB  |  external 1.58 MB  |  arrayBuffers 0.06 MB  |
      rss 173.74 MB  |  heapTotal 104.77 MB  |  heapUsed 71.05 MB  |  external 1.58 MB  |  arrayBuffers 0.07 MB  |
      rss 180.35 MB  |  heapTotal 111.3 MB  |  heapUsed 78.08 MB  |  external 1.59 MB  |  arrayBuffers 0.08 MB  |
      rss 187.8 MB  |  heapTotal 118.33 MB  |  heapUsed 85.19 MB  |  external 1.6 MB  |  arrayBuffers 0.08 MB  |
      rss 194.72 MB  |  heapTotal 124.61 MB  |  heapUsed 92.3 MB  |  external 1.61 MB  |  arrayBuffers 0.09 MB  |
      rss 201.52 MB  |  heapTotal 132.14 MB  |  heapUsed 99.4 MB  |  external 1.62 MB  |  arrayBuffers 0.1 MB  |
      rss 208.86 MB  |  heapTotal 138.68 MB  |  heapUsed 106.44 MB  |  external 1.62 MB  |  arrayBuffers 0.11 MB  |
      rss 215.99 MB  |  heapTotal 145.71 MB  |  heapUsed 113.5 MB  |  external 1.63 MB  |  arrayBuffers 0.11 MB  |

https://github.com/janpio/prisma-leaks/runs/3488782051?check_suite_focus=true#step:9:1

The binary one on the other hand surprisingly also shows this behavior:

      rss 109.05 MB  |  heapTotal 66.04 MB  |  heapUsed 35.01 MB  |  external 2.8 MB  |  arrayBuffers 1.25 MB  |
      rss 119.59 MB  |  heapTotal 74.07 MB  |  heapUsed 44.24 MB  |  external 2.81 MB  |  arrayBuffers 1.26 MB  |
      rss 125.76 MB  |  heapTotal 79.51 MB  |  heapUsed 36.37 MB  |  external 1.58 MB  |  arrayBuffers 0.04 MB  |
      rss 128.84 MB  |  heapTotal 82.04 MB  |  heapUsed 44.49 MB  |  external 2.8 MB  |  arrayBuffers 1.26 MB  |
      rss 132.38 MB  |  heapTotal 85.07 MB  |  heapUsed 52.17 MB  |  external 2.81 MB  |  arrayBuffers 1.26 MB  |
      rss 139.06 MB  |  heapTotal 91.85 MB  |  heapUsed 59.67 MB  |  external 2.82 MB  |  arrayBuffers 1.27 MB  |
      rss 145.77 MB  |  heapTotal 98.38 MB  |  heapUsed 67 MB  |  external 2.83 MB  |  arrayBuffers 1.28 MB  |
      rss 153.2 MB  |  heapTotal 105.66 MB  |  heapUsed 74.35 MB  |  external 2.83 MB  |  arrayBuffers 1.29 MB  |
      rss 161.36 MB  |  heapTotal 113.45 MB  |  heapUsed 81.75 MB  |  external 2.84 MB  |  arrayBuffers 1.29 MB  |
      rss 168.57 MB  |  heapTotal 119.73 MB  |  heapUsed 89.08 MB  |  external 2.85 MB  |  arrayBuffers 1.3 MB  |
      rss 175.77 MB  |  heapTotal 127.51 MB  |  heapUsed 96.41 MB  |  external 2.86 MB  |  arrayBuffers 1.31 MB  |
      rss 183.46 MB  |  heapTotal 134.54 MB  |  heapUsed 103.8 MB  |  external 2.87 MB  |  arrayBuffers 1.32 MB  |
      rss 191.1 MB  |  heapTotal 141.82 MB  |  heapUsed 111.14 MB  |  external 2.87 MB  |  arrayBuffers 1.33 MB  |
      rss 199.08 MB  |  heapTotal 149.6 MB  |  heapUsed 118.4 MB  |  external 2.88 MB  |  arrayBuffers 1.33 MB  |

https://github.com/janpio/prisma-leaks/runs/3488782120?check_suite_focus=true#step:9:1

The only difference is that --detectLeaks triggers for the library run but not for the binary one.

I quickly graphed the rss and heapUsed numbers:

[graph: library]
[graph: binary]

While this is certainly not optimal, in my understanding of what I am seeing, the Node-API library is only slightly worse than the binary. But the general behavior is the same. Do you agree with that? Except for the detectLeaks detection, are we even looking at the right things here for your original crash? And even for detectLeaks, does that really work? (Why does it not trigger for the binary test, which has the same characteristics?) Or are we chasing ghosts? 👻

@Brooooooklyn

Brooooooklyn commented Sep 3, 2021

After some investigation, I believe the behavior in https://github.com/driimus/prisma-leaks is as expected, and prisma is indeed not leaking memory in this scenario. Here are the reasons:

  1. For the heap out of memory problem, it's easy to reproduce without prisma:

    it.each(new Array(600000000).fill(1))("doesn't leak memory", () => {
      expect.assertions(1)
      expect(1).toBeTruthy()
    })

    Running this test with jest --runInBand --detectLeaks --verbose, you can see jest crash without any assertion output, which means the test callback is not even being executed. So this problem has nothing to do with prisma.

  2. Jest complains that a memory leak was detected with the --detectLeaks flag:

    The log_callback implementation in prisma is wrapped in the QueryEngine JavaScript class, which means the log_callback will be released only when the QueryEngine instance is recycled by GC. And because QueryEngine is re-connectable, it must not release the log_callback even after disconnect is called. The reason jest considers the test code to be leaking is that the QueryEngine has not been recycled by the time the test suite completes.

@pantharshit00 pantharshit00 added the team/client Issue for team Client. label Sep 13, 2021
@adarnon

adarnon commented Oct 11, 2021

I can confirm my company's Jest CI tests started leaking like crazy after upgrading to Prisma 3.x. This was the only changed variable so there's definitely something happening. I am trying to reproduce with a simpler setup.

@dominichadfield-jelly

We are having memory leaks in our pipeline too

@Rukko

Rukko commented Oct 14, 2021

Us too. The RAM usage keeps rising indefinitely after the update.

@dominichadfield-jelly

I can confirm my company's Jest CI tests started leaking like crazy after upgrading to Prisma 3.x. This was the only changed variable so there's definitely something happening. I am trying to reproduce with a simpler setup.

I downgraded a sandbox from 3.x to 2.30.x and it resolved the issue immediately. When I then added the napi preview feature, the problem reappeared.

@Brooooooklyn

We have two ways to resolve this issue in the jest testing scenario (note: this issue should not appear in a production application, since there are only a few connections in a production app):

  1. Change the behavior of $disconnect() to clean up the ThreadSafeFunction in the $disconnect function, which means we could not reconnect after $disconnect() is called.
  2. Provide a new method like $dispose to tear down the ThreadSafeFunction; the client could not reconnect again after this API is called.
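In a test suite, option 2 would presumably be used like this (hypothetical API; $dispose does not exist today):

const { PrismaClient } = require("@prisma/client");

const prisma = new PrismaClient();

afterAll(async () => {
  // Hypothetical: tear down the ThreadSafeFunction (and with it the log
  // callback that keeps the engine alive) so --detectLeaks no longer flags
  // the suite. The client could not reconnect after this call.
  await prisma.$dispose();
});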

@pantharshit00
Contributor

I like the idea of $dispose. Let's see what others have to say here.

@reubenporterjisc

We are experiencing very high memory usage too: 1.4GB after just a single test suite (around 60 integration tests).

@Ustice

Ustice commented Feb 10, 2022

@garrensmith Thanks for pointing out where the problem was. This issue was blocking our adoption of Prisma. I created a PR for a fix. It refactors the way that beforeExit is handled so that there aren't any undead references to the engines, allowing garbage collection.

There are failing tests in src/__tests__/MigrateDiff.test.ts, but it looks like that is a bugged test? I'm not familiar enough with the overall codebase to know.

@driimus I'm pretty sure that the real leak is from here. There are references to the engines in process event handlers. While jest isolates modules (mostly), references to process can and do leak.
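A condensed sketch of that pattern (not the actual client code): a listener registered on the global process object closes over the engine, and process outlives every Jest module registry, so the engine stays reachable after the test file finishes.

class LibraryEngine {
  constructor() {
    // The closure captures `this`; `process` is shared across all test files,
    // so this listener keeps the engine reachable for the worker's whole lifetime.
    this.beforeExitHook = () => this.stop();
    process.on("beforeExit", this.beforeExitHook);
  }

  async stop() {
    // Dropping the listener removes the strong reference held via `process`,
    // which is what lets the engine be garbage collected again.
    process.removeListener("beforeExit", this.beforeExitHook);
  }
}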

@Jolg42
Member

Jolg42 commented Feb 11, 2022

Note: src/__tests__/MigrateDiff.test.ts can be flaky and return errors; you can ignore it.

@garrensmith
Contributor

@Ustice thanks for looking into it a bit. The prisma client team has been doing some deep diving into this. We have found a few things. Using the engine or mocking it out doesn't affect the heap size that much. We have tried a few things around loading and removing the engine, and it made no difference. I also created another test sample that only loaded the engine, without the typescript side, and then there was no heap memory increase.

So we are looking in a few places to solve this. We have some ideas and are doing a fair amount of experimenting with different approaches to fix this.

@exsesx

exsesx commented Mar 8, 2022

Any updates? I currently suffer from memory leaks on my server written in Nest.js when I run tests. GitHub Actions is crashing.

@mchsk

mchsk commented Mar 9, 2022

Thanks guys for your hard work on this issue

@matthewmueller matthewmueller modified the milestones: 3.11.0, 3.12.0 Mar 17, 2022
@matthewmueller matthewmueller modified the milestones: 3.12.0, 3.13.0 Apr 6, 2022
@danstarns danstarns mentioned this issue Apr 29, 2022
@ogroppo

ogroppo commented May 26, 2022

Every test that has even a console.log(prisma) outputs "Your test suite is leaking memory. Please ensure all references are cleaned." when run with --detectLeaks (@prisma/client 3.14). Am I right in thinking something like $destroy is missing?

@revmischa

I'm running isolated tests in prisma using vitest and https://www.npmjs.com/package/@chax-at/transactional-prisma-testing
It mostly works

@ethancadoo

I am still experiencing OOM errors when using '--runInBand' even with the #14174 fix.

I can avoid the OOM by using multiple workers ('-w 4') but then jest gets stuck with the last 1 or 2 tests and never exits despite using '--forceExit'. If I rerun only the tests that it gets stuck on, then those tests complete just fine.

This is making it impossible to run our test suite in our CI pipeline. Please keep this issue open.

@aqrln
Member

aqrln commented Jul 28, 2022

@ethancadoo ouch. Is there any chance you could provide us with a reproduction or more details? What is your Node.js version? Which other CLI flags besides --runInBand do you pass to Jest? Are there any interesting details about your test setup?

@SevInf
Contributor

SevInf commented Jul 28, 2022

@ethancadoo are you running jest with the --expose-gc flag by any chance? If so, there is a separate issue on the v8 side; try running the tests without this flag. If you are not using it (or removing it does not help), please provide a reproduction: the cases reported here should be fixed as of 4.1.0.

@aqrln
Member

aqrln commented Jul 28, 2022

Also, which OOM do you refer to? Does Node.js terminate itself with an "Allocation failed - JavaScript heap out of memory" error, or is the process killed by the OS because it is out of physical memory?

The latter often happens when --max-old-space-size is manually set to too high a value in previous attempts to work around JavaScript memory leaks.

@driimus
Author

driimus commented Jul 28, 2022

This is making it impossible to run our test suite in our CI pipeline. Please keep this issue open.

I haven't tried any real v4 tests yet, but here's a test run for v4.1, which pertains to this specific issue https://github.com/driimus/prisma-leaks/actions/runs/2699611305 (take your pick of node version, schema size, test file count).

Unless you have a significant number of test files, the problem might lie somewhere else in your tests, as the loss now seems to be insignificant at ~2-3MB per file.

@andrewmclagan

We are experiencing memory leaks with:

  • Node 18
  • Prisma 4.11.0

When we inspect our heap, it's littered with incrementally expanding Prisma strings. This crashes our CI pipeline.

@janpio
Member

janpio commented Mar 14, 2023

Please open a new issue and provide additional information. Your leak is 99.9% not the same one we fixed via this issue a year ago. Thanks.

@svet-tp

svet-tp commented May 31, 2023

@andrewmclagan did you create a new issue? We're seeing the same thing.
