Invalid argument: LMDBStore.getBinaryFast #164

Open · mischnic opened this issue Apr 27, 2022 · 35 comments

@mischnic (Contributor) commented Apr 27, 2022

Occasionally, we get this error on CI (and if it does occur, it's always on the same test)

  1) cache
       should support moving the project root:
     Error: Invalid argument
      at LMDBStore.getBinaryFast (/Users/runner/work/parcel/parcel/node_modules/lmdb/read.js:43:12)
      at LMDBStore.getBinary (/Users/runner/work/parcel/parcel/node_modules/lmdb/read.js:124:27)
      at LMDBStore.get (/Users/runner/work/parcel/parcel/node_modules/lmdb/read.js:137:17)
      at LMDBCache.get (/Users/runner/work/parcel/parcel/packages/core/cache/src/LMDBCache.js:50:27)
      at Transformation.readFromCache (/Users/runner/work/parcel/parcel/packages/core/core/src/Transformation.js:518:43)
      at Transformation.runPipelines (/Users/runner/work/parcel/parcel/packages/core/core/src/Transformation.js:274:40)
      at Transformation.run (/Users/runner/work/parcel/parcel/packages/core/core/src/Transformation.js:186:21)
      at Child.handleRequest (/Users/runner/work/parcel/parcel/packages/core/workers/src/child.js:199:11)

in this case, on this PR: parcel-bundler/parcel#7995

Any idea where this might be coming from? The parameter that Parcel passes to store.get(..) should always be a string
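
For reference, the call pattern is roughly this (a simplified, hypothetical sketch; the real wrapper lives in packages/core/cache/src/LMDBCache.js):

import { open } from 'lmdb';

// Simplified, hypothetical version of the cache wrapper that fails above:
// the store is opened once and get() is always called with a string key.
const store = open({
  path: '.parcel-cache',
  compression: true,
});

export function getFromCache(key /* always a string */) {
  // LMDBStore.get -> getBinary -> getBinaryFast is the path in the stack trace above
  return store.get(key);
}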

@kriszyp (Owner) commented Apr 27, 2022

I could probably make a good guess at which line of code it is coming from in LMDB (https://github.com/DoctorEvidence/lmdb-js/blob/master/dependencies/lmdb/libraries/liblmdb/mdb.c#L7738), but there are a few assertions that can trigger it, so I'm not sure which is the actual cause or how that occurred.

Do you know if this has ever occurred before, or is it new to a more recent version of lmdb-js/parcel?

What I can do is add some code to expand the details of these error messages, so the errors are a little more informative.

@mischnic (Contributor, Author)

I think I first saw this on CI sometime in the last ~2 weeks. And no user appears to have actually run into this so far.

@kriszyp (Owner) commented Apr 28, 2022

Ok, I published a v2.3.7 with more detailed error messages when there are failures in gets (or more likely in renewing the transaction for a get), if you want to see if that helps provide more info.

@kriszyp (Owner) commented Apr 28, 2022

It looks like it failed again, but I am not sure how to access the error/stack info that you posted, from the build results?

@mischnic (Contributor, Author)

The failing benchmark job is unrelated, so all tests passed. But I'll report back if I see this error again in the future

@mischnic (Contributor, Author)

  1) cache
       should support moving the project root:
     Invalid argument: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
  Error: Invalid argument: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot
      at LMDBStore.getBinaryFast (/home/runner/work/parcel/parcel/node_modules/lmdb/read.js:43:12)
      at LMDBStore.getBinary (/home/runner/work/parcel/parcel/node_modules/lmdb/read.js:124:27)
      at LMDBStore.get (/home/runner/work/parcel/parcel/node_modules/lmdb/read.js:137:17)
      at LMDBCache.get (/home/runner/work/parcel/parcel/packages/core/cache/src/LMDBCache.js:50:27)
      at Transformation.readFromCache (/home/runner/work/parcel/parcel/packages/core/core/src/Transformation.js:518:43)
      at Transformation.runPipelines (/home/runner/work/parcel/parcel/packages/core/core/src/Transformation.js:274:40)
      at Transformation.run (/home/runner/work/parcel/parcel/packages/core/core/src/Transformation.js:186:21)
      at Child.handleRequest (/home/runner/work/parcel/parcel/packages/core/workers/src/child.js:199:11)

@kriszyp (Owner) commented Apr 29, 2022

Thank you for the update @mischnic, this is certainly an error/code-branch I have never seen happen in LMDB before! I'm curious, based on the test name and the fact that it was the same test that triggered the error both times: does this test involve actually moving the location of the LMDB file/directory, possibly while it is in use?

I have a couple of ideas for how this error could possibly be induced. One is that somehow the file mutexes/locks become ineffective in locking (maybe due to some file manipulation). The other idea is a little more concrete: I believe if you open an LMDB database in one thread, read from it, then open the same LMDB database in another thread but using a different path/string (even if it is a different string, e.g. relative vs. absolute to the same location), this will generate a new LMDB environment, and then if you close that database, it will clear any reader slots with the current process id (same among all threads), which would indeed create a bad slot for an active reader in another thread. In lmdb-js I try to avoid creating duplicate LMDB environments (since LMDB explicitly forbids that as dangerous for this reason), but perhaps not trying hard enough.
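
To illustrate the second idea, the hazardous pattern would look roughly like this (a hypothetical sketch, not Parcel's actual code):

import { Worker, isMainThread } from 'worker_threads';
import { fileURLToPath } from 'url';
import { open } from 'lmdb';

if (isMainThread) {
  // Main thread opens the database via a relative path and starts reading.
  const db = open({ path: './my-cache' });
  db.get('some-key');
  new Worker(fileURLToPath(import.meta.url));
} else {
  // Worker opens the *same* database via an absolute path. Because the path
  // strings differ, a second LMDB environment may be created; closing it can
  // clear reader slots that belong to the main thread's active readers.
  const sameDb = open({ path: process.cwd() + '/my-cache' });
  sameDb.get('some-key');
  sameDb.close();
}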

@mischnic (Contributor, Author) commented Apr 29, 2022

does this test involve actually moving the location of the LMDB file/directory, possibly while it is in use?

It's definitely moved. There are no explicit calls to the "old" lmdb db (at the old location) after the move (unless something wasn't flushed of course).

So I think we should be calling await db.close() when the Parcel build finishes (currently we just open it at the start and that's it)? That should solve this rename situation and also ensure that everything was flushed (which would be good in general, depending on how violently the user of Parcel kills the current process afterwards).
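
Concretely, something like this (a sketch using the plain lmdb API; the paths here are hypothetical):

import { open } from 'lmdb';
import { promises as fs } from 'fs';

const oldRoot = '/tmp/project';       // hypothetical paths
const newRoot = '/tmp/project-moved';

// Close (and thereby flush) the cache before the directory is moved,
// then reopen it at the new location for the next build.
let db = open({ path: `${oldRoot}/.parcel-cache` });
// ... build runs, db.get()/db.put() calls ...
await db.close();                     // no open readers/writers remain
await fs.rename(oldRoot, newRoot);
db = open({ path: `${newRoot}/.parcel-cache` });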

@kriszyp (Owner) commented Apr 29, 2022

It's definitely moved

(I did actually briefly look at the code, but didn't try to figure out what ncp really did :) )
Ok, then there are a couple of things I will do:

  • Put in place better file identification for protecting against double environment db access, using the actual file inode instead of just paths.
  • Now that I know it is a bad reader slot error: there are actually two types of slot errors, and I will add messaging to distinguish double usage of a slot from premature clearing of a slot.

So I think we should be calling await db.close() when the Parcel build finishes

You could see if it changes anything, but, at least in theory, it shouldn't be necessary. lmdb-js tries pretty hard, setting up process.on('exit') listeners and cleanup hooks, to ensure data is really flushed before a process exits. But like you said, that is somewhat dependent on how violently it was exited (it should catch/flush on process.exit, uncaught errors, and the event queue finishing, but not segfaults, for example).

@kriszyp (Owner) commented May 2, 2022

Ok, these updates are published in v2.3.8.

@pieh commented May 10, 2022

I don't know how helpful these will be, but after we (Gatsby) bumped from 2.2 to 2.3.10 we do see more context on the occasional MDB_BAD_RSLOT errors:

Common "shape" of those errors:

@kriszyp (Owner) commented May 10, 2022

@pieh A few questions about this:

  • Are you thinking this is the same issue (reported in the same place) as #153 (just with better error messaging)?
  • You don't think this is a new regression with 2.3.10, right? (just expanded messaging for the error?)
  • And with these tests, you are creating child processes (or is that specific to the second report)?
  • Are there any worker threads involved?
  • I believe the root cause in this ticket originally was multiple LMDB database/env instances being created for the same database file; do you think that is possible in this situation? (The fix that was applied for this ticket was to check for an existing database based on the dev/inode of the file; I don't know if maybe that is unreliable on some OSes.)
  • Is it possible there are multiple versions of lmdb-js in this test?
  • And I assume you don't have any way of reproducing this locally at this point, do you?

@pieh commented May 10, 2022

Are you thinking this is the same issue (reported in the same place) as #153 (just with better error messaging)?

Honestly I have no idea. It's possible. It's not clear to me whether MDB_PROBLEM is now replaced with more specific, more detailed errors (like MDB_BAD_RSLOT)?

#153 has reports from our full e2e runs; the ones I linked are unit tests, so they are possibly easier to figure out due to fewer parts generally being involved in unit tests. I decided to add a comment here instead of in #153 because I saw MDB_BAD_RSLOT actually reported here and not in #153 (we are not moving data files though, so there is some difference at least).

You don't think this is a new regression with 2.3.10, right? (just expanded messaging for the error?)

Too early for me to say if we are seeing errors at an increased rate now, so I can't speculate whether there is a "new regression" yet. I mostly wanted to show the "expanded messaging" in the hope that maybe you can give me some clues about what to debug from this point, to get more information that is actually usable for you, or for us to change our implementation if this is likely because we set things up wrong.

And with these tests, you are creating child processes (or is that specific to the second report)?

Some tests do create a child process, but not all failed ones do - in particular, both linked runs have the "worker can access node created in main process" test failing, which creates a child process with fork. The test's main process .put()s something and then we try to .get() it from the child process. Those fail with an error like Error: Invalid argument: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot: The reader lock pid 918 doesn't match env pid 513 (which I will try to debug more; at the very least I will want to log whether those pids make sense or not).
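
Roughly, the shape of that test is (a simplified sketch, not the exact Gatsby test code):

import { fork } from 'child_process';
import { fileURLToPath } from 'url';
import { open } from 'lmdb';

const db = open({ path: './test-db' });

if (process.send) {
  // Child process: .get() the node that the main process .put()
  process.send({ value: db.get('node-1') });
} else {
  // Main process: .put() a node, then fork a child that reads it back.
  await db.put('node-1', { id: 'node-1', type: 'Test' });
  const child = fork(fileURLToPath(import.meta.url));
  child.on('message', (msg) => console.log('child read:', msg.value));
}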

On the other hand, the example of failing tests without a child process involved results in an error mentioning pid 0 (which I think is quite interesting and potentially relevant?):

Error: Invalid argument: MDB_BAD_RSLOT: Invalid reuse of reader locktable slot: The reader lock pid 0 doesn't match env pid 513
    at resetCursor (/home/circleci/project/packages/gatsby/node_modules/lmdb/read.js:271:17)
    at RangeIterable.iterable.iterate (/home/circleci/project/packages/gatsby/node_modules/lmdb/read.js:288:5)
    at RangeIterable.[Symbol.iterator] (/home/circleci/project/packages/gatsby/node_modules/lmdb/util/RangeIterable.js:63:31)
    at RangeIterable.result.iterate (/home/circleci/project/packages/gatsby/node_modules/lmdb/util/RangeIterable.js:16:42)
    at RangeIterable.[Symbol.iterator] (/home/circleci/project/packages/gatsby/node_modules/lmdb/util/RangeIterable.js:63:31)
    at RangeIterable.result.iterate (/home/circleci/project/packages/gatsby/node_modules/lmdb/util/RangeIterable.js:16:42)
    at RangeIterable.source (/home/circleci/project/packages/gatsby/node_modules/lmdb/util/RangeIterable.js:63:31)
    at GatsbyIterable.node (/home/circleci/project/packages/gatsby/src/datastore/common/iterable.ts:24:23)
    at Object.<anonymous>.next (<anonymous>)
    at addInferredType (/home/circleci/project/packages/gatsby/src/schema/infer/index.js:92:16)

Are there any worker threads involved?

We don't use worker threads ourselves, but I will check if there is anything hidden in deps (for example, I remember jest using child processes historically; they had a worker-threads variant via opt-in for a while and maybe they just made it the default 🤷).

I believe the root cause in this ticket originally was multiple LMDB database/env instances being created for the same database file; do you think that is possible in this situation? (The fix that was applied for this ticket was to check for an existing database based on the dev/inode of the file; I don't know if maybe that is unreliable on some OSes.)

Good question - the problem might be our unit test setup. I will poke around and try to see if running our tests results in multiple open() calls for the same file location.

We also do have our own wrapper to prevent multiple open dbs, so I will look into that as well.

Is it possible there are multiple versions of lmdb-js in this test?

Those unit tests shouldn't use different lmdb versions, but I will verify.

And I assume you don't have any way of reproducing this locally at this point, do you?

We don't have a reliable way to reproduce it; it is flaky, which leads me to believe there are timing conditions required to reproduce it that I don't understand yet.

@kriszyp (Owner) commented May 10, 2022

So to summarize my understanding of potential causes of bad reader slots:

  • If multiple database instances are opened for the same database file, one instance can clear reader slots that are in use by the other (causing the reader lock pid to be 0, as reported, and I think another process could then easily grab that slot, ending up with non-matching pids as reported in the other run). However, lmdb-js tries to track database instances to prevent this from happening (so you can safely call open() on the same database). Parcel's test had managed to subvert this by renaming the database while in use (hence the switch to dev/inode based matching, sketched after this list), but maybe there is still a lingering issue.
  • If file locks themselves aren't working, then the mutexes used by LMDB to safely update the reader slots (as well as handle write locks) could fail and (occasionally) end up with improperly shared slots, producing these types of errors. The LMDB docs do warn about that with remote file systems; I suppose the consideration here is whether circleci uses a remote file system? (It seems unlikely, but just throwing out possibilities.)
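
For the first point, the dev/inode-based matching amounts to something like the following (an illustrative sketch of the idea only, not the actual lmdb-js implementation):

import { statSync } from 'fs';
import { open } from 'lmdb';

// Key the cache of open environments by device + inode rather than by the
// path string, so './cache' and '/abs/path/cache' resolve to one instance.
const envsByFileId = new Map();

function openDeduped(path, options) {
  let fileId = path;
  try {
    const stat = statSync(path);
    fileId = `${stat.dev}:${stat.ino}`;
  } catch {
    // file does not exist yet; fall back to the path string
  }
  if (!envsByFileId.has(fileId)) {
    envsByFileId.set(fileId, open({ path, ...options }));
  }
  return envsByFileId.get(fileId);
}

(lmdb-js does this kind of deduplication internally; the sketch is only meant to show why path-string keys alone aren't enough.)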

@wardpeet

Unsure if this is useful, but this is another trace I got:

13:00:01 PM:
ERROR Invalid argument: The dbi 10 was out of range for the number of dbis (9)

13:00:01 PM:
  Error: Invalid argument: The dbi 10 was out of range for the number of dbis (9  )

13:00:01 PM:
13:00:01 PM:
  

13:00:01 PM:
  - read.js:144 LMDBStore.get

13:00:01 PM:
  - read.js:20 LMDBStore.getString

13:00:01 PM:

13:00:01 PM:
    [www]/[lmdb]/read.js:144:22

13:00:01 PM:
13:00:01 PM:
  - cache-lmdb.ts:69 GatsbyCacheLmdb.get
13:00:01 PM:
    [www]/[lmdb]/read.js:20:17
13:00:01 PM:
    [www]/[gatsby]/src/utils/cache-lmdb.ts:69:25

@kriszyp (Owner) commented May 12, 2022

I think this is a potentially useful clue. It is actually challenging to conceive of how this type of "out of range" error could occur; I have never seen it before, and the dbi range never decrements, so it doesn't immediately seem like it could be reduced to a range of 9 after previously being increased to 11+. However, I think this actually could be possible as a race condition of simultaneously opening databases from different threads using read-only transactions (which are used for opening dbs as of v2.1, for #100). And it appears this error log does show two interleaving errors (due to either two threads or two processes).

I believe that gatsby does not use threads (just child processes); however, I do believe that jest does use multiple threads by default (https://github.com/facebook/jest/blob/main/packages/jest-config/src/getMaxWorkers.ts#L28). And if this is thread-related, I think that makes sense, as I understand these errors are only occurring in gatsby's tests (and not actual production usage).

I believe I can address this particular "out of range" error, assuming my theory is correct. This doesn't address the bad reader slot error, but it is perhaps a good clue, as multiple threads certainly provide a more plausible situation for multiple database instances to be opened (although I'm still not sure exactly how).
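
For reference, the kind of race I have in mind looks roughly like this (a minimal sketch; whether this is what actually happens in gatsby's tests is unconfirmed):

import { Worker, isMainThread, threadId } from 'worker_threads';
import { fileURLToPath } from 'url';
import { open } from 'lmdb';

// The same root environment is opened in every thread; each thread then
// opens its own sub-database while other threads may be doing the same.
const root = open({ path: './race-db', maxDbs: 50 });

if (isMainThread) {
  // jest's default worker pool spawns several threads much like this.
  for (let i = 0; i < 4; i++) new Worker(fileURLToPath(import.meta.url));
} else {
  const db = root.openDB({ name: `db-${threadId}` });
  // A read txn begun before the openDB could now see a stale dbi count.
  db.get('anything');
}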

@kriszyp (Owner) commented May 18, 2022

Published version v2.4.0 with changes to address the dbi "out of range" error.

@wardpeet

I can still reproduce this error with:

lmdb@2.2.4, lmdb@2.3.10, lmdb@^2.0.2, lmdb@^2.2.6, lmdb@^2.4.2:
  version "2.4.2"
  resolved "https://registry.yarnpkg.com/lmdb/-/lmdb-2.4.2.tgz#eefd082ac3570bca88a8f149df12ea20fbf40b29"
  integrity sha512-dgqoGgHl/lzPobumxIsagVy7JXBAdFJv74avJTC733lb6d/RiCrzjm5YOUyCjhKnCNTNlNPGBP6/C1gZelUwlA==
  dependencies:
    msgpackr "^1.5.4"
    node-addon-api "^4.3.0"
    node-gyp-build-optional-packages "5.0.2"
    ordered-binary "^1.2.4"
    weak-lru-cache "^1.2.2"
  optionalDependencies:
    "@lmdb/lmdb-darwin-arm64" "2.4.0"
    "@lmdb/lmdb-darwin-x64" "2.4.0"
    "@lmdb/lmdb-linux-arm" "2.4.0"
    "@lmdb/lmdb-linux-arm64" "2.4.0"
    "@lmdb/lmdb-linux-x64" "2.4.0"
    "@lmdb/lmdb-win32-x64" "2.4.0"

I haven't gotten a small reproduction yet and I can only reproduce it on GCP Kubernetes.

key: 'transformer-remark-markdown-node-dependencies-294dca95-4202-5a3a-82c6-783147ed8679-gatsby-remark-responsive-iframegatsby-remark-autolink-headersgatsby-remark-code-titlesgatsby-remark-imagesgatsby-remark-prismjsgatsby-remark-smartypantsgatsby-remark-http-to-https-',
  err: Error: Invalid argument: The dbi 9 was out of range for the number of dbis (9)
      at LMDBStore.getString (/usr/src/app/www/node_modules/lmdb/read.js:19:17)
      at LMDBStore.get (/usr/src/app/www/node_modules/lmdb/read.js:143:22)
      at GatsbyCacheLmdb.get (/usr/src/app/www/node_modules/gatsby/src/utils/cache-lmdb.ts:70:4)
      at getCacheWithNodeDependencyValidation (/usr/src/app/www/node_modules/gatsby-transformer-remark/extend-node-type.js:193:40)
      at getHTML (/usr/src/app/www/node_modules/gatsby-transformer-remark/extend-node-type.js:443:32)
      at resolver (/usr/src/app/www/node_modules/gatsby-transformer-remark/extend-node-type.js:583:18)
      at wrappedTracingResolver (/usr/src/app/www/node_modules/gatsby/src/schema/resolvers.ts:683:20)
      at resolveField (/usr/src/app/www/node_modules/graphql/execution/execute.js:464:18)
      at executeFields (/usr/src/app/www/node_modules/graphql/execution/execute.js:292:18)
      at collectAndExecuteSubfields (/usr/src/app/www/node_modules/graphql/execution/execute.js:748:10)
      at completeObjectValue (/usr/src/app/www/node_modules/graphql/execution/execute.js:738:10)
      at completeValue (/usr/src/app/www/node_modules/graphql/execution/execute.js:590:12)
      at resolveField (/usr/src/app/www/node_modules/graphql/execution/execute.js:472:19)
      at executeFields (/usr/src/app/www/node_modules/graphql/execution/execute.js:292:18)
      at collectAndExecuteSubfields (/usr/src/app/www/node_modules/graphql/execution/execute.js:748:10)
      at completeObjectValue (/usr/src/app/www/node_modules/graphql/execution/execute.js:738:10) {
    code: 22

@kriszyp (Owner) commented May 23, 2022

You aren't by any chance using any readOnly databases, are you?

@wardpeet

No, we initialize multiple caches per plugin:
https://github.com/gatsbyjs/gatsby/blob/master/packages/gatsby/src/utils/cache-lmdb.ts

private static getStore(): RootDatabase {
    if (!GatsbyCacheLmdb.store) {
      GatsbyCacheLmdb.store = open({
        name: `root`,
        path: path.join(process.cwd(), `.cache/${cacheDbFile}`),
        compression: true,
        maxDbs: 200,
      })
    }
    return GatsbyCacheLmdb.store
  }

  private getDb(): Database {
    if (!this.db) {
      this.db = GatsbyCacheLmdb.getStore().openDB({
        name: this.name,
        encoding: this.encoding,
      })
    }
    return this.db
  }

We also have another database

function getRootDb(): RootDatabase {
  if (!rootDb) {
    if (!fullDbPath) {
      throw new Error(`LMDB path is not set!`)
    }

    if (!globalThis.__GATSBY_OPEN_ROOT_LMDBS) {
      globalThis.__GATSBY_OPEN_ROOT_LMDBS = new Map()
    }
    rootDb = globalThis.__GATSBY_OPEN_ROOT_LMDBS.get(fullDbPath)
    if (rootDb) {
      return rootDb
    }

    rootDb = open({
      name: `root`,
      path: fullDbPath,
      compression: true,
    })

    globalThis.__GATSBY_OPEN_ROOT_LMDBS.set(fullDbPath, rootDb)
  }
  return rootDb
}

function getDatabases(): ILmdbDatabases {
  if (!databases) {
    // __GATSBY_OPEN_LMDBS tracks if we already opened given db in this process
    // In `gatsby serve` case we might try to open it twice - once for engines
    // and second to get access to `SitePage` nodes (to power trailing slashes
    // redirect middleware). This ensure there is single instance within a process.
    // Using more instances seems to cause weird random errors.
    if (!globalThis.__GATSBY_OPEN_LMDBS) {
      globalThis.__GATSBY_OPEN_LMDBS = new Map()
    }
    databases = globalThis.__GATSBY_OPEN_LMDBS.get(fullDbPath)
    if (databases) {
      return databases
    }

    const rootDb = getRootDb()
    databases = {
      nodes: rootDb.openDB({
        name: `nodes`,
        // FIXME: sharedStructuresKey breaks tests - probably need some cleanup for it on DELETE_CACHE
        // sharedStructuresKey: Symbol.for(`structures`),
        // @ts-ignore
        cache: {
          // expirer: false disables LRU part and only take care of WeakRefs
          // this way we don't retain nodes strongly, but will continue to
          // reuse them if they are loaded already
          expirer: false,
        },
      }),
      nodesByType: rootDb.openDB({
        name: `nodesByType`,
        dupSort: true,
      }),
      metadata: rootDb.openDB({
        name: `metadata`,
        useVersions: true,
      }),
      indexes: rootDb.openDB({
        name: `indexes`,
        // TODO: use dupSort when this is ready: https://github.com/DoctorEvidence/lmdb-store/issues/66
        // dupSort: true
      }),
    }
    globalThis.__GATSBY_OPEN_LMDBS.set(fullDbPath, databases)
  }
  return databases
}

@kriszyp (Owner) commented May 24, 2022

I still don't have any ideas for how this could occur. I think this error would be the result of using an old read transaction to access a database that was opened after the read transaction had begun or renewed, but in lmdb-js read txns are reset after opening any database. However, I have added some more detailed information to the error messages in case that reveals anything about the state that I didn't expect. Let me know if you need a publish for that.

Also, just to confirm a couple of assumptions: if I understand correctly, you are using (multiple) worker threads, but not multiple processes. And I don't know if this is testable without threads, just in case that could be related?

@wardpeet

If you can publish a canary, I'm happy to test it, or if you have a guide on how to compile it myself that's helpful too.

@kriszyp (Owner) commented May 24, 2022

It's a safe, minor patch and version numbers are cheap, so I went ahead and published this as v2.4.3.

@kriszyp (Owner) commented May 24, 2022

And just FYI, as far as how to directly use the source/compile it: I think you would set a package version override and directly use a commit/branch, like "lmdb": "DoctorEvidence/lmdb-js#<commit-id>". With that, the install script for lmdb-js (assuming your CI runs npm install or something like that) should automatically detect the absence of prebuilds and try to build itself. However, this is indeed fragile; if anything is missing from the toolchain (including git, a C++ compiler, python, etc.) it will fail (the primary reason why we provide prebuilds). But it's easy enough to publish new versions.

kriszyp added a commit that referenced this issue May 29, 2022
@wardpeet commented May 30, 2022

Still getting an error with 1.4.5 - I disabled all child processes from spawning, so it's a single process.

13:51:31 PM:
  err: Error: Invalid argument: The dbi 10 was out of range for the number of dbis (txn: 9 id: 30, env: 9 txnid: 30)

13:51:31 PM:
not finished run static queries - 0.835s

13:51:31 PM:
    code: 22

13:51:31 PM:
      at runMicrotasks ()

13:51:31 PM:
  - read.js:143 LMDBStore.get

13:51:31 PM:
    [www]/[lmdb]/read.js:19:17

@kriszyp (Owner) commented May 30, 2022

I have been able to create a test case that reproduces this error. For me this occurs when there are a large number of async writes or async transactions running while opening a database (with the same root database). Does this sound like it could possibly be similar to your situation?
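
The reduced shape of that test case is roughly this (a sketch, not the exact test added to the repo):

import { open } from 'lmdb';

const root = open({ path: './stress-db', maxDbs: 50 });
const first = root.openDB({ name: 'first' });

// Queue a large number of async writes...
const writes = [];
for (let i = 0; i < 10000; i++) {
  writes.push(first.put(`key-${i}`, { i }));
}

// ...and open another database on the same root while they are in flight.
const second = root.openDB({ name: 'second' });
second.get('anything'); // reads here were where the dbi "out of range" error could surface

await Promise.all(writes);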

kriszyp added a commit that referenced this issue May 30, 2022
@wardpeet

Yeah, it could definitely be the case. We can open the cache earlier; right now we open it on first write.
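
Something like eagerly opening the per-plugin databases at bootstrap instead of lazily on the first write (a hypothetical sketch of a change on our side; the names and paths here are made up):

import { open } from 'lmdb';

// Hypothetical eager initialization: open the root and all known
// sub-databases up front, before any async writes start.
const store = open({ path: '.cache/caches-lmdb', compression: true, maxDbs: 200 });

const caches = new Map();

export function initCaches(pluginNames) {
  for (const name of pluginNames) {
    caches.set(name, store.openDB({ name, encoding: 'json' }));
  }
}

export function getCache(name) {
  // By the time writes happen, the db already exists, so no openDB call
  // races with in-flight async transactions.
  return caches.get(name);
}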

@wardpeet commented Jun 2, 2022

@kriszyp can I try your latest commit or isn't that the full fix yet?

@kriszyp (Owner) commented Jun 2, 2022

Sorry, I realize I have been slow with this: I felt like this change was a bit more than a patch release, since it involves some changes to the ordering of sync and async transactions, and I was working on putting together a v2.5 release for this that included some other work. Consequently, I don't think the commit will work on its own, but I will try to build/publish a v2.5-beta this morning, if it builds ok, and you can try that.

@wardpeet commented Jun 2, 2022

No need to apologize :) We're very grateful for your work here! 🥇

kriszyp added a commit that referenced this issue Jun 2, 2022
@kriszyp (Owner) commented Jun 9, 2022

I guess I updated the #153 ticket but not this one; the updates in v2.5.x will hopefully address this.

@Keller18306

2.6.8@data-v1 (when working with worker_threads)

Error: Invalid argument
    at LMDBStore.getStats (/home/admin/myrz/node_modules/lmdb/read.js:672:23)
    at LMDB.getTotalCount (/home/admin/myrz/src/lmdb/index-wrappered.ts:29:29)
    at /home/admin/myrz/src/routes/countLines.ts:12:39
    at _handler (/home/admin/myrz/src/http/index.ts:46:38)
    at asyncUtilWrap (/home/admin/myrz/node_modules/express-async-handler/index.js:3:20)
    at Layer.handle [as handle_request] (/home/admin/myrz/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/home/admin/myrz/node_modules/express/lib/router/index.js:328:13)
    at /home/admin/myrz/node_modules/express/lib/router/index.js:286:9
    at param (/home/admin/myrz/node_modules/express/lib/router/index.js:365:14)
    at param (/home/admin/myrz/node_modules/express/lib/router/index.js:376:14)

@Keller18306

I have tried to install a version greater than 2.6.9, but it fails to compile with an error:

root@srv1:/home/admin/myrz# npm install lmdb@2.7.1 -f --build-from-source --use_data_v1=true
Debugger attached.
npm WARN using --force Recommended protections disabled.
npm ERR! code 1
npm ERR! path /home/admin/myrz/node_modules/lmdb
npm ERR! command failed
npm ERR! command sh -c node-gyp-build-optional-packages
npm ERR! make: Entering directory '/home/admin/myrz/node_modules/lmdb/build'
npm ERR!   CXX(target) Release/obj.target/lmdb/src/lmdb-js.o
npm ERR!   CC(target) Release/obj.target/lmdb/dependencies/lmdb/libraries/liblmdb/midl.o
npm ERR!   CC(target) Release/obj.target/lmdb/dependencies/lmdb/libraries/liblmdb/chacha8.o
npm ERR!   CC(target) Release/obj.target/lmdb/dependencies/lz4/lib/lz4.o
npm ERR!   CXX(target) Release/obj.target/lmdb/src/writer.o
npm ERR!   CXX(target) Release/obj.target/lmdb/src/env.o
npm ERR!   CXX(target) Release/obj.target/lmdb/src/compression.o
npm ERR!   CXX(target) Release/obj.target/lmdb/src/ordered-binary.o
npm ERR!   CXX(target) Release/obj.target/lmdb/src/misc.o
npm ERR! make: Leaving directory '/home/admin/myrz/node_modules/lmdb/build'
npm ERR! Debugger attached.
npm ERR! Debugger attached.
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@9.4.0
npm ERR! gyp info using node@18.17.1 | linux | x64
npm ERR! gyp info find Python using Python version 3.11.2 found at "/usr/bin/python3"
npm ERR! gyp info spawn /usr/bin/python3
npm ERR! gyp info spawn args [
npm ERR! gyp info spawn args   '/usr/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
npm ERR! gyp info spawn args   'binding.gyp',
npm ERR! gyp info spawn args   '-f',
npm ERR! gyp info spawn args   'make',
npm ERR! gyp info spawn args   '-I',
npm ERR! gyp info spawn args   '/home/admin/myrz/node_modules/lmdb/build/config.gypi',
npm ERR! gyp info spawn args   '-I',
npm ERR! gyp info spawn args   '/usr/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
npm ERR! gyp info spawn args   '-I',
npm ERR! gyp info spawn args   '/root/.cache/node-gyp/18.17.1/include/node/common.gypi',
npm ERR! gyp info spawn args   '-Dlibrary=shared_library',
npm ERR! gyp info spawn args   '-Dvisibility=default',
npm ERR! gyp info spawn args   '-Dnode_root_dir=/root/.cache/node-gyp/18.17.1',
npm ERR! gyp info spawn args   '-Dnode_gyp_dir=/usr/lib/node_modules/npm/node_modules/node-gyp',
npm ERR! gyp info spawn args   '-Dnode_lib_file=/root/.cache/node-gyp/18.17.1/<(target_arch)/node.lib',
npm ERR! gyp info spawn args   '-Dmodule_root_dir=/home/admin/myrz/node_modules/lmdb',
npm ERR! gyp info spawn args   '-Dnode_engine=v8',
npm ERR! gyp info spawn args   '--depth=.',
npm ERR! gyp info spawn args   '--no-parallel',
npm ERR! gyp info spawn args   '--generator-output',
npm ERR! gyp info spawn args   'build',
npm ERR! gyp info spawn args   '-Goutput_dir=.'
npm ERR! gyp info spawn args ]
npm ERR! Debugger attached.
npm ERR! Waiting for the debugger to disconnect...
npm ERR! gyp info spawn make
npm ERR! gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
npm ERR! ../src/writer.cpp: In member function ‘void WriteWorker::Write()’:
npm ERR! ../src/writer.cpp:380:13: warning: unused variable ‘retries’ [-Wunused-variable]
npm ERR!   380 |         int retries = 0;
npm ERR!       |             ^~~~~~~
npm ERR! ../src/writer.cpp:381:9: warning: label ‘retry’ defined but not used [-Wunused-label]
npm ERR!   381 |         retry:
npm ERR!       |         ^~~~~
npm ERR! ../src/writer.cpp: In member function ‘Napi::Value EnvWrap::startWriting(const Napi::CallbackInfo&)’:
npm ERR! ../src/writer.cpp:500:21: warning: variable ‘status’ set but not used [-Wunused-but-set-variable]
npm ERR!   500 |         napi_status status;
npm ERR!       |                     ^~~~~~
npm ERR! ../src/writer.cpp: In member function ‘int WriteWorker::WaitForCallbacks(MDB_txn**, bool, uint32_t*)’:
npm ERR! ../src/writer.cpp:117:18: warning: ‘start’ may be used uninitialized [-Wmaybe-uninitialized]
npm ERR!   117 |         uint64_t start;
npm ERR!       |                  ^~~~~
npm ERR! ../src/env.cpp: In member function ‘virtual void SyncWorker::Execute()’:
npm ERR! ../src/env.cpp:151:21: warning: unused variable ‘retries’ [-Wunused-variable]
npm ERR!   151 |                 int retries = 0;
npm ERR!       |                     ^~~~~~~
npm ERR! ../src/env.cpp:152:17: warning: label ‘retry’ defined but not used [-Wunused-label]
npm ERR!   152 |                 retry:
npm ERR!       |                 ^~~~~
npm ERR! ../src/env.cpp: In member function ‘int EnvWrap::openEnv(int, int, const char*, char*, Compression*, int, int, mdb_size_t, int, char*)’:
npm ERR! ../src/env.cpp:312:25: warning: variable ‘enckey’ set but not used [-Wunused-but-set-variable]
npm ERR!   312 |                 MDB_val enckey;
npm ERR!       |                         ^~~~~~
npm ERR! ../src/env.cpp: In function ‘napi_value__* getSharedBuffer(napi_env, napi_callback_info)’:
npm ERR! ../src/env.cpp:500:72: warning: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘size_t’ {aka ‘long unsigned int’} [-Wformat=]
npm ERR!   500 |                 fprintf(stderr, "Getting invalid shared buffer size %llu from start: %llu to %end: %llu", size, start, end);
npm ERR!       |                                                                     ~~~^                                  ~~~~
npm ERR!       |                                                                        |                                  |
npm ERR!       |                                                                        long long unsigned int             size_t {aka long unsigned int}
npm ERR!       |                                                                     %lu
npm ERR! ../src/env.cpp:500:89: warning: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 4 has type ‘char*’ [-Wformat=]
npm ERR!   500 |                 fprintf(stderr, "Getting invalid shared buffer size %llu from start: %llu to %end: %llu", size, start, end);
npm ERR!       |                                                                                      ~~~^                       ~~~~~
npm ERR!       |                                                                                         |                       |
npm ERR!       |                                                                                         long long unsigned int  char*
npm ERR!       |                                                                                      %s
npm ERR! ../src/env.cpp:500:95: warning: format ‘%e’ expects argument of type ‘double’, but argument 5 has type ‘char*’ [-Wformat=]
npm ERR!   500 |                 fprintf(stderr, "Getting invalid shared buffer size %llu from start: %llu to %end: %llu", size, start, end);
npm ERR!       |                                                                                              ~^                        ~~~
npm ERR!       |                                                                                               |                        |
npm ERR!       |                                                                                               double                   char*
npm ERR!       |                                                                                              %s
npm ERR! ../src/env.cpp:500:103: warning: format ‘%llu’ expects a matching ‘long long unsigned int’ argument [-Wformat=]
npm ERR!   500 |                 fprintf(stderr, "Getting invalid shared buffer size %llu from start: %llu to %end: %llu", size, start, end);
npm ERR!       |                                                                                                    ~~~^
npm ERR!       |                                                                                                       |
npm ERR!       |                                                                                                       long long unsigned int
npm ERR! In file included from ../src/misc.cpp:5:
npm ERR! /root/.cache/node-gyp/18.17.1/include/node/node_version.h:96: warning: "NAPI_VERSION" redefined
npm ERR!    96 | #define NAPI_VERSION 9
npm ERR!       | 
npm ERR! In file included from /root/.cache/node-gyp/18.17.1/include/node/node_api.h:12,
npm ERR!                  from ../../node-addon-api/napi.h:4,
npm ERR!                  from ../src/lmdb-js.h:8,
npm ERR!                  from ../src/misc.cpp:1:
npm ERR! /root/.cache/node-gyp/18.17.1/include/node/js_native_api.h:20: note: this is the location of the previous definition
npm ERR!    20 | #define NAPI_VERSION 8
npm ERR!       | 
npm ERR! ../src/misc.cpp: In member function ‘virtual void ReadWorker::Execute()’:
npm ERR! ../src/misc.cpp:205:34: warning: unused variable ‘env’ [-Wunused-variable]
npm ERR!   205 |                         MDB_env* env = mdb_txn_env(txn);
npm ERR!       |                                  ^~~
npm ERR! ../src/misc.cpp: In member function ‘virtual void ReadWorker::OnOK()’:
npm ERR! ../src/misc.cpp:229:27: warning: unused variable ‘gets’ [-Wunused-variable]
npm ERR!   229 |                 uint32_t* gets = start;
npm ERR!       |                           ^~~~
npm ERR! ../src/misc.cpp: In function ‘void do_read(napi_env, void*)’:
npm ERR! ../src/misc.cpp:303:50: error: ‘MDB_REMAP_CHUNKS’ was not declared in this scope
npm ERR!   303 |         if (data.mv_size > 4096 && !(env_flags & MDB_REMAP_CHUNKS)) {
npm ERR!       |                                                  ^~~~~~~~~~~~~~~~
npm ERR! ../src/misc.cpp: At global scope:
npm ERR! ../src/misc.cpp:250:12: warning: ‘next_buffer_id’ defined but not used [-Wunused-variable]
npm ERR!   250 | static int next_buffer_id = -1;
npm ERR!       |            ^~~~~~~~~~~~~~
npm ERR! make: *** [lmdb.target.mk:156: Release/obj.target/lmdb/src/misc.o] Error 1
npm ERR! gyp ERR! build error 
npm ERR! gyp ERR! stack Error: `make` failed with exit code: 2
npm ERR! gyp ERR! stack     at ChildProcess.onExit (/usr/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:203:23)
npm ERR! gyp ERR! stack     at ChildProcess.emit (node:events:514:28)
npm ERR! gyp ERR! stack     at ChildProcess._handle.onexit (node:internal/child_process:291:12)
npm ERR! gyp ERR! System Linux 5.15.74-1-pve
npm ERR! gyp ERR! command "/usr/bin/node" "/usr/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
npm ERR! gyp ERR! cwd /home/admin/myrz/node_modules/lmdb
npm ERR! gyp ERR! node -v v18.17.1
npm ERR! gyp ERR! node-gyp -v v9.4.0
npm ERR! gyp ERR! not ok 
npm ERR! Waiting for the debugger to disconnect...
npm ERR! Waiting for the debugger to disconnect...

@Keller18306

I cannot use v2, because the old database is too large. But... how can I migrate it to v2?

root@srv1:/home/admin/myrz# npm install lmdb@2.8.5 -f --build-from-source
Debugger attached.
npm WARN using --force Recommended protections disabled.

added 2 packages, changed 7 packages, and audited 139 packages in 28s

8 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
Waiting for the debugger to disconnect...
root@srv1:/home/admin/myrz# ts-node .
Debugger attached.
/home/admin/myrz/node_modules/lmdb/open.js:139
        let rc = env.open(options, flags, jsFlags);
              ^
Error: MDB_INVALID: File is not an LMDB file
    at Object.open (/home/admin/myrz/node_modules/lmdb/open.js:139:15)
    at LMDB.createDb (/home/admin/myrz/src/lmdb/index-wrappered.ts:175:26)
    at new LMDB (/home/admin/myrz/src/lmdb/index-wrappered.ts:21:28)
    at Object.<anonymous> (/home/admin/myrz/src/routes/countLines.ts:5:14)
    at Module._compile (node:internal/modules/cjs/loader:1256:14)
    at Module.m._compile (/usr/lib/node_modules/ts-node/src/index.ts:1618:23)
    at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
    at Object.require.extensions.<computed> [as .ts] (/usr/lib/node_modules/ts-node/src/index.ts:1621:12)
    at Module.load (node:internal/modules/cjs/loader:1119:32)
    at Function.Module._load (node:internal/modules/cjs/loader:960:12) {
  code: -30793
}
Waiting for the debugger to disconnect...

@kriszyp (Owner) commented Sep 4, 2023

The referenced commit should fix the compilation for lmdb v1. With that, you should be able to write a script that reads with one version of LMDB and then writes the data using the newer version of LMDB to migrate the data. Let me know if you need a publish.
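
For example, something along these lines, assuming the old build can be installed under a separate npm alias (the lmdb-old name here is hypothetical) so both versions can be loaded in one process; otherwise the read and write halves can be split into two scripts with an intermediate dump file:

import { open as openOld } from 'lmdb-old'; // hypothetical npm alias for the old lmdb build
import { open as openNew } from 'lmdb';     // the current lmdb version

const oldDb = openOld({ path: './data-old' });
const newDb = openNew({ path: './data-new' });

// Copy every entry; getRange() with no options iterates the whole database.
let batch = [];
for (const { key, value } of oldDb.getRange()) {
  batch.push(newDb.put(key, value));
  if (batch.length >= 1000) {   // avoid holding too many pending writes at once
    await Promise.all(batch);
    batch = [];
  }
}
await Promise.all(batch);
await oldDb.close();
await newDb.close();

Once the copy completes and has been spot-checked, the old data directory can be retired.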
