
Intermittent failures of CLI commands for large schemas (M2 Mac - 4.10.1) #17869

Open
zackdotcomputer opened this issue Feb 10, 2023 · 41 comments
Labels
bug/1-unconfirmed Bug should have enough information for reproduction, but confirmation has not happened yet. kind/bug A reported bug. team/client Issue for team Client. team/schema Issue for team Schema. topic: apple silicon topic: arm topic: cli topic: large schema topic: macos topic: wasm

Comments

@zackdotcomputer

zackdotcomputer commented Feb 10, 2023

Bug description

When running prisma CLI commands via an npm script, I receive intermittent fatal errors:

> prisma migrate deploy

assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 sh: line 1: 61824 Abort trap: 6           npm run prisma:migrate

or

> prisma generate

Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 sh: line 1: 62099 Abort trap: 6           npm run prisma:generate

If I rm -rf the client directory and downgrade back to 4.9.0, they stop occurring.

How to reproduce

  • On an M2 Macbook Pro
  • Starting with the https://github.com/prisma/prisma-examples/tree/latest/typescript/rest-nextjs-api-routes example project
  • I copied over the existing schema from the company project (sorry - it's proprietary, but I can describe its qualities below)
  • Confirmed it failed
  • Simplified the schema
  • Confirmed it failed a lower percentage of the time

It seems to be related to:

  1. The use of extendedWhereUnique (Preview feature feedback: Extended where unique  #15837) - disabling this dropped the failure rate dramatically
  2. The number of models - the schema I'm using is 1000+ lines long and includes 54 interrelated models. Dropping it even just to 50 models caused the failure rate to fall from ~90% to ~50%.
  3. Prisma 4.10 - reverting to Prisma 4.9 (and deleting the codegen'd client, since that contains the new apple chip Rust binary) caused the failure rate to return to zero.

Expected behavior

I would expect the command to predictably succeed or fail.

I would expect the command to succeed if there are no errors in the schema.

Prisma information

generator client {
  provider        = "prisma-client-js"
  previewFeatures = ["extendedWhereUnique"]
}

datasource db {
  provider = "postgresql"
  url      = "postgresql://postgres:password@127.0.0.1:5434/addition_wealth_test"
}

// Apologies but I cannot share the full prisma.schema - about 1k lines of company code follows

Environment & setup

  • OS: Mac OS Ventura 13.2
  • Database: PostgreSQL
  • Node.js version: 18.13.0

Prisma Version

Environment variables loaded from .env
prisma                  : 4.10.1
@prisma/client          : 4.10.1
Current platform        : darwin
Query Engine (Node-API) : libquery-engine aead147aa326ccb985dcfed5b065b4fdabd44b19 (at node_modules/@prisma/engines/libquery_engine-darwin.dylib.node)
Migration Engine        : migration-engine-cli aead147aa326ccb985dcfed5b065b4fdabd44b19 (at node_modules/@prisma/engines/migration-engine-darwin)
Format Wasm             : @prisma/prisma-fmt-wasm 4.10.1-1.80b351cc7c06d352abe81be19b8a89e9c6b7c110
Default Engines Hash    : aead147aa326ccb985dcfed5b065b4fdabd44b19
Studio                  : 0.481.0
Preview Features        : extendedWhereUnique
@zackdotcomputer zackdotcomputer added the kind/bug A reported bug. label Feb 10, 2023
@janpio
Member

janpio commented Feb 10, 2023

We had another report of this error message via Slack today. I pointed that user here and hope they will comment with a description of their environment and if their experience matches what was happening for you.

Initially, this looks weird, as we do not have any C code in our project. Googling for the first line of the error message returns these 2 random issues from other projects: PowerShell/PowerShell#17655 + ethereum/solidity#13523. I see no overlap with what you describe here though, so I am a bit clueless.

@janpio
Member

janpio commented Feb 13, 2023

The user from Slack provided some more environment information:

OS: Mac OS Ventura 13.1
Database: MySQL
Node.js version: 18.12.1
CPU: M1Pro

They also included this link: quarto-dev/quarto-cli#2296. There, a similar error message was caused by a project using a Rust launcher - and they fixed it by reverting to a bash script.

@janpio
Member

janpio commented Feb 13, 2023

@zackdotcomputer Can you maybe set DEBUG=* as an env var and see if you can reproduce it while that is on?

With that output we could maybe pinpoint which part of our code throws this - we have 2 big Rust blocks (one via a Wasm module, the other via a Node-API library or binary) - and knowing which one would be super useful. Thanks!
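For reference, a run with that env var set could look like this (the npm script name is just the one from the report above):

# enable all debug namespaces for a single invocation
DEBUG="*" npm run prisma:generate
# or directly against the CLI
DEBUG="*" npx prisma validate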

@janpio janpio added bug/1-unconfirmed Bug should have enough information for reproduction, but confirmation has not happened yet. topic: cli topic: apple silicon topic: macos team/schema Issue for team Schema. team/client Issue for team Client. labels Feb 13, 2023
@zackdotcomputer
Author

Sure thing @janpio - when I get into work I will gather that info for you.

@net-tech

net-tech commented Feb 13, 2023

I'm having this same issue in a monorepo, but only sometimes when running prisma generate. I have 15 models. There's about a 50/50 chance of it occurring for me.

Prisma: v4.10.1
(Also on an M2 Mac) macOS: Ventura 13.2
NodeJS: v18.14.0

@zackdotcomputer
Author

@janpio here's the full printout from that run:

> addition-api@0.1.0 prisma:generate
> prisma generate

  prisma:engines  binaries to download libquery-engine, migration-engine +0ms
  prisma:loadEnv  project root found at /Users/zack/code/addition/api/package.json +0ms
  prisma:tryLoadEnv  Environment variables loaded from /Users/zack/code/addition/api/.env +0ms
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
  prisma:getConfig  Using getConfig Wasm +0ms
assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 [1]    84017 abort      npm run prisma:generate

@zackdotcomputer
Author

Following up - based on the theory that it might be getConfig I tried some other commands in the CLI, but I got the same failure inside of a different Wasm binary:

$ npx prisma validate
  prisma:engines  binaries to download libquery-engine, migration-engine +0ms
  prisma:loadEnv  project root found at /Users/zack/code/addition/api/package.json +0ms
  prisma:tryLoadEnv  Environment variables loaded from /Users/zack/code/addition/api/.env +0ms
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
  prisma:getDMMF  Using getDmmf Wasm +0ms
  prisma:getDMMF  Using given datamodel +1ms
assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 [1]    86360 abort      DEBUG=* npx prisma validate

@zackdotcomputer
Author

Ok diving deeper - sorry for multiple posts, but figured that was the best way to catalog what I find for the team - I was able to use the above stack traces and some console debug logs to pinpoint the last TS line touched before the crashes. In both cases it was a call to prismaFmt. In one case, this call and in another case this one. In the compiled code, you can even follow these a couple steps deeper into the JS bootstrap code and find the precise line that is crashing is the call to the WasmInstance object for get_dmmf or get_config (or some other rare third crash I haven't pinpointed yet because it doesn't have any meaningful debug logs).

So with high confidence I can guess that something in the instantiation or the running of the WASM instance is causing a crash here. However, we're at the edge of my ability to dive deeper, so I'll have to hand this off to you @janpio.

If you can track down the underlying issue (which, I assume, is actually with Rust or wasm-bindgen) and get it fixed with the upstream provider, then that is great! If not, then perhaps we could at least get a quick fix to disable use of the WASM binaries for the getDmmf and getConfig steps on M1/M2 chips for now?

@zackdotcomputer
Author

I THINK I SOLVED IT!

Looking through the issues you posted above @janpio, I noticed that the Quarto issue suggested that having the Rosetta x64 -> aarch64 translation layer inside your build stack seemed to be related to the crash.

I also looked at Apple's Console.app and noticed that it had been capturing all of the crashes and, according to it, libRosetta was a parent task for the crash, which implied that something in my stack was still in x64.

I ran node -e 'console.log(process.arch)' and confirmed that node itself was running in an x64 environment! (On an M1/2 chip it should print out arm64.)

I wiped out node from my system (remove from brew, remove manually installed, remove installed via nvm) and reinstalled (in my case from NVM) being sure that it selected the arm64 binary. I then wiped out node_modules so that Prisma (and all other modules with compiled engines) would need to redownload the right engine. One quick reinstall later, I have been able to run the problematic calls ~20 times in a row without a crash. As an added bonus, they're also about 40% faster now!

I'll leave this open in case there's something you want to investigate still, but on my end I think the issue is resolved.
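For anyone landing here later, a condensed sketch of that diagnosis and cleanup (assuming nvm and Node 18.x - adjust the version to your own setup, and remove any brew/manually installed copies of node as well):

# 1. Check which architecture Node reports - on Apple Silicon this should print arm64
node -e 'console.log(process.arch)'

# 2. If it prints x64, reinstall Node from a native (non-Rosetta) terminal
nvm uninstall 18.13
nvm install 18.13

# 3. Wipe node_modules so Prisma re-downloads engine binaries for the right architecture
rm -rf node_modules
npm install

# 4. Verify
node -e 'console.log(process.arch)'   # should now print arm64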

@janpio
Member

janpio commented Feb 13, 2023

Thanks for doing the investigation for us @zackdotcomputer :D

Both getConfig and getDmmf are handled by the same module @prisma/prisma-fmt-wasm, so what you found makes sense. We also just recently migrated getDmmf to that in 4.10.0, so it makes sense that this happens more often than before.

I ran node -e 'console.log(process.arch)' and confirmed that node itself was running in an x64 environment! (On an M1/2 chip it should print out arm64.)

I wiped out node from my system (remove from brew, remove manually installed, remove installed via nvm) and reinstalled (in my case from NVM) being sure that it selected the arm64 binary.

Does that mean that node -e 'console.log(process.arch)' would not output the correct arm64?

In quarto-dev/quarto-cli#2420 (comment), a sub-issue of the linked Quarto one, there is a discussion around uname returning the wrong value but sysctl returning the correct one. That could be relevant.

@zackdotcomputer
Author

Yeah @janpio - asking Node to print out its architecture showed that it was running in an emulated x64 arch, not the arm64 one. I think we can close this one now. I'll just leave that tip for anyone who runs into this in the future since we now easily top the Google search results for this error 😅

@janpio
Member

janpio commented Feb 13, 2023

Considering we have 2 more people that reported this problem, I'll reopen this and see if they can confirm that your fix actually works for them.

We at Prisma might also want to catch this case ourselves - assuming it is reproducible (which requires us to get to the crash ourselves by somehow installing Node the way you automatically already had it. 🤔 )

@janpio janpio reopened this Feb 13, 2023
@net-tech

Another way I think this may happen is if you somehow have two of the same node versions installed (which is somehow the case for me).

iTerm2 output

$ ~: node -v
v18.14.0
$ ~: node -e 'console.log(process.arch)'
arm64

Output in VSCode in my project

$ kiai: node -v
v18.14.0
$ kiai: node -e 'console.log(process.arch)'
x64

@zackdotcomputer
Author

Interesting @net-tech - I wonder if that's actually one install but it's a "universal binary" which contains both architectures? Could you run which node in each environment and see if they're pointing to different files on disk?
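You can also check whether the binary itself is single-arch or universal - a quick sketch (the example output is just illustrative):

# inspect the binary that `which node` resolves to
file "$(which node)"
# e.g. "Mach-O 64-bit executable arm64"
# vs.  "Mach-O universal binary with 2 architectures"

# lipo can list the architecture slices as well
lipo -archs "$(which node)"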

@net-tech

net-tech commented Feb 13, 2023

Same files on the disk it seems

VSCode

$ kiai: which node
/usr/local/bin/node

iTerm2

$ ~: which node
/usr/local/bin/node

Really wish someone made a node uninstaller that uninstalled node from every corner of your computer. I'm not sure how I installed node back in the day, but I have tons of versions, some installed with npm, some with n (the node version manager).

@zackdotcomputer
Author

zackdotcomputer commented Feb 13, 2023

@janpio I was able to go back to having the issue using the following steps, which someone on the Prisma team could use to reproduce:

  1. Run arch -x86_64 bash to switch to bash in x64 mode.
  2. Install Node in this shell using nvm. It needs to be a different version than is already installed on your system, in my case I am using 18.14 normally, so I ran nvm install 18.13. By running this in the x64 shell, nvm will detect that and install an x64 version of node.
  3. You can now close the x64 terminal. Switching to your "tainted" version of node in nvm will run in x64 mode no matter where you run it. You can verify this by switching to it and running the above arch-print command.
  4. Delete your node_modules folder and reinstall while using tainted node. This is to clear out any arm64 binaries Prisma has downloaded.
  5. Make a package.json script to run prisma validate so you're sure to use the local copy of prisma and don't get tripped up by a global or npx cached copy of it.
  6. Repeatedly spam validate until it fails. For my ~1k line schema with several experimental features enabled, it fails about 1 in 4 times.

Or, as @net-tech has shown, apparently if you install the universal binary from Node's website, it will just reflect the environment of the host program. This might allow you to switch into x64 node just using the arch command above but I haven't tested that.
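The same steps as a rough shell transcript (the version numbers and the prisma:validate script name are just the ones from my setup):

# open an x86_64 shell under Rosetta
arch -x86_64 bash

# inside that shell, nvm detects x64 and installs an x64 build of Node
nvm install 18.13
node -e 'console.log(process.arch)'    # prints x64 - this version stays "tainted"

# back in a normal terminal, with the tainted version selected
nvm use 18.13
rm -rf node_modules && npm install     # clears the arm64 engine binaries Prisma had downloaded

# spam the command until it crashes (~1 in 4 runs for my schema)
npm run prisma:validate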

@janpio
Member

janpio commented Feb 17, 2023

Thanks. With the description we understand what is happening - it's quite obviously the "wrong" node running - but we are not super sure what we should do about it really. Should we try to educate you? It seems that usually this does not cause any problems to you - just in the case of Rust based binaries it sometimes leads to this weird error message :/

@zackdotcomputer
Author

Yeah agreed that this is not really Prisma's bug - it's a user confusion issue or an Apple migration issue, combined with an unfortunate interaction between Rust and the x64 Rosetta emulator. If you can think of a way to easily notify the user of this, then I think being able to intercept and warn about it would be helpful education for users. My first-thought ideas for how to detect this would be:

  1. process.arch not equal to the return value from the command arch
  2. Sysctl machdep.cpu.brand_string contains Apple but process.arch is not arm64

Ultimately, though, I suspect that finding this thread via Google will help educate as well, and so the impact and lift of making this change is probably pretty low. If you want to mark this as resolved, I think that would be a reasonable prioritization.
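A rough sketch of check no. 2 (the sysctl key is the standard macOS one; the warning text is just illustrative):

#!/usr/bin/env bash
# warn when Node reports a non-arm64 arch on an Apple Silicon machine
brand="$(sysctl -n machdep.cpu.brand_string 2>/dev/null)"
node_arch="$(node -e 'console.log(process.arch)')"

if [[ "$brand" == *Apple* && "$node_arch" != "arm64" ]]; then
  echo "warning: Apple Silicon CPU detected ($brand) but Node reports $node_arch"
  echo "Node is probably running under Rosetta - reinstall an arm64 build of Node"
fi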

@victorhs98

Had the same issue on an M1 Mac; fixed it by changing the Node version to 19.7.0. The next time I ran it, it auto-downloaded ARM-specific versions of some files.

@a-eid

a-eid commented Mar 14, 2023

@victorhs98 for some reason it does not work for me when using 19.7.0

@elie222

elie222 commented Mar 17, 2023

I also had this issue on an M2 Mac. The problem was that node was on x64. If you install it again via npm it should be on arm64.

Check with this command:

node -e 'console.log(process.arch)'

@vertis

vertis commented Mar 28, 2023

Migrated from a previous Intel mac with homebrew -> fish -> asdf -> node and was running into this issue. Going and reinstalling everything to be arm64 solved the issue.

I don't think this is necessarily the responsibility of prisma, just a gotcha for M1/M2 mac owners that upgrade. The error message is particularly unfriendly, but the solution is to make sure this comes up in search and then people will discover the need to fix their node install (with dependencies).

@rskvazh

rskvazh commented Apr 17, 2023

I also had this issue on an M2 Mac in a Docker container with "Use Rosetta for x86/amd64 emulation on Apple Silicon" turned on. It repeats every 4-5 Docker builds of a Node.js project for the amd64 arch (Lambda). Without Rosetta, Docker is very slow on my Mac :(

@DaichiShirakawa

DaichiShirakawa commented May 1, 2023

Hi guys, I resolved this problem by re-installing arm64 node via nodenv!
Thank you <3

# Change my terminal to arm64
$ arch -arm64e /bin/zsh  
$ uname -mp
arm64 arm

# Re-install my node on darwin-arm64 mode
$ nodenv uninstall 18.15.0
$ nodenv install 18.15.0
Downloading node-v18.15.0-darwin-arm64.tar.gz...
...

# Check node arch
$ node -e 'console.log(process.arch)'
arm64

# Re-install node_modules
$ rm -rf ./node_modules
$ npm i

# After that my Prisma commands succeeds 100%!

@andyjy
Contributor

andyjy commented May 25, 2023

I ran node -e 'console.log(process.arch)' and confirmed that node itself was running in an x64 environment! (On an M1/2 chip it should print out arm64.)

Big thanks for the diagnosis @zackdotcomputer - gave me the immediate solution after I hit this while updating from Node 16 -> 18. Turns out the root cause was I somehow had the x86 version of VSCode installed on my M1 Mac 🤦‍♂️ - which in turn led to having the x86 version of node installed when I used nvm within a VSCode terminal.
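A quick way to spot this case from inside the VS Code integrated terminal (the app path below is the default install location):

# what the integrated terminal itself reports - x86_64 means everything it spawns runs under Rosetta
uname -m
node -e 'console.log(process.arch)'

# check the architecture of the installed VS Code binary itself
file "/Applications/Visual Studio Code.app/Contents/MacOS/Electron"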

@yss14

yss14 commented Jul 18, 2023

@rskvazh Did you find any solution or workaround? Having the same problem on M1 when building docker images with "Use Rosetta for x86/amd64 emulation on Apple Silicon" enabled and passing the --platform linux/amd64 flag to the docker build command.
We recently upgraded our docker images from FROM node:14-slim to FROM node:18-slim and upgraded prisma from 4.9.0 to 4.15.0.
Calls to prisma generate randomly fail (at least 1 out of 4 calls). Running node -e 'console.log(process.arch)' inside the docker image correctly outputs x64 and prisma itself seems to have the correct binaries installed.

libquery_engine-debian-openssl-3.0.x.so.node
migration-engine-debian-openssl-3.0.x

@remorses

You can fix this issue when using Docker on Apple Silicon by disabling this feature in Docker Desktop:
[Screenshot: Docker Desktop settings with "Use Rosetta for x86/amd64 emulation on Apple Silicon" unchecked]

@MincePie

MincePie commented Aug 5, 2023

I'm having the same problem (mac m2), with a single version of node in use.

which node
/usr/local/opt/node@18/bin/node
mem@MacBook-Pro lnmv2 % node -v
v18.14.1
mem@MacBook-Pro lnmv2 %

node -e 'console.log(process.arch)'
x64
arch
i386

I have deleted node_modules and .next and reinstalled.

My error message is:

assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)

I removed and reinstalled node (brew) but can't find the flag setting to make it use arm. I have tried installing a separate version of homebrew that accepts arm. I am stuck at that step because it needs Rosetta 2 to be installed before node (with arm) can be installed. When I try to install Rosetta 2, I get an error that says Installing Rosetta 2 on this system is not supported. The help on this error says that I can solve this by unchecking the box in the terminal application that uses Rosetta 2. I've done this, and restarted VSCode, and still can't install Rosetta.

I tried adding binary targets to the generator as follows, but I still can't get prisma to work on my M2 Mac:

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native", "darwin-arm64"]
}

thank you

@bodinsamuel

Just for people looking for a solution:
if you are using iTerm, you might be running it under Rosetta, and that can create issues. After searching, I came up with this:

  • go to Applications
  • duplicate "iTerm.app"
  • Right Click > Get Info
  • Make sure "open with rosetta" is unchecked
  • Open this iterm
  • $ arch should output arm64
  • $ node -e 'console.log(process.arch)' should output arm64
    • If not, reinstall node (nvm un/install)
  • Should be better now

@swalahamani

swalahamani commented Oct 31, 2023

I THINK I SOLVED IT!

Looking through the issues you posted above @janpio, I noticed that the Quarto issue referenced having the Rosetta x64 -> aarch64 translation layer inside of your build stack seemed to be related to the crash.

I also looked at Apple's Console.app and noticed that it had been capturing all of the crashes and, according to it, libRosetta was a parent task for the crash, which implied that something in my stack was still in x64.

I ran node -e 'console.log(process.arch)' and confirmed that node itself was running in an x64 environment! (On an M1/2 chip it should print out arm64.)

I wiped out node from my system (remove from brew, remove manually installed, remove installed via nvm) and reinstalled (in my case from NVM) being sure that it selected the arm64 binary. I then wiped out node_modules so that Prisma (and all other modules with compiled engines) would need to redownload the right engine. One quick reinstall later, I have been able to run the problematic calls ~20 times in a row without a crash. As an added bonus, they're also about 40% faster now!

I'll leave this open in case there's something you want to investigate still, but on my end I think the issue is resolved.

Thanks @zackdotcomputer, this resolved the issue!

@adarnon

adarnon commented Nov 15, 2023

This issue is happening to me when I'm trying to cross-compile a Dockerfile that builds a Node project with Prisma (Next.js).
It seems like the problem is really Prisma's problem - cross-compilation on M1 with Rosetta for x64 fails consistently.


@Ebiam

Ebiam commented Nov 20, 2023

I hope this can help someone:

I had the same error on my macbook pro using the M1 max.

I tried to add these two fields in my model:

fidelity_activated Boolean @default(false)
game_activated Boolean @default(false)

Then when I did npx prisma migrate dev, the migration applied correctly.
But when it tried to regenerate the client, or if I called npx prisma generate after, I got the same error:

Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
assertion failed [block != nullptr]: BasicBlock requested for unrecognized address
(BuilderBase.h:550 block_for_offset)
 [1]    37991 abort      npx prisma generate

To quick fix this:

  1. I did a migration to remove the fields. It worked.
  2. I modified the original migration, renaming the fields to shorter names, so that when I use this migration it won't happen again.
  3. I did a final migration where I added the fields again with shorter names:
fidelity_enabled Boolean @default(false)
game_enabled Boolean @default(false)

And it worked !

@travelerr

Confirmed, also ran into this on the Apple M3 Pro chip. @zackdotcomputer's fix worked:
#17869 (comment)

Good link here on removing Node manually, from Homebrew and NVM:
https://macpaw.com/how-to/uninstall-node-mac

Then a reinstall with nvm and everything works as it should.

@chad3814

chad3814 commented Jan 2, 2024

You can fix this issue when using Docker on Apple Silicon by disabling this feature in Docker Desktop [screenshot]

While that works, it makes building x86_64 containers very slow. There must be a difference somewhere in how it's checking the arch, and maybe that could be switched to how Node itself checks.

@Licodeao

Licodeao commented Feb 1, 2024

I had the same issue, but it was solved by the comments above!

@ltbittner

+1 I'm running into this issue building a docker image on an M1 Mac using --platform=linux/amd64, happens about 50% of the time with or without Rosetta enabled.

@PierrickI3

Wiping out my node_modules folder like previously suggested did the trick for me

@PierrickI3

For those deploying to AWS Fargate, it now supports arm64 architecture (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-arm64.html). You no longer need to change the architecture when deploying.
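If your whole pipeline can target arm64, building the image natively sidesteps Rosetta entirely - a minimal sketch (image name and registry are placeholders):

# build a native arm64 image on the Apple Silicon host - no emulation involved
docker buildx build --platform linux/arm64 -t myregistry/myapp:arm64 .

# or a multi-arch build if you still need amd64 for other environments
docker buildx build --platform linux/arm64,linux/amd64 -t myregistry/myapp:latest --push .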

@thisleenoble

Documenting this for my own benefit as much as anyone else as I know I'm going to end up finding myself here again in the near future. Some of this may be cargo-culty in as much as I don't know if all the steps are required to make it work but this is what I did on my last successful run at it.

I'm running an M1 Pro but trying to build a Remix JS / Prisma application docker image that's run on a linux server running Portainer. The build process seems particularly fragile. I'd successfully deployed at least three times but then this morning it stopped working again and I found myself back here. Piecing together from several answers above, I tried building from start to finish under different node architectures but both x64 and arm64 would fail at one stage or other for seemingly different reasons. This time it worked though and this is what I did:

rm -rf ./node_modules

Quit Terminal app and locate the app. Get info on it and change it to use Rosetta. Open Terminal.

Confirm architecture is x64 with node -e 'console.log(process.arch)'

The version of node the project is pinned at is 18.16 with the .nvmrc file. Looking inside ~/.nvm/.cache/bin I could see two versions of 18.16.0 installed. One for arm64 and one for x64. Deleted both. Also deleted ~/.nvm/versions/node/v18.16.0

nvm install v18.16.0

Now I have the -darwin-x64 version installed.

% nvm use
% npm install
% npm run build:css && remix build
% npx prisma generate

Then I removed the node version again, and the cached x64 version.

Switch back to arm64 by quitting Terminal, changing it back to native and relaunching (or using a different app's terminal [PHP Storm] that's already reporting arm64).

% nvm install v18.16.0
% nvm use
% npx prisma generate
% docker build -t projectTag .
% docker-compose up -d
% docker build -t projectname/staging --platform linux/amd64 .

Then ran the push command to get it up to EC2 for distribution. I won't detail that command as it's probably not pertinent.
Like I say, no idea if all this stuff is necessary. Most of the docker commands remain black boxes to me but it might help someone else, or future me.

@pashuka

pashuka commented May 7, 2024

Hi guys, I resolved this problem with re-installing arm64 node via nodenv! Thank you <3

# Change my terminal to arm64
$ arch -arm64e /bin/zsh  
$ uname -mp
arm64 arm

# Re-install my node on darwin-arm64 mode
$ nodenv uninstall 18.15.0
$ nodenv install 18.15.0
Downloading node-v18.15.0-darwin-arm64.tar.gz...
...

# Check node arch
$ node -e 'console.log(process.arch)'
arm64

# Re-install node_modules
$ rm -rf ./node_modules
$ npm i

# After that my Prisma commands succeeds 100%!

My nodenv did not correctly pick arm64 after the "change shell to arm64" step, so I patched the node-build bash script (/usr/local/bin/node-build) in the platform detection method:

#  arch="$(uname -m)"
  arch="arm64"

It works like a charm. Compilation speed increased by 70% or more.

@asontha

asontha commented May 14, 2024

If you're using nvm and migrated recently from x86 arch to M1/M2, you can also just do the following:

# Uninstall node version you were using
$ nvm uninstall <VERSION>

# Reinstall it
$ nvm install <VERSION>

# Check that it's now on arm64 
$ node -e 'console.log(process.arch)'
arm64

Fixed this issue for me. It should only work for Node versions 16 and higher. It's also very likely you're in this situation if you migrated your dev setup straight over from an x86 Mac beforehand.
