Error with Node:8-alpine docker image on AWS using an M5 instance type #813

Closed
raulgomis opened this issue Jul 11, 2018 · 26 comments

@raulgomis

Problem:
The node:8-alpine image does not appear to work on AWS M5 instances.

Command:
docker container run -it node:8-alpine /bin/sh -c 'npm -g'

Terminal logs:

Error: could not get uid/gid
[ 'nobody', 0 ]

    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:37:16
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Socket.stream.socket.on (internal/child_process.js:346:11)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at Pipe._handle.close [as _onclose] (net.js:557:12)
TypeError: Cannot read property 'get' of undefined
    at errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at /usr/local/lib/node_modules/npm/bin/npm-cli.js:83:20
    at cb (/usr/local/lib/node_modules/npm/lib/npm.js:224:22)
    at /usr/local/lib/node_modules/npm/lib/npm.js:262:24
    at /usr/local/lib/node_modules/npm/lib/config/core.js:81:7
    at Array.forEach (<anonymous>)
    at /usr/local/lib/node_modules/npm/lib/config/core.js:80:13
    at f (/usr/local/lib/node_modules/npm/node_modules/once/once.js:25:25)
    at afterExtras (/usr/local/lib/node_modules/npm/lib/config/core.js:178:20)
    at Conf.<anonymous> (/usr/local/lib/node_modules/npm/lib/config/core.js:236:22)
    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:39:14
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Socket.stream.socket.on (internal/child_process.js:346:11)
/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205
  if (npm.config.get('json')) {
                 ^

TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at emitOne (events.js:116:13)
    at process.emit (events.js:211:7)
    at process._fatalException (bootstrap_node.js:378:26)

Strace logs:

read(8, "\1\0\0\0\0\0\0\0", 1024)       = 8
stat("/usr/local/lib/node_modules/npm/node_modules/uid-number", {st_mode=S_IFDIR|0755, st_size=101, ...}) = 0
stat("/usr/local/lib/node_modules/npm/node_modules/uid-number/get-uid-gid.js", {st_mode=S_IFREG|0755, st_size=644, ...}) = 0
lstat("/usr/local/lib/node_modules/npm/node_modules/uid-number/get-uid-gid.js", {st_mode=S_IFREG|0755, st_size=644, ...}) = 0
socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, [12, 13]) = 0
socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, [14, 15]) = 0
socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0, [16, 17]) = 0
pipe2([18, 19], O_CLOEXEC)              = 0
rt_sigprocmask(SIG_SETMASK, ~[RTMIN RT_1 RT_2], [], 8) = 0
read(4, "*", 1)                         = 1
rt_sigaction(SIGCHLD, {sa_handler=0x557d5d900ad0, sa_mask=~[RTMIN RT_1 RT_2], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fda9929f722}, NULL, 8) = 0
write(5, "*", 1)                        = 1
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
rt_sigprocmask(SIG_BLOCK, ~[], [], 8)   = 0
fork()                                  = 21
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(19)                               = 0
read(18, "", 4)                         = 0
close(18)                               = 0
close(13)                               = 0
ioctl(12, FIONBIO, [1])                 = 0
close(15)                               = 0
ioctl(14, FIONBIO, [1])                 = 0
close(17)                               = 0
ioctl(16, FIONBIO, [1])                 = 0
epoll_ctl(3, EPOLL_CTL_ADD, 12, {EPOLLIN, {u32=12, u64=12}}) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 14, {EPOLLIN, {u32=14, u64=14}}) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 16, {EPOLLIN, {u32=16, u64=16}}) = 0
epoll_wait(3, [{EPOLLIN, {u32=16, u64=16}}], 1024, -1) = 1
read(16, "[ 'nobody', 0 ]\n", 65536)    = 16
epoll_wait(3, [{EPOLLIN|EPOLLHUP, {u32=12, u64=12}}], 1024, -1) = 1
read(12, "", 65536)                     = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21, si_uid=0, si_status=SIGSEGV, si_utime=4, si_stime=1} ---
read(4, "*", 1)                         = 1
write(7, "\250\210a^}U\0\0\21\0\0\0\0\0\0\0", 16) = 16
write(5, "*", 1)                        = 1
rt_sigreturn({mask=[]})                 = 0
epoll_ctl(3, EPOLL_CTL_DEL, 12, 0x7ffef7ad1bec) = 0
close(12)                               = 0
epoll_wait(3, [{EPOLLIN|EPOLLHUP, {u32=16, u64=16}}, {EPOLLIN|EPOLLHUP, {u32=14, u64=14}}, {EPOLLIN, {u32=6, u64=6}}], 1024, -1) = 3
read(16, "", 65536)                     = 0
epoll_ctl(3, EPOLL_CTL_DEL, 16, 0x7ffef7ad21ac) = 0
close(16)                               = 0
read(14, "", 65536)                     = 0
epoll_ctl(3, EPOLL_CTL_DEL, 14, 0x7ffef7ad21ac) = 0
close(14)                               = 0
read(6, "\250\210a^}U\0\0\21\0\0\0\0\0\0\0", 512) = 16
wait4(21, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGSEGV && WCOREDUMP(s)}], WNOHANG, NULL) = 21
rt_sigprocmask(SIG_SETMASK, ~[RTMIN RT_1 RT_2], [], 8) = 0
read(4, "*", 1)                         = 1
rt_sigaction(SIGCHLD, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fda9929f722}, NULL, 8) = 0
write(5, "*", 1)                        = 1
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
write(11, "Error: could not get uid/gid\n[ '"..., 509Error: could not get uid/gid
[ 'nobody', 0 ]

    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:37:16
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Socket.stream.socket.on (internal/child_process.js:346:11)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at Pipe._handle.close [as _onclose] (net.js:557:12)
) = 509
uname({sysname="Linux", nodename="dba91fdc3e5d", ...}) = 0
uname({sysname="Linux", nodename="dba91fdc3e5d", ...}) = 0
write(11, "TypeError: Cannot read property "..., 1054TypeError: Cannot read property 'get' of undefined
    at errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at /usr/local/lib/node_modules/npm/bin/npm-cli.js:83:20
    at cb (/usr/local/lib/node_modules/npm/lib/npm.js:224:22)
    at /usr/local/lib/node_modules/npm/lib/npm.js:262:24
    at /usr/local/lib/node_modules/npm/lib/config/core.js:81:7
    at Array.forEach (<anonymous>)
    at /usr/local/lib/node_modules/npm/lib/config/core.js:80:13
    at f (/usr/local/lib/node_modules/npm/node_modules/once/once.js:25:25)
    at afterExtras (/usr/local/lib/node_modules/npm/lib/config/core.js:178:20)
    at Conf.<anonymous> (/usr/local/lib/node_modules/npm/lib/config/core.js:236:22)
    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:39:14
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Socket.stream.socket.on (internal/child_process.js:346:11)
) = 1054
uname({sysname="Linux", nodename="dba91fdc3e5d", ...}) = 0
uname({sysname="Linux", nodename="dba91fdc3e5d", ...}) = 0
writev(2, [{iov_base="", iov_len=0}, {iov_base="/usr/local/lib/node_modules/npm/"..., iov_len=114}], 2/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205
  if (npm.config.get('json')) {
                 ^
) = 114
writev(2, [{iov_base="\n", iov_len=1}, {iov_base="TypeError: Cannot read property "..., iov_len=276}], 2
TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at emitOne (events.js:116:13)
    at process.emit (events.js:211:7)
    at process._fatalException (bootstrap_node.js:378:26)) = 277
writev(2, [{iov_base="\n", iov_len=1}, {iov_base=NULL, iov_len=0}], 2
) = 1
futex(0x7fda964e59e4, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x557d5e6185a4, FUTEX_WAKE_PRIVATE, 1) = 1
munmap(0x7fda964e6000, 8400896)         = 0
munmap(0x7fda95ce3000, 8400896)         = 0
munmap(0x7fda954e0000, 8400896)         = 0
munmap(0x7fda94cdd000, 8400896)         = 0
exit_group(7)                           = ?
+++ exited with 7 +++

@chorrell
Contributor

Can you provide the output of docker version?

@raulgomis
Author

docker version

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.4
 Git commit:   3dfb8343b139d6342acfd9975d7f1068b5b1c3d3
 Built:        Thu May 24 22:21:27 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   7390fc6/18.03.1-ce
  Built:        Thu May 24 22:22:43 2018
  OS/Arch:      linux/amd64
  Experimental: false

@LaurentGoderre
Member

Is it just me, or does this seem like an error specific to npm?

@raulgomis
Author

raulgomis commented Jul 24, 2018

I think there is an incompatibility between Node / Alpine (musl) and the new hypervisor on AWS M5s:

docker run -it node:8-alpine sh -c "node -e 'process.setgid(0)'"

returns a "Segmentation fault (core dumped)" in M5s.

So, in summary, these are all the tests run on an AWS M5 instance:

Test 1: setgid node

docker run -it node:8-alpine sh -c "node -e 'process.setgid(0)'" -> "Segmentation fault"
docker run -it node:10-alpine sh -c "node -e 'process.setgid(0)'" -> "Segmentation fault" (node:10-alpine does not print the message, but the segmentation fault still occurs)

Test 2: setgid npm

Both docker container run -it node:8-alpine /bin/sh -c 'npm -g' and docker container run -it node:10-alpine /bin/sh -c 'npm -g' fail with:

Error: could not get uid/gid
[ 'nobody', 0 ]

    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:37:16
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Socket.stream.socket.on (internal/child_process.js:346:11)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at Pipe._handle.close [as _onclose] (net.js:557:12)
TypeError: Cannot read property 'get' of undefined
    at errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at /usr/local/lib/node_modules/npm/bin/npm-cli.js:83:20
    at cb (/usr/local/lib/node_modules/npm/lib/npm.js:224:22)
    at /usr/local/lib/node_modules/npm/lib/npm.js:262:24
    at /usr/local/lib/node_modules/npm/lib/config/core.js:81:7
    at Array.forEach (<anonymous>)
    at /usr/local/lib/node_modules/npm/lib/config/core.js:80:13
    at f (/usr/local/lib/node_modules/npm/node_modules/once/once.js:25:25)
    at afterExtras (/usr/local/lib/node_modules/npm/lib/config/core.js:178:20)
    at Conf.<anonymous> (/usr/local/lib/node_modules/npm/lib/config/core.js:236:22)
    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:39:14
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Socket.stream.socket.on (internal/child_process.js:346:11)
/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205
  if (npm.config.get('json')) {
                 ^

TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at emitOne (events.js:116:13)
    at process.emit (events.js:211:7)
    at process._fatalException (bootstrap_node.js:378:26)

Test 3: setgid in C

This works:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("Hello! This is a test program.\n");

    /* calling setgid(0) directly against musl libc does not crash here,
       unlike process.setgid(0) in Node on the affected instances */
    int res = setgid(0);
    fprintf(stderr, "%d\n", res);
    return 0;
}
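
For completeness, a sketch of one way to compile and run this check inside the Alpine image (assumptions: the snippet above is saved as setgid-test.c in the current directory, and adding build-base to a throwaway container is acceptable for the test):

# compile and run the C test inside node:8-alpine, mounting the current directory
docker run --rm -it -v "$PWD":/src -w /src node:8-alpine sh -c \
  'apk add --no-cache build-base && gcc setgid-test.c -o setgid-test && ./setgid-test'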

@raulgomis
Author

raulgomis commented Jul 24, 2018

Root cause seems to be: https://wiki.musl-libc.org/functional-differences-from-glibc.html#Thread_stack_size

There are three possible solutions:

  1. Talk to the Alpine team to fix it. There have already been some discussions: https://github.com/voidlinux/void-packages/issues/4147
  2. Fix it in the Node Docker Alpine image, as done here: https://github.com/jubel-han/dockerfiles/blob/master/node/Dockerfile
  3. Set npm_config_unsafe_perm=true by default in the Docker image as a workaround until it's fixed (see the sketch below).
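
A rough sketch of workaround 3, assuming npm picks up npm_config_*-style environment variables (untested here; the same value could instead be baked into the image with an ENV line):

# unsafe-perm reportedly avoids the uid/gid lookup that crashes on the affected hosts
docker container run -it -e npm_config_unsafe_perm=true node:8-alpine /bin/sh -c 'npm -g'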

@LaurentGoderre
Member

Can you try Node 10? I've read that this was fixed in Alpine 3.7.

@raulgomis
Author

It still fails in node:10-alpine (Alpine 3.8).

Details:

[root@ip-10-0-0-207 ec2-user]# sudo docker run -it --entrypoint sh node:10-alpine
/ # node
> process.setuid(0)
Segmentation fault

@chorrell
Contributor

Alternatively, you should switch to the slim (Debian) variant until this gets fixed upstream by the Alpine team.
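
For instance, re-running the reproducer from the issue description against the Debian-based variant (a hypothetical quick check) should not hit the 'could not get uid/gid' error:

docker container run -it node:8-slim /bin/sh -c 'npm -g'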

@carlosmmelo

I followed the lib/stack-fix.so approach you provided in the Dockerfile link above.

I also tried the Debian non-root version (Jessie).

I'm still getting this issue after upgrading my instance.

Does anyone have another workaround? This has been very annoying.

@dysinger

dysinger commented Oct 4, 2018

Issue #3 is still prematurely closed, with lots of "me too" comments.

@tianon
Contributor

tianon commented Oct 17, 2018

Looks like this is the root cause of the Alpine half of docker-library/ghost#157 -- interestingly, it's only failing on certain architectures for us (and we've narrowed the reproducer down to simply running node /usr/local/lib/node_modules/npm/node_modules/uid-number/get-uid-gid.js, which should segfault on affected platforms).
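
That is, roughly the following (a sketch of the reproducer mentioned above; expected to segfault only on affected hosts):

docker run --rm node:8-alpine node /usr/local/lib/node_modules/npm/node_modules/uid-number/get-uid-gid.js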

@franklouwers

It's not just "platforms", it's more "environments".

Eg on "64bit Intel/amd", I have three systems I tried this on:

  • Centos 7.6 (most recent kernel), amd64 (Intel Xeon Gold), on Digital Ocean: segfault
  • Centos 7.6 (most recent kernel), amd64, unknown CPU on Amazon AWS: no segfault
  • MacOS, using the LinuxKit linux emulation, amd64 on my laptop: no segfault

The strange thing is: on that DO node where I can trigger the segfault, I can rebuild the image, based on https://raw.githubusercontent.com/nodejs/docker-node/86b9618674b01fc5549f83696a90d5bc21f38af0/8/alpine/Dockerfile (which should be the latest node:8-alpine Dockerfile), and then it doesn't crash !? So it seems to be an incompatibility when building the image...

As this buggers me quite a lot (I am an instructor and one of the courses I teach happen to have a lot of node-alpine docker examples in them), I am more than happy to test things out, help debug etc. Unfortunately, my musl/glibc C powers are not strong enough to suggest an actual solution...

Anything I can do / help / ...?

@franklouwers

BTW: I also suggest that we change the title of this issue, as it is not AWS M5-only.

@pieterlukasse

@raulgomis I have the same issue! When running this:

docker run -it --entrypoint sh node:10-alpine
/ # node
> process.setuid(0)

I get the following on Ubuntu:

undefined

but get segmentation fault on Amazon Linux:

Segmentation fault

@trevorlinton

FWIW, we're running on r5.xlarge and r5.large, also tried a c5d.large, and are experiencing the same issue on all of them. We finally switched to m4.large and the problem stopped. I should say this was with the EXACT same AMI from AWS (amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (ami-01e24be29428c15b2)).

Somehow the kernel must be picking up something from the host hypervisor, and a kernel setting may be contributing to why this only occurs on certain systems.

@trevorlinton

Also tried c5.xlarge and m5.xlarge, and both also didn't work, but m5a.xlarge did.

@MateusPD

MateusPD commented Oct 2, 2019

I am also having this issue on 10.16.0-alpine. This error started yesterday for me.

@gja

gja commented Oct 15, 2019

We've also been facing this with node:10.15-alpine, building on quay.io, since this morning.

Worked around this by adding npm config set unsafe-perm true to our build script.
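
A minimal sketch of that workaround, assuming the build runs as root and a global install follows (yarn here is only an example package):

# in the build script, before any global npm installs
npm config set unsafe-perm true
npm install -g yarn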

@amauryaE3156

I am also running into this issue since this morning. I have the following versions:
Node: v8.17.0
npm: 6.13.4
Docker: Docker version 19.03.5, build 633a0ea838
Docker Compose: docker-compose version 1.13.0, build 1719ceb
Python: Python 2.7.17

issue:
Step 13/18 : RUN npm install -g yarn
---> Running in 62643c8ed0ee
Error: could not get uid/gid
[ 'nobody', 0 ]

    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:37:16
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)

TypeError: Cannot read property 'get' of undefined
    at errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at /usr/local/lib/node_modules/npm/bin/npm-cli.js:83:20
    at cb (/usr/local/lib/node_modules/npm/lib/npm.js:224:22)
    at /usr/local/lib/node_modules/npm/lib/npm.js:262:24
    at /usr/local/lib/node_modules/npm/lib/config/core.js:81:7
    at Array.forEach (<anonymous>)
    at /usr/local/lib/node_modules/npm/lib/config/core.js:80:13
    at f (/usr/local/lib/node_modules/npm/node_modules/once/once.js:25:25)
    at afterExtras (/usr/local/lib/node_modules/npm/lib/config/core.js:178:20)
    at Conf.<anonymous> (/usr/local/lib/node_modules/npm/lib/config/core.js:236:22)
    at /usr/local/lib/node_modules/npm/node_modules/uid-number/uid-number.js:39:14
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205
  if (npm.config.get('json')) {
                 ^

TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (/usr/local/lib/node_modules/npm/lib/utils/error-handler.js:205:18)
    at emitOne (events.js:116:13)
    at process.emit (events.js:211:7)
    at process._fatalException (bootstrap_node.js:378:26)
ERROR: Service 'block-explorer' failed to build: The command '/bin/sh -c npm install -g yarn' returned a non-zero code: 7

@pieterlukasse

Maybe this helps: https://github.com/opentargets/webapp/blob/34f66d7383e6e4459280826133269d440fc3d5f4/Dockerfile

It seems to be working with node:8-alpine and yarn install steps.

@amauryaE3156

Thank you Pieter. I am now getting the issue below; please give me a pointer to solve it.

Error:
yarn install v1.12.3
info No lockfile found.
[1/5] Validating package.json...
[2/5] Resolving packages...
warning fabric-client > hoek@4.2.1: This version has been deprecated in accordance with the hapi support policy (hapi.im/support). Please upgrade to the latest version to get the best features, bug fixes, and security patches. If you are unable to upgrade at this time, paid support is available for older versions (hapi.im/commercial).
warning eslint > file-entry-cache > flat-cache > circular-json@0.3.3: CircularJSON is in maintenance only, flatted is its successor.
[3/5] Fetching packages...
[4/5] Linking dependencies...
warning "eslint-plugin-jsx-a11y > axobject-query@2.1.1" has incorrect peer dependency "eslint@^5 || ^6".
warning "eslint-plugin-react > eslint-plugin-eslint-plugin@2.1.0" has incorrect peer dependency "eslint@>=5.0.0".
[5/5] Building fresh packages...
success Saved lockfile.
Done in 33.81s.
Removing intermediate container 15509799e850
---> 6fc9a86e87ed
Step 23/42 : COPY . /var/www/
---> 00dac800a3d0
Step 24/42 : RUN yarn run full-install
---> Running in 31afa823f018
yarn run v1.12.3
error Command "full-install" not found.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: Service 'block-explorer' failed to build: The command '/bin/sh -c yarn run full-install' returned a non-zero code: 1

@pieterlukasse

@amauryaE3156: maybe check the project I linked. Try to build that one and check what "full-install" means there (maybe it is a custom script defined in that project?).

fadeltd added a commit to fadeltd/lint-review that referenced this issue Jan 1, 2020
pditommaso added a commit to seqeralabs/nf-tower that referenced this issue Feb 17, 2020
@x-yuri

x-yuri commented Feb 24, 2020

Fixed in nodejs@12.4.0?

$ cat /etc/issue
Debian GNU/Linux 9 \n \l

$ uname -a
Linux jj1 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux

$ docker --version
Docker version 19.03.5, build 633a0ea838

$ docker run --rm -it node:12.3.1-alpine sh -c 'cat /etc/issue && node -e "process.setgid(0)" && echo done'
Welcome to Alpine Linux 3.9
Kernel \r on an \m (\l)
Segmentation fault (core dumped)

$ docker run --rm -it node:12.4.0-alpine sh -c 'cat /etc/issue && node -e "process.setgid(0)" && echo done'
Welcome to Alpine Linux 3.9
Kernel \r on an \m (\l)
done

$ docker run --rm -it node:13.0.0-alpine sh -c 'cat /etc/issue && node -e "process.setgid(0)" && echo done'
Welcome to Alpine Linux 3.10
Kernel \r on an \m (\l)
done

@nschonni
Member

nschonni commented Apr 3, 2020

Closing since Node 8 support ended 4 months ago

@yosifkit
Contributor

This is still a problem on node 10.19 and 10.20 (see the last two issues linked here, waveformhealth/virtualvisit-web#1 and docker-library/official-images#7962). One uses the node:10.20 image and the other installs the node package from Alpine. The last node:11-alpine (v11.15.0) image also fails, but it does not happen on node:12.16-alpine (v12.16.3). It would be nice to see a fix applied for node:10-alpine, but since Alpine isn't really supported by Node itself, I can see that it may be a low priority.

In case someone wants to go digging to figure out why it works in newer versions (and if that could be applied to 10), here is what I found out:

  • Easiest reproducer is: docker run -it --rm node:version-alpine node -e 'process.setgid(0)' and check the exit code (139)
    • to see Segmentation fault (core dumped), start an interactive sh and then run node -e 'process.setgid(0)'
      $ docker run -it --rm node:10-alpine sh
      / # node --version
      v10.20.1
      / # node -e 'process.setgid(0)'
      Segmentation fault (core dumped)
  • fails on arm64v8 host from packet c1.large.arm.xda
    • users above also report failures on r5.xlarge, r5.large, c5d.large, c5.xlarge
    • but these work: m5a.xlarge m4.large
  • Changing stack size by adding the below to the Dockerfile didn't help (verified with objdump -p main):
     && export LDFLAGS="-Wl,-z,stack-size=2097152" 
     && export CXXFLAGS="-Wl,-z,stack-size=2097152"
  • using muslstack also didn't work
  • this Dockerfile works fine after swapping the FROM to node:10-alpine
    • I am uncertain what makes this LD_PRELOAD C file work different than the Alpine provided way to change the stack size.

@x-yuri

x-yuri commented May 12, 2020

@yosifkit It was somehow resolved with node@12.4.0. You could try to inspect the changes.

ebox86 added a commit to ebox86/cdk-ecs-fargate-ghost that referenced this issue May 19, 2020
mpepping pushed a commit to mpepping/docker-cyberchef that referenced this issue Feb 1, 2021
wizonesolutions added a commit to LBNL-ETA/BEDES-Manager that referenced this issue Apr 2, 2021