pnpm monorepo docker support #1637
Comments
If you mean, when preparing the servers to be dockerized, then no. Just running
No way to do it programmatically at the moment, but we can move this logic to a separate package.

Regarding the virtual-store/store, I had an idea that might be related. I'd like to move the tarball files out from the store. That would give us several benefits, some of which:
Another thing I was thinking about: pnpm could have a feature that would pack all the tarballs needed by a repo into a local, repo-specific registry mirror. That registry mirror could be committed with the repo. As a result, the git repo becomes completely independent.
We have successfully built a docker build and deployment infrastructure on top of our pnpm workspace projects.

Package Structure
Each of these folders is included in our
Docker Layers

- Pnpm - handles the vm requirements (node and pnpm)
- Install - handles the top-level pnpm install. We've kept the file content light:
- Lint - allows linting to be controlled separately from other build concerns to maximize caching
- Test - allows testing to be controlled separately from other build concerns to maximize caching (our legacy stack is tested on a separate image that isn't worth exploring here)
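A minimal sketch of what such decoupled layers could look like. The image names, versions, and scripts here are my assumptions for illustration, not the poster's actual files:

```dockerfile
# pnpm layer: pins the vm requirements (node and pnpm versions are assumptions)
FROM node:18-alpine AS pnpm
RUN npm install -g pnpm@8

# install layer: copy only lockfile/manifests so the layer cache
# survives source-only changes
FROM pnpm AS install
WORKDIR /app
COPY pnpm-lock.yaml pnpm-workspace.yaml package.json ./
RUN pnpm install --frozen-lockfile

# lint and test layers both inherit the installed deps, so each can
# be rebuilt and cached independently
FROM install AS lint
COPY . .
RUN pnpm run lint

FROM install AS test
COPY . .
RUN pnpm run test
```

The point of the split is that a change to test code invalidates only the test layer, not lint or install.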
With those layers decoupled, we can build our application (deployable) images without additional cruft:
Note that each decoupled layer inherits from the base
It is important to re-emphasize that the folder

As with all things docker, composition is your friend. I hope these examples are useful. We have a few additional layers between
@kaidjohnson Thanks for sharing! This is really useful information!
Hi @kaidjohnson, I might be missing something, but your production image still packs the whole node_modules folder, right?

Edit: ok, scratch that, it's written right there.
@RDeluxe While the install image does have all the package deps, we only end up copying the build outputs to the final application images, so those images do not include node_modules.
That's true. If we needed to run a nodejs container rather than nginx, we would probably not discard the pnpm store in the install layer and we probably would rerun a
Hello,

```dockerfile
# Control pnpm version dependency explicitly
# (note: "#" is not a comment mid-line in a Dockerfile, so the comment
# must sit on its own line rather than after the ENV value)
FROM node:12-alpine as pnpm
ENV PNPM_VERSION 5.4
RUN apk --no-cache add curl
RUN curl -sL https://unpkg.com/@pnpm/self-installer | node

FROM pnpm as install
ARG NPM_TOKEN
ARG PACKAGE_NAME=""
ENV NPM_CONFIG_LOGLEVEL error
WORKDIR /app
COPY pnpm-lock.yaml .
COPY pnpm-workspace.yaml .
COPY pnpmfile.js .
COPY *.json ./
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > ~/.npmrc
COPY libraries/libA/package.json libraries/libA/package.json
# COPY all libs ...
COPY packages/${PACKAGE_NAME}/package.json packages/${PACKAGE_NAME}/package.json

FROM install as builder
ARG PACKAGE_NAME=""
WORKDIR /app
# install dependencies for the selected package and its dependencies
# (direct and non-direct)
RUN pnpm install -r --reporter=append-only --ignore-scripts --filter @mymodule/${PACKAGE_NAME}...
COPY libraries/ libraries/
COPY packages/ packages/
RUN pnpm run build --filter @mymodule/${PACKAGE_NAME}...

FROM install as runner
ARG PACKAGE_NAME=""
ARG PORT=3000
ENV NPM_CONFIG_LOGLEVEL warn
ENV NODE_ENV production
ENV PORT ${PORT}
EXPOSE ${PORT}
WORKDIR /app
RUN pnpm install -rP --no-optional --frozen-lockfile --reporter=append-only --shamefully-hoist --filter @mymodule/${PACKAGE_NAME}... && \
    pnpm store prune && \
    rm -rf ~/.pnpm-store
COPY --from=builder /app/libraries/ ./libraries/
WORKDIR /app/packages/${PACKAGE_NAME}
COPY --from=builder /app/packages/${PACKAGE_NAME}/dist/ ./dist/
CMD ["node", "dist/main.js"]
```
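Given a Dockerfile like the one above, building an image for a single package might look like this (the package name and tag are placeholders of mine, not from the comment):

```shell
# build the production stage for one workspace package
docker build \
  --build-arg NPM_TOKEN="$NPM_TOKEN" \
  --build-arg PACKAGE_NAME=my-service \
  --target runner \
  -t my-service:latest .
```

`--target runner` stops at the production stage so the builder stage's full source tree never reaches the final image.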
Even if I put the
The only solution so far is to manually add all missing dependencies in the readPackage hook, but it's far from ideal! What do you think? How do I use the shamefully-hoist option? Thanks!!
I have the feeling that the flag `recursive-install = false`
`shamefully-hoist = true` and it's now working! (I have also changed the Dockerfile a little, if anyone is interested.)

EDIT: In fact, I still get the same issue when I include the pnpm-lock.yaml file inside the build.
I had the same problem with yarn before, so I created the yarn-isolate-workspace package, which took care of that exact problem. It works fine, except that the lock file is much harder to create from yarn. Other than that, it has some good practices for Dockerfiles, such as separating a workspace's src-less files so you can install all workspace dependencies first and then copy the src files.
@Madvinking the isolation solution is great!
I have released a new command in pnpm v7.4.0-4.

You can deploy a project from a workspace by selecting the project with a filter and specifying the output directory, for example:
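The command name and example invocation were lost from this comment; assuming it is pnpm's `deploy` command (my assumption; check the pnpm docs for the exact name and flags), usage might look like:

```shell
# deploy the "app1" project and its production deps into dist/app1
pnpm --filter=app1 deploy dist/app1
```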
Looks interesting. How do you imagine it being used? Does it pick the workspace projects used by the package passed via
I assumed you would build it before deployment, so devDependencies are not installed in the target directory. If this is not a correct assumption, I am fine changing it. |
Yeah, good point, maybe as an opt-in, would be handy? |
ok |
it would be really handy if

Sorry if that's a bit off-topic; I have added a new issue for it: #4980
Having an opt-in for devDependencies would indeed be pretty handy when using |
I'm starting work on Dockerizing my pnpm monorepo. Hopefully when I'm done we can have a recipe for how to deploy a monorepo containing multiple services sharing multiple packages as separate Docker images that build quickly after changes are made, and are lightweight.
Plan
Assumption: You don't care about publishing packages to npm during a deploy.
Goal: Only build the services whose dependencies have changed.
- … `git diff` …
- … `package.json` …
- … `pnpm ...pkg-with-changes` …
- … `babel.config.js`, every file will need rebuilding. Need to include the babel cache key in the decision to rebuild a package or not. The root package should use the `files` prop to define all files that are needed for a full build.
- … (`.docker-trees`), filtering out any packages not in this service's dependency tree.
- `docker build -f <path-to-dockerfile> .` from the filtered repo temp dir. (The Dockerfile could be shared.)

Perf (local dev and CI)
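The "only build the services whose dependencies have changed" goal above maps onto pnpm's changed-since filter; a sketch, where the git ref and the `build` script name are my assumptions:

```shell
# build every workspace package that changed since origin/master,
# plus everything that depends on one of those packages
pnpm --filter "...[origin/master]" run build
```

The `...` prefix selects dependents of the changed packages, so shared-library edits trigger rebuilds of the services that consume them.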
`docker build` accepts a tar as a context. (If you provide a path like `docker build .` …)

Perf (during local dev)

- `node_modules/.cache` … or should we force a rebuild just to be safe?
- Would `--use-store-server` speed up multiple Docker installs? Can a server be shared outside of a Docker container?
- … `shared-workspace-shrinkwrap`?

Perf - prevent all root package dependencies (`node_modules`) being included in every service build

peer deps
The root package contains peer deps. If there is a large peer dep required by only two packages, it will be in the root package, and included in every Docker image. This is not good.
We only want peer deps that are used by packages in our partial repo tree. So we could filter the root package.json - using a custom pnpmfile perhaps...
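One possible shape for that pnpmfile filtering idea, using the `readPackage` hook. The package names follow the knex example below; treat this as a sketch of the hook mechanics, not tested config:

```javascript
// pnpmfile.js: declare a driver as an explicit peer dep of knex, so
// per-package dependency filtering can reason about it instead of
// relying on a copy installed in the workspace root.
function readPackage(pkg) {
  if (pkg.name === 'knex') {
    pkg.peerDependencies = {
      ...(pkg.peerDependencies || {}),
      pg: '*', // assumption: pg is the one driver this repo actually uses
    };
  }
  return pkg; // all other packages pass through unchanged
}

module.exports = { hooks: { readPackage } };
```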
All peer deps should be explicitly defined (if they are not, we use the pnpmfile.js to declare peer deps). Sometimes we cheat though, and just install them in the root package, which makes them resolvable. E.g. `knex` has an optional peer dep of `mssql`, `pg`, etc. These should be added as peer deps to the knex package. Or installed in the root.

We can use `--strict-peer` to ensure that all peers are satisfied. If this is not the case, then include all deps from the root package.json.

tools
Sometimes there are tools in the root package like @babel/core.
Perf - stripping unnecessary files
We want the tests included during CI testing, but then when deploying we want to remove them. Use build containers. Should simulate the docker structure in the `.docker-trees` dir.

Perf - caching `node_modules` between builds

Ideal scenario:
- Pnpm runs a headless install.
- The virtual store (`nm/.registry.npmjs.org`) contains all packages needed in the repo; we just need to symlink sub-packages.

If a package is missing from the virtual store (`nm/.registry...`):

- all packages are already in the global pnpm store
- only the file copying from store to `node_modules` needs to run.

https://stackoverflow.com/a/40322275/130910
Here they say you have to use absolute paths with same structure on host and on container.
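Following that answer, mounting the host store at an identical absolute path inside the container might look like this (the paths and image name are my assumptions):

```shell
# mount the global store at the same absolute path on host and container,
# so paths recorded by pnpm resolve identically in both environments
docker run -v /home/ci/.pnpm-store:/home/ci/.pnpm-store \
  my-build-image pnpm install --frozen-lockfile
```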
Options for global store persistence:

- … `~/.pnpm-store/`.

Avoiding copying from global store to `node_modules` each install:

Virtual Store (`node_modules/.registry...`) caching:

- … `node_modules`.

Plan - npm publishing
b. If you want some packages published to npm and for them to be installed from npm when building your image.
I guess Lerna's approach works.
Pnpm apis to use
CI notes
`verify-store-integrity` for CI.