
WIP: Demonstrating Native Addons #38

Closed
wants to merge 3 commits into from

Conversation

CMCDragonkai
Member

@CMCDragonkai CMCDragonkai commented May 4, 2022

Description

In helping solve the snapshot isolation problem in MatrixAI/js-db#18, we needed to lift the hood and go into the C++ level of nodejs.

To do this, I need to have a demonstration of how native addons can be done in our demo lib here.

There are 2 ecosystems for building native addons:

  • prebuild
  • node-pre-gyp

Of the 2, the prebuild ecosystem is used by UTP and leveldb, so we will continue using that. Advantages (as of 2016) were discussed here: prebuild/prebuild#159

The basic idea is that Node supports a "NAPI" system that enables node applications to call into C++. So it's the FFI system of NodeJS. It's also a bidirectional FFI, as C++ code can call back into NodeJS JS functions.

The core library is node-gyp. In the prebuild ecosystem it is wrapped with node-gyp-build, which you'll notice is the one that we are already using in this repo. The main feature here is the ability to supply prebuilt binaries instead of expecting the end-user to always compile from source.

Further details here: https://nodejs.github.io/node-addon-examples/build-tools/prebuild (it also compares it to node-pre-gyp).

The node-gyp-build package has to be a dependency, not a devDependency, because it is used at runtime to automatically find the built shared-object/dynamic library and load it.

It looks like this:

import nodeGypBuild from 'node-gyp-build';
const bindings = nodeGypBuild('./path/to/dir/containing/gyp/file');
bindings.someNativeFunction();

Internally nodeGypBuild ends up calling the require() function inside NodeJS, which supports loading *.node binaries (the shared-objects compiled against the NAPI headers). See: https://github.com/prebuild/node-gyp-build/blob/2e982977240368f8baed3975a0f3b048999af40e/index.js#L6

The require is supplied by the NodeJS runtime. If you execute the JS with a different runtime, it may support the commonjs standard, and thus understand the require calls, but it may not be compatible with native modules that are compiled with NAPI headers. This is relevant since you also have to load the binary that matches your OS libraries and CPU architecture. It's all dynamic linking under the hood. This is also why you use node-gyp-build, which automates some of this lookup procedure.
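
To make this concrete, here is a minimal TypeScript sketch of the kind of platform/arch lookup that node-gyp-build automates (purely illustrative, not its actual implementation; the prebuilds/<platform>-<arch>/node.napi.node layout and the somename.node fallback are assumptions based on what we observe later in this PR):

import * as path from 'path';

// Illustrative only: resolve a prebuilt binary for the current platform/arch,
// falling back to a locally compiled build output.
function loadNativeAddon(dir: string): any {
  const prebuilt = path.join(
    dir,
    'prebuilds',
    `${process.platform}-${process.arch}`,
    'node.napi.node',
  );
  try {
    // require() can load *.node shared objects directly
    return require(prebuilt);
  } catch (e) {
    return require(path.join(dir, 'build', 'Release', 'somename.node'));
  }
}

const bindings = loadNativeAddon(__dirname);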

As a side-note about bundlers. Bundlers are often used as part of the build process that targets web platforms. Since the web platform does not understand require calls, bundlers will perform some sort of transclusion. This is also the case when ES6 import targets files on disk. Details on this process are here: https://github.com/evanw/esbuild/blob/master/docs/architecture.md#notes-about-linking. Bundlers will often call this "linking", and when targeting web platforms, this is basically a form of static linking, since JS running in browsers cannot load JS files from disk. This is also why in some cases one should replace native addons with WASM, as bundlers can support static linking of WASM (which is cross-platform) into a web bundle. But some native addons depend on OS features (like databases with persistence), and fundamentally cannot be converted into WASM binaries. In the future, it would make sense to turn our crypto code into WASM binaries. But DB code is likely to always be native, as it has to be persistent. As the web develops and gains extra features, it may eventually be possible for all native code to be done via WASM (but this may be a few years off).

Now the native module itself is just done with a C++ file like index.cpp. We should prefer using .cpp and .h as the most portable extensions.

Additionally, there must be a binding.gyp file that looks like this:

{
  "targets": [{
    "target_name": "somename",
    "include_dirs": [
      "<!(node -e \"require('napi-macros')\")"
    ],
    "sources": [ "./index.cpp" ]
  }]
}

This is basically another configuration file that tells node-gyp how to compile the C++ code. The target_name specifies the name of the addon, so the output will be somename.node. The sources are self-explanatory. The include_dirs entries can execute shell commands; in this case node -e is used to execute a script that returns a path to C++ headers to be included during compilation.
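
The shell expansion works because gyp's <!(...) syntax captures the command's stdout and splices it into the configuration. Requiring napi-macros presumably just prints the directory containing its headers; a hypothetical entry point doing this (not the actual napi-macros source) could be as simple as:

// Hypothetical index.js for a header-only package: printing the directory
// lets binding.gyp capture it via "<!(node -e ...)".
console.log(__dirname);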

The C++ code needs to use the NAPI headers, however there's a macro library that makes writing NAPI addons easier: https://github.com/hyperdivision/napi-macros. I've seen this used in utp-native and classic-level.

The C++ code may look like this:

#include <node_api.h>
#include <napi-macros.h>

NAPI_METHOD(times_two) {
  NAPI_ARGV(1)
  NAPI_ARGV_INT32(number, 0)

  number *= 2;

  NAPI_RETURN_INT32(number)
}

NAPI_INIT() {
  NAPI_EXPORT_FUNCTION(times_two)
}

This ends up exporting a native module containing the times_two function that multiplies a number by 2, and returns an int32 number.
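
From the TypeScript side, consuming this would then look something like the following sketch (assuming the binding.gyp above, so the compiled output is somename.node):

import nodeGypBuild from 'node-gyp-build';

// node-gyp-build locates the prebuilt or locally compiled addon in this directory
const native = nodeGypBuild(__dirname);

console.log(native.times_two(21)); // prints 42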

It's also important that node-gyp-build is set up as an install script in the package.json:

  "scripts": {
    "install": "node-gyp-build"
  }

This means when you run npm install (which is used to install all the dependencies for an NPM package, or to install a specific NPM package), it will run node-gyp-build during the installation process.

This means that our utils.nix node2nixDev expression currently still requires the npm install command. This used to exist, however I removed it during #37 thinking it had no effect. But it was confirmed by svanderburg/node2nix#293 (comment) that the npm install command is still run in order to execute build scripts, and node-gyp-build is now part of the installation process. We should include: https://github.com/svanderburg/node2nix/blob/8264147f506dd2964f7ae615dea65bd13c73c0d0/nix/node-env.nix#L380-L387 with all the necessary flags and parameters too. We may be able to make it work if we hook our build command prior to npm install. I imagine that this should be possible since the npm rebuild command is executed prior. So we need to investigate this.

In order to make this all work, our Nix environment is going to need all the tools for source compilation. Now according to https://github.com/nodejs/node-gyp#on-unix we will need python3, make and gcc. Our shell.nix naturally has make and gcc because we are using pkgs.mkShell which must extend from stdenv.mkDerivation. However python3 will be needed as well.

node2nix has some understanding of native dependencies (this is why it also brings in python in its generated derivation, svanderburg/node2nix#281), and I believe it doesn't actually build from source (except for some overridden dependencies).

Some npm dependencies are brought in via nixpkgs nodePackages because the node2nix derivation isn't enough to build them (they have complex native dependencies), such as node-gyp-build itself or vercel's pkg. This is also why I had to provide nodePackages.node-gyp-build in our buildInputs overrides in utils.nix. It is important that any dependencies acquired via nixpkgs are the same version we use in our package.json. This is the case for:

    "node-gyp-build": "4.4.0"
    "pkg": "5.6.0",

Ideally we won't need to do this for our own native packages if js-db ends up forking classic-level or leveldown. I think this trick is only relevant for our "build tools" and not our runtime dependencies.

The remaining problem is cross-compilation, as this only enables building from source if you are on NixOS and/or using Nix. Windows and MacOS will require their own setup. Since our development environment is all Nix focused, we don't have to worry about those, but end-users who want to rebuild from scratch will need to set up their development environment based on the information in https://github.com/nodejs/node-gyp. A more pressing question is how we, in our Nix development environment, will be capable of building cross-platform native addons for distribution.

This is where the prebuild ecosystem comes in and in particular https://github.com/prebuild/prebuildify-cross. This is used in leveldb to enable them to build for different platforms, and then save these cross-compiled objects. These objects are then hosted on GitHub releases, and automatically downloaded upon installation for downstream users. In the case they are not downloadable, they are then built from source. https://github.com/Level/classic-level/blob/f4cabe9e6532a876f6b6c2412a94e8c10dc5641a/package.json#L21-L26

However in our Nix based environment, I wonder if we can avoid using docker to do cross compilation, and instead use Nix to provide all the tooling to do cross-compilation. We'll see how this plays out eventually.

Some additional convenience commands now:

# install the current package and install all its dependencies, and build them ALL from source
npm install --build-from-source
# install a specific dependency and build it from source
npm install classic-level --build-from-source
# runs npm build on current package and all dependencies, and also recompiles all C++ addons
npm rebuild
# runs npm build on current package and all dependencies, and specifically recompiles the sqlite3 package which has a C++ addon
npm rebuild --build-from-source=sqlite3

Issues Fixed

Tasks

  • 1. Integrate node-gyp-build
  • 2. Create a native module exporting demo functions like addOne for primitives, setProperty for a reference-passing procedure, and makeArray for heap allocation
  • 3. Fix the nix expressions to support node-gyp-build and other build scripts, and see if we can eliminate our postInstall hook, by relying on package.json hooks instead
  • 4. Integrate prebuildify to precompile binaries and host them on our git release... but this depends on whether typescript-demo-lib is used as a library or as an application: if used as an application, then the pkg build is used; if used as a library, then one must install the native binary from the same github release, which means the native binary must be part of the same release page.
    • The pkg integration may just be a matter of setting the assets path in package.json to the local prebuilds directory.
    • See the scripts that other projects used: #38 (comment)
    • Ensure that compiled native addons in nix have their rpath removed, because nodejs addons shouldn't have an rpath set, and this enables them to be portable
  • [ ] 5. Cross compilation, prebuildify-cross or something else that uses Nix - we must use CI/CD to do cross compilation (not sure about other architectures like ARM)
  • 6. Update the @typescript-eslint packages to match js-db to avoid the warning message.
  • 7. Add typescript typings to a native module
  • [ ] 8. Update README.md to indicate the 2 branches of typescript-demo-lib, the main and the native branch, where the native branch indicates how to build native addons - this will be done in a separate repo: https://github.com/MatrixAI/TypeScript-Demo-Lib-Native based off https://gitlab.com/MatrixAI/Employees/matrix-team/-/issues/8#note_885403611
  • 9. Migrate changes to https://github.com/MatrixAI/TypeScript-Demo-Lib-Native and request mac access for the repository on gitlab. This branch will just be for development first. The changes here are too significant to keep within the same template repository.
  • 10. The pkg bundle can be optimised on which prebuild architectures it bundles; right now it bundles all architectures, even though the target architecture implies only a single architecture is required. This can slim the final output pkg so it's not storing unnecessary things. This may mean that pkg requires a dynamic --config to be generated.
  • 11. See if nix-build ./release.nix -A application can use the prebuilds/ directory as well, as this can unify with pkg. That way everything uses the prebuilds/ directory. But we would want to optimise it with task 10.
  • [ ] 12. Ensure that npm test can automatically run general tests, and platform-specific tests if detected on the relevant platform - this can be done in polykey as a script
  • 13. Automatic npm publish for prerelease and release based on staging and master branches, add these to the CI/CD jobs
  • 14. Ensure that integration CI/CD jobs are passing by running the final executable with all the bundled prebuilt binaries

Future Tasks

Final checklist

  • Domain specific tests
  • Full tests
  • Updated inline-comment documentation
  • Lint fixed
  • Squash and rebased
  • Sanity check the final build


@CMCDragonkai CMCDragonkai self-assigned this May 4, 2022
@CMCDragonkai
Member Author

In the Node-API docs:

Node-API is a C API that ensures ABI stability across Node.js versions and different compiler levels. A C++ API can be easier to use. To support using C++, the project maintains a C++ wrapper module called node-addon-api. This wrapper provides an inlineable C++ API. Binaries built with node-addon-api will depend on the symbols for the Node-API C-based functions exported by Node.js. node-addon-api is a more efficient way to write code that calls Node-API. Take, for example, the following node-addon-api code. The first section shows the node-addon-api code and the second section shows what actually gets used in the addon.

So NAPI is actually a C-based API. Whereas node-addon-api is what enables you to write C++ instead.

That means leveldown and classic-level are just using C, and not C++. I'm guessing that if we wanted to use C++, we would actually have to also bring in node-addon-api.

I wonder then why the file used in classic-level is called binding.cc, and not just binding.c. Perhaps the reason is that although it is using NAPI's C API, the underlying code is still C++, and you only need node-addon-api if you want utilities to help with writing C++ native addons. So maybe you don't really need it.

@CMCDragonkai
Member Author

Using vscode, there are some nice features that can be enabled with .vscode/c_cpp_properties.json. However it's not very automatic; I had to manually add the nodejs headers path there.

{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**",
                "/nix/store/zl4bvsqfxyx5vn9bbhnrmbmpfvzqj4gd-nodejs-16.14.2/include/**"
            ],
            "defines": [],
            "compilerPath": "/nix/store/58pwclg9yr437h0pfgrnbd0jis8fqasd-gcc-wrapper-11.2.0/bin/gcc",
            "cStandard": "gnu17",
            "cppStandard": "gnu++17",
            "intelliSenseMode": "linux-gcc-x64"
        }
    ],
    "version": 4
}

Until arrterian/nix-env-selector#68 is figured out, this is a project-by-project thing.

@CMCDragonkai
Member Author

CMCDragonkai commented May 5, 2022

Some additional findings:

  1. The npm upstream package depends on node-gyp package.
  2. The node-gyp package ends up being installed as part of the Node nix store path, because nodejs brings in npm, and npm brings in node-gyp.
  3. The node-gyp binary is found by npm run scripts because it exists in /nix/store/...-nodejs-16.14.2/lib/node_modules/npm/bin/node-gyp-bin/node-gyp.
  4. However the node-gyp executable itself is not found when in the nix-shell; this is because ${nodejs}/bin does not actually contain node-gyp.
  5. This is why node-gyp-build works inside a run script, but not inside our nix-shell.
  6. The node-gyp is overridden in nixpkgs and its expression is:
    node-gyp = super.node-gyp.override {
      nativeBuildInputs = [ pkgs.makeWrapper ];
      # Teach node-gyp to use nodejs headers locally rather that download them for>
      # This is important when build nodejs packages in sandbox.
      postInstall = ''
        wrapProgram "$out/bin/node-gyp" \
          --set npm_config_nodedir ${nodejs}
      '';
    };
  7. This means that node-gyp, if brought in via nixpkgs, already has nodedir set correctly to where the include headers are.
  8. We will bring in node-gyp as a dev dependency, and this will make node-gyp-build actually run the node-gyp installed in our package.
  9. However we will not prioritise the nixpkgs version in our $PATH for shell.nix as it is not relevant.
  10. There's nothing special about node-gyp-build as a package. The only reason to bring it in from nixpkgs is to ensure that we are using the same version. However this can be enforced just by setting the version in the package.json to match what Nix provides, rather than overriding the dependency. But the same could be said for pkg. The only reason we are using utils.pkg and pkgs.node-gyp-build is that these are used inside the nix expressions to build things, and we just wanted to ensure we had the same version in our nix-shell. Note that pkg has overrides applied, which we originally thought could be fixed, but that didn't work last time.
  11. Therefore, instead of prioritising it in the $PATH, we can just use npm install --save-dev --save-exact and npm install --save --save-exact commands to propagate our nix dependency versions into the package.json. As a bonus, npm install also ends up installing all the other node modules.

The relevant lifecycle script is install. When npm install runs, it also runs the install script. By default if a binding.gyp file exists in the project root, it will run node-gyp rebuild. However since we have overridden it with node-gyp-build, that means it decides whether to run node-gyp rebuild or to download precompiled binaries. You can skip the check for precompiled binaries by doing node-gyp-build --build-from-source, or npm install --build-from-source.

A side note about npm rebuild: npm rebuild has nothing to do with npm run rebuild. The rebuild script is just a classic-level addition. The npm rebuild command actually rebuilds all dependencies from source. I believe this command is intended to be used when the node version itself has been updated and you want to rebuild ALL native addons. However most of the time you don't need to run this, as npm install --build-from-source should suffice.

A problem right now is that upon running our 2 commands npm install --save-dev --save ... and npm install --save --save-exact ..., the lifecycle install script no longer runs. Only npm install itself runs the script. And without the lifecycle script running, node-gyp-build doesn't run, and thus node-gyp rebuild doesn't run, which means the C++ code doesn't get built.

So the result is to add 3 commands: the first 2 use --package-lock-only, and the third does the final npm install.

@CMCDragonkai
Member Author

Therefore if we only want to recompile our own C++ addon, then we must run node-gyp rebuild directly. It will force a recompile of our own native addon without doing anything else, even if it has already been built.

Now this leaves a build/ directory in our project root. I'm not sure if this is configurable, but it does seem the build/ directory is reserved for this usage. This directory may need to exist so that we can actually run tests and run our distribution.

I may then link up this rebuilding call to the npm run build as well.


@CMCDragonkai
Member Author

Because the build/ directory is auto-generated by node-gyp configure, it can be ignored. I notice this being done in classic-level too.

Now node-gyp-build is supposed to be used with prebuildify, so that would be the next thing I investigate. But first I need to test whether my built native addon works for testing and bin execution.

@CMCDragonkai
Member Author

The tests work, and running the executable works too. Now to see how prebuildify works, and whether npm install works for this as a library.

@CMCDragonkai
Member Author

The changes here are likely to be quite significant, so it's going to be worthwhile to keep this as a separate top-level branch called native, and just cherry-pick common changes between the 2.

@CMCDragonkai
Member Author

In order to ensure that our CI/CD during testing is using the same node-gyp structure, we have to change our nix-shell usage in the CI/CD to just run the normal shell.

I originally thought I could just bring in -p nodejs nodejs.python nodePackages.node-gyp, but the node-gyp executed by node-gyp-build still isn't aware of the nodedir setting. So I've changed it to just run nix-shell so it can use the shellHook inside our shell.nix. Now it can stop downloading the nodejs headers since Nix already has them.

@CMCDragonkai
Member Author

CMCDragonkai commented May 6, 2022

Something I've noticed is that deep strict equality doesn't work on arrays and objects returned from the native module.

I'm not sure why, but even assert.deepStrictEqual cannot properly find the difference.

But otherwise, using JSON.stringify returns what you'd expect.

So there must be some difference between objects created in JS land and objects created in N-API and returned, that prevents deep strict equality from working.

This affects jest testing and any usage of assert.deepStrictEqual.
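
As a workaround in the tests, comparing serialised forms behaves as expected. A sketch (using the makeArray demo function from the task list; its exact output here is assumed for illustration):

import nodeGypBuild from 'node-gyp-build';

const native = nodeGypBuild(__dirname);

// assert.deepStrictEqual(native.makeArray(3), [0, 1, 2]) reports a spurious
// difference, but comparing JSON serialisations works as expected:
expect(JSON.stringify(native.makeArray(3))).toBe(JSON.stringify([0, 1, 2]));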

Raised issue on jest: jestjs/jest#12814

@CMCDragonkai
Member Author

CMCDragonkai commented May 6, 2022

For some quick debugging we can use:

for(size_t i = 0; i < bufferSize; i++) {
  printf("%#x ", buffer[i]);
}

But I'm still getting refamiliarised with C/C++ code here... and I suspect a better debugging experience can be had by using a debugger in the future.

Now during jest testing, the printf statements don't get output by jest for some reason. So they only show up when using ts-node.

Furthermore, programming at the NAPI level is really quite low level. The NAPI macros are not really comprehensive; there are quite a few areas where they are lacking. It's a C-focused API. So if we are really doing hardcore C++, it's much better to use https://github.com/nodejs/node-addon-api. The alternative is Rust.

@CMCDragonkai
Member Author

We now have 5 example NAPI functions:

  1. Addition
  2. Multiplication
  3. Array creation
  4. Object creation
  5. Set property on object

I think this should allow us to proceed to the next stage.

@CMCDragonkai
Member Author

The TS interface is added. All we needed to do was define the interface in index.ts, and then assign the type to native when exporting it.
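
Roughly, index.ts ends up looking like this sketch (the function names and signatures are assumptions based on the demo functions above):

import nodeGypBuild from 'node-gyp-build';

interface Native {
  addOne(n: number): number;
  timesTwo(n: number): number;
  makeArray(length: number): Array<number>;
  setProperty(obj: object, key: string, value: unknown): void;
}

// Cast the untyped bindings object to the declared interface
const native = nodeGypBuild(__dirname) as Native;

export default native;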

@CMCDragonkai
Member Author

Now on task 3.

  1. Fix the nix expressions to support node-gyp-build and other build scripts, and see if we can eliminate our postInstall hook, by relying on package.json hooks instead

We need to first integrate our node-gyp-build mechanism into our nix-build and most likely node2nix derived expression.

The goal is so that nix-build ./release.nix -A application continues to work, in that it should have all the build tools to compile from source and produce a working nix package.

Since the resulting .node binary is kept under the build/ directory, I wonder whether this will also end up in the nix store output?

@CMCDragonkai
Member Author

Searching through the current node2nix-produced master version of typescript-demo-lib, I can see that the .node binaries are all stored inside the nix store path. In fact they are all under a directory called prebuilds. These are then placed in the root of each package (along with the package.json).

Also take note that the .node binaries exist for all of the architectures. For example under the fd-lock package:

                  │   ├── fd-lock
                  │   │   ├── binding.cc
                  │   │   ├── binding.gyp
                  │   │   ├── example.js
                  │   │   ├── .github
                  │   │   │   ├── FUNDING.yml
                  │   │   │   └── workflows
                  │   │   │       └── build-release.yml
                  │   │   ├── index.js
                  │   │   ├── LICENSE
                  │   │   ├── package.json
                  │   │   ├── prebuilds
                  │   │   │   ├── darwin-arm64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── darwin-x64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── linux-arm
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── linux-arm64
*                 │   │   │   │   ├── electron.napi.node
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── linux-x64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── win32-ia32
*                 │   │   │   │   └── node.napi.node
                  │   │   │   └── win32-x64
*                 │   │   │       └── node.napi.node
                  │   │   ├── README.md
                  │   │   └── test.js

We can see here that the compiled executables are all called node.napi.node (it appears target naming doesn't matter here).

But they are also in platform-specific directories named os-arch, like darwin-arm64, linux-x64, and win32-ia32.

In one particular case, we also have electron.napi.node, which must be loaded if we were running in electron.

This makes fd-lock a useful package for observing how prebuildify is actually being used. I can see from its release page that their releases correspond to npm tags, and have the v1.2.0-windows-all.tar and other tarballs, which contain everything that would be untarred into the prebuilds directory.

Furthermore in fd-lock, the actual building of all the cross-platform binaries is done via the github workflows, as prebuildify itself is not capable of cross-compilation. So it's basically how we do it with pkg.

This also reveals that node2nix is just downloading the precompiled binaries without bothering to actually compile from source; if it did compile from source, it would likely not simply download the precompiled binaries. This is interesting because of svanderburg/node2nix#190.

@CMCDragonkai
Member Author

CMCDragonkai commented May 6, 2022

For leveldown it looks like this:

                  │   │   ├── prebuilds
                  │   │   │   ├── android-arm
*                 │   │   │   │   └── node.napi.armv7.node
                  │   │   │   ├── android-arm64
*                 │   │   │   │   └── node.napi.armv8.node
                  │   │   │   ├── darwin-x64+arm64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── linux-arm
*                 │   │   │   │   ├── node.napi.armv6.node
*                 │   │   │   │   └── node.napi.armv7.node
                  │   │   │   ├── linux-arm64
*                 │   │   │   │   └── node.napi.armv8.node
                  │   │   │   ├── linux-x64
*                 │   │   │   │   ├── node.napi.glibc.node
*                 │   │   │   │   └── node.napi.musl.node
                  │   │   │   ├── win32-ia32
*                 │   │   │   │   └── node.napi.node
                  │   │   │   └── win32-x64
*                 │   │   │       └── node.napi.node

A lot more android flavours are available, and even a choice between glibc and musl.

And for UTP:

                  │   │   ├── prebuilds
                  │   │   │   ├── darwin-arm64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── darwin-x64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── linux-x64
*                 │   │   │   │   └── node.napi.node
                  │   │   │   ├── win32-ia32
*                 │   │   │   │   └── node.napi.node
                  │   │   │   └── win32-x64
*                 │   │   │       └── node.napi.node

@CMCDragonkai
Member Author

Ok, the next step is to introduce prebuildify and to see how our CI/CD and nix expressions can handle it. I suspect that running nix-build should build from source, while downstream packages installing this package would get the binaries from prebuilds, as we see with leveldown...

There's an interesting discussion in svanderburg/node2nix#187 about slimming the resulting docker image, which probably contains nodejs and many other build-time dependencies.

@CMCDragonkai
Member Author

Ok, so prebuildify is not packaged under nodePackages atm, so you cannot match the version in our nix expressions with the one in package.json. However, if we want to use it as a CLI in our nix-build, then we will need to push it into nodePackages to maintain the same version. Otherwise, putting it into our run scripts will work for now.

@CMCDragonkai
Member Author

CMCDragonkai commented May 6, 2022

So basically run this:

prebuildify --napi --strip # builds all NAPI binaries
prebuildify --napi --strip --arch=ia32 # builds 32 bit NAPI binaries

The default prebuildify --napi --strip will build according to the host platform. Cross compilation is only possible if the underlying tools support it, and at the moment they don't really. So I just added 2 run scripts:

    "prebuild": "prebuildify --napi --strip",
    "prebuild-ia32": "prebuildify --napi --strip --arch=ia32",

And these along with potentially other things will just be executed within the CI/CD to produce all the relevant .node binaries.

There's probably a structure expected to be uploaded to the release page.

Furthermore it appears that the prebuilds directory should be kept in the npm distribution.

However it is expected that the prebuilds directory would contain all the relevant targets.

»» ~/Projects/tmp/randomdir/node_modules/fd-lock
 ♖ tree .                                                         (master) pts/3 20:43:35
.
├── binding.cc
├── binding.gyp
├── example.js
├── index.js
├── LICENSE
├── package.json
├── prebuilds
│   ├── darwin-arm64
│   │   └── node.napi.node
│   ├── darwin-x64
│   │   └── node.napi.node
│   ├── linux-arm
│   │   └── node.napi.node
│   ├── linux-arm64
│   │   ├── electron.napi.node
│   │   └── node.napi.node
│   ├── linux-x64
│   │   └── node.napi.node
│   ├── win32-ia32
│   │   └── node.napi.node
│   └── win32-x64
│       └── node.napi.node
├── README.md
└── test.js

8 directories, 16 files

Whereas for me, I only have:

[nix-shell:~/Projects/TypeScript-Demo-Lib]$ tree prebuilds/
prebuilds/
└── linux-x64
    └── node.napi.node

1 directory, 1 file

So the question is, how do we get all the other compiled binaries, if we are developing on Linux, and we want to publish our package?

The realisation here is that you don't actually need to publish the binaries to GitHub releases. The usage of prebuildify and node-gyp-build is such that the binaries are all just uploaded directly to the npm package registry; thus npm install ends up installing the binaries, and there's no separate download step.

@CMCDragonkai
Member Author

CMCDragonkai commented May 6, 2022

It appears the developer is expected to just go and fetch all the prebuilt binaries for all the platforms from a CI system, and then put them together manually into the prebuilds directory before calling npm publish.

Publishing is mildly more complicated, because npm publish must be done after compiling and fetching prebuilt binaries (typically in CI).

https://github.com/prebuild/prebuild-install

Crucial information is located here: https://github.com/prebuild/prebuild-install as prebuildify is an evolution of prebuild-install.

This basically means after the CI/CD has built everything and pushed it to the release page, you then download them and put them together into the prebuilds directory.

This seems rather clunky; I'd prefer that we have a single command to assemble the prebuilds directory. The solution in prebuild/prebuildify#63 might be helpful for doing this too.


If we are going to do this, our CI/CD has to run the prebuildify commands on all the different platforms, meaning that PRs which contain new features will result in new compilations on each platform.

It also means npm publish should ultimately be done by the CI/CD instead of the developer...

@CMCDragonkai
Member Author

The CI/CD workflow used by prebuildify ecosystem is ultimately explained here: https://github.com/mafintosh/prebuildify-ci

Basically the proposed flow is:

  1. Create a new NPM tag with npm version ...
  2. Do a git push && git push --tags to push the tags to the repo on GitHub
  3. The CI/CD is now triggered on the new version/tag to run prebuildify on each of its virtual environments: Linux, Mac, Windows... maybe even ARM, etc. (Not sure about android/ios.)
  4. The CI/CD publishes the prebuilds to the GH release page (and the release page corresponds to the new tag)
  5. Once all the CI/CD builds are done, on your local dev env, use prebuildify-ci download (which needs to be configured for download capability), and that downloads all of them to the ./prebuilds directory
  6. Now you can do npm publish, which will take the ./dist AND ALL of the ./prebuilds binaries together and put them into the NPM package

Right now we cannot use prebuildify-ci because its workflow doesn't match our Gitlab CI/CD architecture.

This workflow does have some problems:

  1. It is manual, requiring an asynchronous step of waiting for CI/CD to finish before finally downloading and doing an npm publish.
  2. It gathers ALL the binaries, even the unnecessary ones, into the same npm package, and that means users end up downloading binaries that are useless for their host platform.

The solution to 1. is to make use of the CI/CD features more, such that npm publish itself is part of the CI/CD process. The only problem is that publishing occurs with a specific publish token, and now the bot must have the right authentication token to publish on behalf of MatrixAI.

The solution to 2. involves these new developments:

Basically the idea is that each platform build goes into their own NPM package, and optionalDependencies in package.json is used to download the correct one. I'm not sure how it works entirely so we shall see.
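
For illustration, a loader in that scheme might look like the following sketch (the package names here are hypothetical; the idea is that only the package matching the host platform actually gets installed via optionalDependencies):

// Hypothetical per-platform packages, e.g. @matrixai/native-linux-x64,
// declared under optionalDependencies so only the matching one installs.
function loadPlatformPackage(base: string): any {
  const candidate = `${base}-${process.platform}-${process.arch}`;
  try {
    return require(candidate);
  } catch (e) {
    throw new Error(`No prebuilt package installed for ${candidate}`);
  }
}

const native = loadPlatformPackage('@matrixai/native');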

There's an overall problem with distributing binaries: the lack of determinism/reproducibility and of supply chain verification. These binaries are just put into the npm package without any oversight, and no source code verification can be done. It does seem that the existence of binaries is only for convenience, and a good build should be done from source.

@CMCDragonkai
Member Author

Getting CI/CD to do everything will require the more sophisticated release pipeline that was first specced out in our strategy letter 3 https://gitlab.com/MatrixAI/Employees/matrix-team/-/issues/8#note_885403611.

This might be complicated if we are going to maintain 2 branches for TypeScript-Demo-Lib. I'm starting to think this should be a separate repository, TypeScript-Demo-Lib-Native. We would have to request another repository to be added to the gitlab mac beta.

@CMCDragonkai CMCDragonkai force-pushed the native branch 3 times, most recently from 2f69c75 to ea62a14 on May 9, 2022 05:42
@CMCDragonkai
Member Author

I'm going to push this to https://github.com/MatrixAI/TypeScript-Demo-Lib-Native later at the end, and then submit request to https://gitlab.com/gitlab-com/macos-buildcloud-runners-beta/-/issues

@CMCDragonkai
Member Author

For arm64 binary on macos:

Due to the mandatory code signing requirement, before the
executable is distributed to end users, it must be signed.
Otherwise, it will be immediately killed by kernel on launch.
An ad-hoc signature is sufficient.
To do that, run pkg on a Mac, or transfer the executable to a Mac
and run "codesign --sign - ", or (if you use Linux)
install "ldid" utility to PATH and then run pkg again

The ldid command seems strange and is not normally available, so we could do this with codesign --sign later.

@CMCDragonkai
Member Author

The final integration runs are failing. I suppose it's because when using pkg, we are using absolute paths for the assets. We may need to change those to relative paths instead.

@CMCDragonkai
Member Author

CMCDragonkai commented May 17, 2022

I noticed that the windows integration run command doesn't fail even when the command fails. I guess the ForEach is rather forgiving?

$ Get-ChildItem -File ./builds/*-win-* | ForEach {& $_.FullName}
pkg/prelude/bootstrap.js:1833
      throw error;
      ^
Error: No native build was found for platform=win32 arch=x64 runtime=node abi=93 uv=1 libc=glibc node=16.14.2
    loaded from: C:\snapshot\typescript-demo-lib\node_modules\leveldown
    at Function.load.path (C:\snapshot\typescript-demo-lib\node_modules\node-gyp-build\index.js:60:9)
    at load (C:\snapshot\typescript-demo-lib\node_modules\node-gyp-build\index.js:22:30)
    at Object.<anonymous> (C:\snapshot\typescript-demo-lib\node_modules\leveldown\binding.js:1:43)
    at Module._compile (node:internal/modules/cjs/loader:1103:14)
    at Module._compile (pkg/prelude/bootstrap.js:1894:32)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1157:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Module.require (node:internal/modules/cjs/loader:1005:19)
    at Module.require (pkg/prelude/bootstrap.js:1812:31)

Either that, or the pkg nodejs executable is not returning the correct exit code. We should check that on matrix-win-1.

See: https://gitlab.com/MatrixAI/open-source/typescript-demo-lib/-/jobs/2468682176

@CMCDragonkai
Member Author

The integration runs are still failing. I need to debug this locally, by seeing why our bundled executable isn't running.

Right now both windows and macos take quite a bit of time to start up; we could aid this by changing the cache:

  • For chocolatey we can change the choco cache with choco config set cacheLocation fullPath.
  • For macosx HOMEBREW_CACHE which can be set to "$(pwd)/tmp/Homebrew".

This might be enough to proceed. Nix caching is even more difficult since fundamentally it's a network cache, and we aren't able to get gitlab to cache /nix/store since it is outside the project root. However, with our own s3 caching and the nixos upstream cache, the difference is probably marginal, and most importantly we can cache the base foundation in our docker image itself. So I'm wary of installing nix directly on the macos runner, but it's worth checking out to compare too.

@CMCDragonkai
Member Author

CMCDragonkai commented May 18, 2022

It turns out xcode-select --install is a GUI program, so there's no point running:

xcode-select --install || true

Because if xcode wasn't installed, we would end up with this problem anyway.

The automatic ways of installing xcode are actually quite complicated https://apple.stackexchange.com/questions/107307/how-can-i-install-the-command-line-tools-completely-from-the-command-line (and very flaky; none of it is intended to be automatic), so for now we will simply expect that the gitlab runner already has xcode installed and we don't need to install it ourselves.

I found this out by trying to install the new brew on the newly updated matrix-mac-1.

@CMCDragonkai
Member Author

CMCDragonkai commented May 18, 2022

The integration runs are still failing. I need to debug this locally, by seeing why our bundled executable isn't running.

Right now both windows and macos take quite a bit of time to start up; we could aid this by changing the cache:

  • For chocolatey we can change the choco cache with choco config set cacheLocation fullPath.
  • For macosx HOMEBREW_CACHE which can be set to "$(pwd)/tmp/Homebrew".

This might be enough to proceed. Nix caching is even more difficult since fundamentally it's a network cache, and we aren't able to get gitlab to cache /nix/store since it is outside the project root. However, with our own s3 caching and the nixos upstream cache, the difference is probably marginal, and most importantly we can cache the base foundation in our docker image itself. So I'm wary of installing nix directly on the macos runner, but it's worth checking out to compare too.

The caching didn't really save much time for either windows or macos.

In fact, I don't think choco is caching anything. It seems to redownload each time. See discussion: chocolatey/choco#2698 and chocolatey/choco#2134

Therefore I'm removing these:

    # Chocolatey cache is only used by the windows runner
    - ./tmp/choco

and

    - choco config set cacheLocation "${env:CI_PROJECT_DIR}\tmp\choco"

Homebrew marginally improves perf by using the cache, but it's not significant.

@CMCDragonkai
Member Author

I noticed that the windows integration run command doesn't fail even when the command fails. I guess the ForEach is rather forgiving?

$ Get-ChildItem -File ./builds/*-win-* | ForEach {& $_.FullName}
pkg/prelude/bootstrap.js:1833
      throw error;
      ^
Error: No native build was found for platform=win32 arch=x64 runtime=node abi=93 uv=1 libc=glibc node=16.14.2
    loaded from: C:\snapshot\typescript-demo-lib\node_modules\leveldown
    at Function.load.path (C:\snapshot\typescript-demo-lib\node_modules\node-gyp-build\index.js:60:9)
    at load (C:\snapshot\typescript-demo-lib\node_modules\node-gyp-build\index.js:22:30)
    at Object.<anonymous> (C:\snapshot\typescript-demo-lib\node_modules\leveldown\binding.js:1:43)
    at Module._compile (node:internal/modules/cjs/loader:1103:14)
    at Module._compile (pkg/prelude/bootstrap.js:1894:32)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1157:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Module.require (node:internal/modules/cjs/loader:1005:19)
    at Module.require (pkg/prelude/bootstrap.js:1812:31)

Either that, or the pkg nodejs executable is not returning the correct exit code. We should check that on matrix-win-1.

See: https://gitlab.com/MatrixAI/open-source/typescript-demo-lib/-/jobs/2468682176

The reason for this is that the pkg configuration JSON has to:

  1. Always use relative paths; absolute paths will end up being put in the wrong place in the virtual filesystem
  2. The configuration JSON file itself MUST be in the same directory as the project root, together with package.json; otherwise relative paths are resolved relative to the location of the configuration file
  3. The file must have the file extension .json, otherwise pkg may load it like a javascript file.

@CMCDragonkai
Member Author

For arm64 binary on macos:

Due to the mandatory code signing requirement, before the
executable is distributed to end users, it must be signed.
Otherwise, it will be immediately killed by kernel on launch.
An ad-hoc signature is sufficient.
To do that, run pkg on a Mac, or transfer the executable to a Mac
and run "codesign --sign - ", or (if you use Linux)
install "ldid" utility to PATH and then run pkg again

The ldid command seems strange and is not normally available, so we could do this with codesign --sign later.

I've requested it to be packaged at top level here: NixOS/nixpkgs#173467.

But there's a fork of ldid available in pkgs.ios-cross-compile.ldid.


@CMCDragonkai
Member Author

CMCDragonkai commented May 19, 2022

Apple now requires mandatory code signing and notarization for all executables, application bundles, and packages. This applies to all iOS and MacOS devices. Any program attempting execution without a legitimate signature is automatically closed. In a nutshell, these are our steps:

  1. Setup the X.509 Developer ID Application Certificate with a RSA 2048 keypair
  2. Create an application specific password for the xcrun altool program
  3. Prepare an entitlements.plist file that contains all the OS capabilities that our executable needs.
  4. Create a bin directory and put our executable into the bin directory.
  5. Use codesign on the bin/typescript-demo-lib.
  6. Create a zip archive containing the bin directory
  7. Use xcrun altool to upload the application to Apple so they can notarize the application
  8. Wait for the confirmation email from Apple
  9. Verify that our application works

Setting up the Developer ID Application Certificate

In order to do code-signing on our CLI executable, we must acquire an X.509 certificate. It is called the "Developer ID Application" certificate.

This certificate is necessary to distribute applications outside of the Mac app store. This is because we are distributing this CLI executable directly out of GitHub release page.

Apple has other certificates intended for other distribution purposes. Each certificate requires its own keypair. You cannot use the same keypair for different certificates.

After signing with the certificate, the application must also be notarized with Apple, which tells Apple what kind of capabilities the executable requires.

To acquire this certificate, we must have an account on developer.apple.com. We had already set this up back in 2020.

Although the X.509 certificate system can be managed entirely with openssl, Apple's workflow is designed to be done in the GUI, so we will follow that process.

Launch the Keychain Access program.

By default there is already a <key> keypair, we will be creating a new keypair for this.

Click Keychain Access -> Certificate Assistant -> Request a Certificate From a Certificate Authority.

This will open up a panel where you can fill in:

  • User Email Address - $CompanyEmail
  • Common Name - MATRIX AI PTY. LTD.

The CA Email Address is unnecessary.

Select Saved to disk.

The keypair must be RSA 2048 dictated by Apple, this is by default, so we don't need to change anything for the keypair generation.

Once this is done, the keypair will appear in your Keychain Access program. And the CSR "Certificate Signing Request" file will appear where you saved it.

The keypair in your Keychain Access will be named according to the Common Name. This is quite ambiguous. So you need to rename both the public and private keys to MATRIX AI PTY. LTD. - Developer ID Application directly in the GUI. This is just a tag in the Keychain Access program, it will not affect anything else.

Then go to https://developer.apple.com/account/resources/certificates/add and select Developer ID Application.

On the next page, select the latest Profile Type, and then provide the CSR file.

The certificate will then be generated, and you can download it to your computer.

Now you need to import this into your Keychain Access, this will import it into the default keychain, which should be login.

security import ~/Downloads/developerID_application.cer

Once this is done, you will be able to check whether the certificate was properly imported:

security find-identity

It should show Developer ID Application: MATRIX AI PTY. LTD. ($TeamIdentifier), where $TeamIdentifier is 10 alphanumeric characters. This is the actual Common Name string stored on the X.509 certificate. The Keychain Access GUI was lying to you!! Copy the full common name; we will need it later for code signing.

If you selected the latest profile type, it is possible that the Mac has an outdated Developer ID Certificate Authority intermediate certificate. To acquire the latest intermediate certificates, go to: https://apple.com/certificateauthority/. For the Developer ID Application certificate based on the G2 Sub-CA (Xcode 11.4.1 or later) profile, we need the Developer ID - G2 intermediate certificate. Download that, and import it with:

security import ./DeveloperIDG2CA.cer

Note that if you have the Xcode GUI application, you can set up your certificates directly there. This involves going to Preferences -> Accounts and logging into your Apple developer account, then clicking on Manage Certificates. Then click on the + symbol and select Developer ID Application. Doing this will double up the keys and certificates in your Keychain Access application; you can proceed to delete the old key and certificate. The Xcode GUI application will also download any necessary intermediate certificates.

Only a maximum of 5 Developer ID certificates can be created. Revoking any of them requires a support email to Apple: go to https://developer.apple.com/contact/topic/select. It requires you to be signed in as an Apple developer.

Additionally you can use these command line commands to manipulate the keypairs:

# Extract private key as encrypted PEM from p12 (it will now have a password)
openssl pkcs12 \
  -nocerts \
  -in './MATRIX AI PTY. LTD. - Developer ID Application.p12' \
  -out './MATRIX AI PTY. LTD. - Developer ID Application'

# Extract private key as unencrypted PEM from p12
openssl pkcs12 \
  -nocerts \
  -nodes \
  -in './MATRIX AI PTY. LTD. - Developer ID Application.p12' \
  -out './MATRIX AI PTY. LTD. - Developer ID Application'

# Generate public key from private key
openssl rsa \
  -pubout \
  -in './MATRIX AI PTY. LTD. - Developer ID Application' \
  -out './MATRIX AI PTY. LTD. - Developer ID Application.pub'

Setting up Application Specific Password for xcrun altool

The xcrun altool requires an application password. What exactly is an application password and why does altool require it?

According to https://support.apple.com/en-us/HT204397, they are just tokens that allow third party applications to access your Apple cloud resources.

In this case it appears altool or "Application Loader Tool" is considered a third party application.

Follow these instructions https://support.apple.com/en-us/HT204397#generate to generate the application password. Sign into https://appleid.apple.com, and create it under the Sign-In and Security section; just name it Application Loader.

Preparing the entitlements

MacOS now follows a sort of capability-based security model, which means we must request the capabilities required by our application from the Apple mothership. Based on information from electron-userland/electron-builder#3940, we need to create a file entitlements.plist with these 3 permissions:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>com.apple.security.cs.allow-jit</key>
  <true/>
  <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
  <true/>
  <key>com.apple.security.cs.disable-library-validation</key>
  <true/>
</dict>
</plist>

Put this in entitlements.plist next to the executable in the same directory.

Additional entitlements may be needed for a more complex application; this covers typescript-demo-lib.

Setting up the bin directory

It's not possible to directly notarize just a CLI executable. Apple only accepts pkg, dmg and zip.

Normally a pkg would be ideal, as this will perform some installation instructions.

But for our pre-releases, it makes more sense to use zip, and we are distributing just the executable itself.

So we will prepare to create a zip archive just to do this notarization process.

mkdir ./bin
mv ./typescript-demo-lib ./bin

Code Signing

For the code signing process, we will sign from outside the bin directory. It is critical to do the codesigning from the parent directory.

codesign \
  --force \
  --deep \
  --options runtime \
  --entitlements ./entitlements.plist \
  --timestamp \
  --sign "Developer ID Application: MATRIX AI PTY. LTD. ($TeamIdentifier)" \
  ./bin/typescript-demo-lib

Replace the $TeamIdentifier with the 10 alphanumeric characters identifying the Apple developer account.

This will prompt you to unlock Keychain Access, which means it does not work unattended; you must be using the GUI.

We can inspect our signing metadata:

codesign \
  --display \
  --requirements - \
  --verbose \
  ./bin/typescript-demo-lib

We can also verify our executable:

codesign \
  --verify \
  --verbose \
  ./bin/typescript-demo-lib

Zipping

Now we can zip our archive up with the right structure:

zip typescript-demo-lib.zip ./bin/typescript-demo-lib

Notarizing

This will upload the zip archive to be notarized by Apple.

xcrun altool \
  --notarize-app \
  --type macos \
  --asc-provider "$TeamIdentifier" \
  --primary-bundle-id 'ai.matrix.typescript-demo-lib' \
  --username "$AppleDeveloperAccountEmail" \
  --password "$ApplicationLoaderPassword" \
  --file ./typescript-demo-lib.zip

The --primary-bundle-id should be using a reverse domain with the name of the application.

Replace $AppleDeveloperAccountEmail with the developer account email.

Replace $ApplicationLoaderPassword with the application specific password created.

After uploading, it should give us a response like:

No errors uploading 'typescript-demo-lib.zip'.
RequestUUID = 3af4e56f-162b-75bc-827f-7233f92bf20c

Waiting on Confirmation

Apple will send an email to $AppleDeveloperAccountEmail in 10 - 15 minutes. Once this email confirms the notarization, the application can run.

Verify Notarization

spctl \
  --assess \
  --type open \
  --context context:primary-signature \
  --verbose \
  --verbose \
  ./bin/typescript-demo-lib

Which should show:

./bin/typescript-demo-lib: accepted
source=Notarized Developer ID
origin=Developer ID Application: MATRIX AI PTY. LTD. ($TeamIdentifier)

Now try and execute it ./bin/typescript-demo-lib.


Stapling

Right now this means that if the executable is distributed to arbitrary MacOS machines, they will need to be online, because the Gatekeeper system will check online to see whether the executable is signed and notarized. It only does this the first time it launches.

If we were using a .app distributed inside a pkg archive, then we could staple the notarisation to the pkg, for example: xcrun stapler staple typescript-demo-lib.pkg. To do this, we would need to create the pkg archive instead, use productsign on the pkg archive as well, and then run the stapler after the notarization process is completed.

Unfortunately stapling does not work on zip archives or the bare executable. We will have to try this later.


See also:

@CMCDragonkai
Member Author

CMCDragonkai commented May 19, 2022

Every time xcode is installed or updated, do this:

sudo xcode-select --reset

Otherwise the xcode command line tools might be out of date.

Automating Code Signing and Notarization in the CI/CD

The codesign command only exists on MacOS. Suppose we want to run this on a different MacOS machine (such as in our CI/CD situation): we need to first transfer both the certificate and the keypair to that computer before we can use them on the new MacOS.

First we must export our keypair and certificate as a p12 or "Personal Information Exchange" file. This file must contain both the certificate and the private key; the public key is not necessary.

This is TBD.

The relevant resources to continue down this path are:

Basically we need to load our Developer ID key into the macos runner, and then perform the above tasks unattended to be able to do code signing and notarisation automatically.

Actually it appears you don't need the main keychain password to be able to do this. Let's see.

@CMCDragonkai
Member Author

Since I've confirmed that the staging setup works right now, I'm transitioning this PR to https://github.com/MatrixAI/TypeScript-Demo-Lib-Native

@CMCDragonkai
Member Author

All relevant issues have been created in this repo and typescript-demo-lib-native. Remaining development will continue on MatrixAI/TypeScript-Demo-Lib-Native#2. We will be requesting access to macos for that repository too.

@CMCDragonkai
Member Author

We will have some changes brought in here after first merging in MatrixAI/TypeScript-Demo-Lib-Native#2
