
Releases: NVIDIA/nvidia-container-toolkit

v1.14.1

08 Sep 13:07

What's Changed

  • Fixed a bug where the contents of /etc/nvidia-container-runtime/config.toml were ignored by the NVIDIA Container Runtime Hook.

Changes in libnvidia-container

  • Use libelf.so from elfutils-libelf-devel on RPM-based systems, since the Mageia repositories hosting pmake and bmake have been removed.

Full Changelog: v1.14.0...v1.14.1

v1.14.0

30 Aug 14:04

This is a promotion of the (internal) v1.14.0-rc.3 release to GA.

This release of the NVIDIA Container Toolkit adds the following features:

  • Improved support for the Container Device Interface (CDI) on Tegra-based systems
  • Simplified packaging and distribution. We now generate only .deb and .rpm packages that are compatible with all supported distributions instead of releasing distribution-specific packages (see the install example after this list).
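
For reference, installing the unified packages on a Debian-based system looks roughly like the following sketch. It assumes the NVIDIA package repository has already been configured, and uses Docker as an example container engine:

    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit
    # Configure Docker to use the NVIDIA runtime, then restart the daemon
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker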

NOTE: This will be the last release that includes the nvidia-container-runtime and nvidia-docker2 packages.

NOTE: This release is a unified release of the NVIDIA Container Toolkit consisting of multiple packages.

The packages for this release are published to the libnvidia-container package repositories.

Full Changelog: v1.13.0...v1.14.0

v1.14.0-rc.3

  • Added support for generating OCI hook JSON file to nvidia-ctk runtime configure command.
  • Remove installation of OCI hook JSON from RPM package.
  • Refactored config for nvidia-container-runtime-hook.
  • Added a nvidia-ctk config command which supports setting config options using a --set flag.
  • Added --library-search-path option to nvidia-ctk cdi generate command in csv mode. This allows folders where libraries are located to be specified explicitly (see the sketch after this list).
  • Updated go-nvlib to support devices that are not present in the PCI device database. This allows the creation of /dev/char symlinks on systems with such devices installed.
  • Added UsesNVGPUModule info function for more robust platform detection. This is required on Tegra-based systems where libnvidia-ml.so is also supported.
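
A minimal sketch of the new commands; the config key, library search path, and output path are illustrative only, and depending on the version the resulting config may be printed to stdout rather than written in place:

    # Set a single option in the NVIDIA Container Toolkit config (key/value are examples)
    sudo nvidia-ctk config --set accept-nvidia-visible-devices-as-volume-mounts=true

    # Generate a CDI specification in CSV mode with an explicit library search path
    sudo nvidia-ctk cdi generate --mode=csv \
        --library-search-path=/usr/lib/aarch64-linux-gnu \
        --output=/etc/cdi/nvidia.yaml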

Changes from libnvidia-container v1.14.0-rc.3

  • Generate the nvc.h header file automatically so that the version does not need to be explicitly bumped.

Changes in the toolkit-container

  • Set NVIDIA_VISIBLE_DEVICES=void to prevent injection of NVIDIA devices and drivers into the NVIDIA Container Toolkit container (illustrated below).
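
For context, setting NVIDIA_VISIBLE_DEVICES=void instructs the NVIDIA runtime not to inject any devices or driver files; a minimal illustration, assuming Docker is configured with the nvidia runtime (the image name is only an example):

    # No NVIDIA devices or driver libraries should be injected into this container
    docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=void \
        ubuntu:22.04 sh -c 'ls /dev | grep -i nvidia || echo "no NVIDIA devices injected"'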

v1.14.0-rc.2

  • Fix bug causing incorrect nvidia-smi symlink to be created on WSL2 systems with multiple driver roots.
  • Remove dependency on coreutils when installing package on RPM-based systems.
  • Create output folders if required when running nvidia-ctk runtime configure.
  • Generate default config as post-install step.
  • Added support for detecting GSP firmware at custom paths when generating CDI specifications.
  • Added logic to skip the extraction of image requirements if NVIDIA_DISABLE_REQUIRES is set to true (see the sketch below).
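
The NVIDIA_DISABLE_REQUIRES variable skips the evaluation of the image's CUDA requirement constraints; a hedged sketch (the image tag is only an example):

    # Bypass the image's "requires" checks, e.g. when a version constraint is known to be safe to ignore
    docker run --rm --gpus all -e NVIDIA_DISABLE_REQUIRES=true \
        nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi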

Changes from libnvidia-container v1.14.0-rc.2

  • Include Shared Compiler Library (libnvidia-gpucomp.so) in the list of compute libraries.

Changes in the toolkit-container

  • Ensure that common envvars have higher priority when configuring the container engines.
  • Bump CUDA base image version to 12.2.0.
  • Remove installation of nvidia-experimental runtime. This is superseded by the NVIDIA Container Runtime in CDI mode.

v1.14.0-rc.1

  • chore(cmd): Fixing minor spelling error. by @elliotcourant in #61
  • Add support for updating containerd configs to the nvidia-ctk runtime configure command (see the example after this list).
  • Create file in /etc/ld.so.conf.d with permissions 644 to support non-root containers.
  • Generate CDI specification files with 644 permissions to allow rootless applications (e.g. podman).
  • Add nvidia-ctk cdi list command to show the known CDI devices.
  • Add support for generating merged devices (e.g. all device) to the nvcdi API.
  • Use a wildcard pattern to locate libcuda.so when generating a CDI specification to support platforms where a patch version is not specified.
  • Update go-nvlib to skip devices that are not MIG capable when generating CDI specifications.
  • Add nvidia-container-runtime-hook.path config option to specify NVIDIA Container Runtime Hook path explicitly.
  • Fix bug in creation of /dev/char symlinks by failing operation if kernel modules are not loaded.
  • Add option to load kernel modules when creating device nodes
  • Add option to create device nodes when creating /dev/char symlinks
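
A short sketch of the containerd configuration flow and the new listing command; restarting containerd is shown for completeness and the service name may differ by distribution:

    # Add the NVIDIA runtime to the containerd configuration, then restart containerd
    sudo nvidia-ctk runtime configure --runtime=containerd
    sudo systemctl restart containerd

    # Show the CDI devices known to the toolkit
    nvidia-ctk cdi list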

Changes from libnvidia-container v1.14.0-rc.1

  • Support OpenSSL 3 with the Encrypt/Decrypt library

Changes in the toolkit-container

  • Bump CUDA base image version to 12.1.1.
  • Unify environment variables used to configure runtimes.

v1.14.0-rc.2

20 Jul 11:24
Pre-release

What's Changed

  • Fix bug causing incorrect nvidia-smi symlink to be created on WSL2 systems with multiple driver roots.
  • Remove dependency on coreutils when installing package on RPM-based systems.
  • Create output folders if required when running nvidia-ctk runtime configure.
  • Generate default config as post-install step.
  • Added support for detecting GSP firmware at custom paths when generating CDI specifications.
  • Added logic to skip the extraction of image requirements if NVIDIA_DISABLE_REQUIRES is set to true.

Changes from libnvidia-container v1.14.0-rc.2

  • Include Shared Compiler Library (libnvidia-gpucomp.so) in the list of compute libraries.

Changes in the toolkit-container

  • Ensure that common envvars have higher priority when configuring the container engines.
  • Bump CUDA base image version to 12.2.0.
  • Remove installation of nvidia-experimental runtime. This is superseded by the NVIDIA Container Runtime in CDI mode.

Full Changelog: v1.14.0-rc.1...v1.14.0-rc.2

v1.13.5

19 Jul 09:56

What's Changed

  • Remove dependency on coreutils when installing the NVIDIA Container Toolkit on RPM-based systems.
  • Added support for detecting GSP firmware at custom paths when generating CDI specifications.

Changes in libnvidia-container

  • Include Shared Compiler Library (libnvidia-gpucomp.so) in the list of compute libraries.

Full Changelog: v1.13.4...v1.13.5

v1.13.4

13 Jul 09:37

This release only bumps the CUDA Base Image version in the toolkit-container component.

What's Changed

Changes in the toolkit-container

  • Bump CUDA base image version to 12.2.0.

Full Changelog: v1.13.3...v1.13.4

v1.13.3

29 Jun 13:50

This is a bugfix release.

What's Changed

  • Generate CDI specification files with 644 permissions to allow rootless applications (e.g. podman).
  • Fix bug causing incorrect nvidia-smi symlink to be created on WSL2 systems with multiple driver roots.
  • Fix bug when using driver versions that do not include a patch component in their version number.
  • Skip additional modifications in CDI mode.
  • Fix loading of kernel modules and creation of device nodes in containerized use cases.

Changes in the toolkit-container

  • Allow the same envvars for all runtime configs

Full Changelog: v1.13.2...v1.13.3

v1.14.0-rc.1

26 Jun 13:02
Pre-release

What's Changed

  • chore(cmd): Fixing minor spelling error. by @elliotcourant in #61
  • Add support for updating containerd configs to the nvidia-ctk runtime configure command.
  • Create file in /etc/ld.so.conf.d with permissions 644 to support non-root containers.
  • Generate CDI specification files with 644 permissions to allow rootless applications (e.g. podman).
  • Add nvidia-ctk cdi list command to show the known CDI devices.
  • Add support for generating merged devices (e.g. all device) to the nvcdi API.
  • Use a wildcard pattern to locate libcuda.so when generating a CDI specification to support platforms where a patch version is not specified.
  • Update go-nvlib to skip devices that are not MIG capable when generating CDI specifications.
  • Add nvidia-container-runtime-hook.path config option to specify NVIDIA Container Runtime Hook path explicitly.
  • Fix bug in creation of /dev/char symlinks by failing operation if kernel modules are not loaded.
  • Add option to load kernel modules when creating device nodes
  • Add option to create device nodes when creating /dev/char symlinks

Changes from libnvidia-container v1.14.0-rc.1

  • Support OpenSSL 3 with the Encrypt/Decrypt library

Changes in the toolkit-container

  • Bump CUDA base image version to 12.1.1.
  • Unify environment variables used to configure runtimes.

Full Changelog: v1.13.1...v1.14.0-rc.1

v1.13.2

19 Jun 14:06

This is a bugfix release.

What's Changed

  • Add nvidia-container-runtime-hook.path config option to specify NVIDIA Container Runtime Hook path explicitly.
  • Fix bug in creation of /dev/char symlinks by failing operation if kernel modules are not loaded.
  • Add option to load kernel modules when creating device nodes
  • Add option to create device nodes when creating /dev/char symlinks
  • Treat failures to open debug log files as non-fatal.

Changes from libnvidia-container v1.13.2

  • Add OpenSSL 3 support in Encrypt / Decrypt library.

Changes in the toolkit-container

  • Bump CUDA base image version to 12.1.1.

Full Changelog: v1.13.1...v1.13.2

v1.13.1

24 Apr 14:33

This is a bugfix release.

What's Changed

  • Update update-ldcache hook to only update ldcache if it exists.
  • Update update-ldcache hook to create /etc/ld.so.conf.d folder if it doesn't exist.
  • Fix failure when libcuda cannot be located during XOrg library discovery.
  • Fix CDI spec generation on systems that use /etc/alternatives (e.g. Debian)

Full Changelog: v1.13.0...v1.13.1

v1.13.0

12 Apr 14:19

This is a promotion of the v1.13.0-rc.3 release to GA.

This release of the NVIDIA Container Toolkit adds the following features:

  • Improved support for the Container Device Interface (CDI) specifications for GPU devices when using the NVIDIA Container Toolkit in the context of the GPU Operator.
  • Added the generation of CDI specifications on WSL2-based systems using the nvidia-ctk cdi generate command. This is now the recommended mechanism for using GPUs on WSL2, and podman is the recommended container engine (see the sketch after this list).
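
A minimal sketch of the recommended WSL2 flow; the output path, image name, and the nvidia.com/gpu device name all assume the default settings:

    # Generate the CDI specification; on WSL2 the appropriate mode is detected automatically
    sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

    # Run a container with all GPUs exposed via CDI using podman
    podman run --rm --device nvidia.com/gpu=all ubuntu:22.04 nvidia-smi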

NOTE: This release is a unified release of the NVIDIA Container Toolkit consisting of multiple packages.

The packages for this release are published to the libnvidia-container package repositories.

Full Changelog: v1.12.0...v1.13.0

v1.13.0-rc.3

  • Only initialize NVML for modes that require it when running nvidia-ctk cdi generate.
  • Prefer /run over /var/run when locating nvidia-persistenced and nvidia-fabricmanager sockets.
  • Fix the generation of CDI specifications for management containers when the driver libraries are not in the LDCache.
  • Add transformers to deduplicate and simplify CDI specifications.
  • Generate a simplified CDI specification by default. This means that entities in the common edits in a spec are not included in device definitions.
  • Also return an error from the nvcdi.New constructor instead of panicking.
  • Detect XOrg libraries for injection and CDI spec generation.
  • Add nvidia-ctk system create-device-nodes command to create control devices.
  • Add nvidia-ctk cdi transform command to apply transforms to CDI specifications.
  • Add --vendor and --class options to nvidia-ctk cdi generate (see the sketch below).
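
A hedged sketch of the new generate options; the vendor, class, and output path shown are illustrative only:

    # Generate a CDI specification using a custom vendor and class for the device names
    sudo nvidia-ctk cdi generate --vendor=example.com --class=gpu \
        --output=/etc/cdi/example-gpu.yaml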

Changes from libnvidia-container v1.13.0-rc.3

  • Fix segmentation fault when RPC initialization fails.
  • Build centos variants of the NVIDIA Container Library with static libtirpc v1.3.2.
  • Remove make targets for fedora35 as the centos8 packages are compatible.

Changes in the toolkit-container

  • Add nvidia-container-runtime.modes.cdi.annotation-prefixes config option that allows the CDI annotation prefixes that are read to be overridden.
  • Create device nodes when generating CDI specification for management containers.
  • Add nvidia-container-runtime.runtimes config option to set the low-level runtime for the NVIDIA Container Runtime

v1.13.0-rc.2

  • Don't fail chmod hook if paths are not injected
  • Only create by-path symlinks if CDI devices are actually requested.
  • Fix possible blank nvidia-ctk path in generated CDI specifications
  • Fix error in postun scriptlet on RPM-based systems
  • Only check NVIDIA_VISIBLE_DEVICES for environment variables if no annotations are specified.
  • Add cdi.default-kind config option for constructing fully-qualified CDI device names in CDI mode
  • Add support for accept-nvidia-visible-devices-envvar-unprivileged config setting in CDI mode
  • Add nvidia-container-runtime-hook.skip-mode-detection config option to bypass mode detection. This allows legacy and cdi mode, for example, to be used at the same time.
  • Add support for generating CDI specifications for GDS and MOFED devices
  • Ensure CDI specification is validated on save when generating a spec
  • Rename --discovery-mode argument to --mode for nvidia-ctk cdi generate

Changes from libnvidia-container v1.13.0-rc.2

  • Fix segfault on WSL2 systems. This was triggered in the v1.12.1 and v1.13.0-rc.1 releases.

Changes in the toolkit-container

  • Add --cdi-enabled flag to toolkit config
  • Install nvidia-ctk from toolkit container
  • Use installed nvidia-ctk path in NVIDIA Container Toolkit config
  • Bump CUDA base images to 12.1.0
  • Set nvidia-ctk path in the
  • Add cdi.k8s.io/* to set of allowed annotations in containerd config
  • Generate CDI specification for use in management containers
  • Install experimental runtime as nvidia-container-runtime.experimental instead of nvidia-container-runtime-experimental
  • Install and configure mode-specific runtimes for cdi and legacy modes

v1.13.0-rc.1

  • Include MIG-enabled devices as GPUs when generating CDI specification
  • Fix missing NVML symbols when running nvidia-ctk on some platforms [#49]
  • Add CDI spec generation for WSL2-based systems to nvidia-ctk cdi generate command
  • Add auto mode to nvidia-ctk cdi generate command to automatically detect a WSL2-based system over a standard NVML-based system.
  • Add mode-specific (.cdi and .legacy) NVIDIA Container Runtime binaries for use in the GPU Operator
  • Discover all gsp*.bin GSP firmware files when generating CDI specification.
  • Align .deb and .rpm release candidate package versions
  • Remove fedora35 packaging targets

Changes in toolkit-container

  • Install nvidia-container-toolkit-operator-extensions package for mode-specific executables.
  • Allow nvidia-container-runtime.mode to be set when configuring the NVIDIA Container Toolkit

Changes from libnvidia-container v1.13.0-rc.1

  • Include all gsp*.bin firmware files if present
  • Align .deb and .rpm release candidate package versions
  • Remove fedora35 packaging targets