
Releases: ceph/ceph-csi

Ceph-CSI v3.6.2 Release

09 Jun 09:11

Changelog or Highlights:

Bug Fixes:

  • Add allowPrivilegeEscalation: true to the containerSecurityContext of the nodeplugin daemonset
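The fix above corresponds to a container securityContext similar to the following sketch of the nodeplugin daemonset fragment (field placement follows the standard Kubernetes container spec; the container name is illustrative):

```yaml
# Illustrative fragment of the nodeplugin daemonset container spec.
# allowPrivilegeEscalation must be true for the privileged mount operations.
containers:
  - name: csi-rbdplugin
    securityContext:
      privileged: true
      allowPrivilegeEscalation: true
```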

NFS

  • Delete the CephFS volume when the export is already removed

RBD

  • Use vaultAuthPath variable name in error msg
  • Support pvc-pvc clone with different sc & encryption
  • Consider rbd as default mounter if not set
  • Fix bug with missing supported_features

CephFS

  • Skip NetNamespaceFilePath if the volume is pre-provisioned

CI improvements

  • Improve logging for kubectl_retry helper
  • Fix commitlint problem
  • Prevent ERR trap inheritance for kubectl_retry

Breaking Changes

None.

Ceph-CSI v3.6.1 Release

22 Apr 13:46

Changelog or Highlights:

Feature:

  • Add network namespace to support pod networking for CephFS and RBD plugins.

Bug Fixes/Enhancements:

NFS

  • Add NFS provisioner & plugin sa to scc.yaml
  • Use go-ceph API for creating/deleting exports
  • Return gRPC status from CephFS CreateVolume failure

RBD

  • Fix logging in ExecuteCommandWithNSEnter
  • Check nbd tool features only for RBD driver
  • Use leases for leader election in RBD omap controller
  • Consider remote image health state for PromoteVolume

Breaking Changes

None.

Ceph-CSI v3.6.0 Release

05 Apr 15:38

We are excited to announce another feature-packed release of Ceph-CSI, v3.6.0. This is another step towards making the enhanced features of the Container Storage Interface (CSI) available with a Ceph cluster in the backend. This release introduces many new features and enhancements to the Ceph-CSI driver, and enables smooth integration with various projects. Here are the changelog/release highlights.

Changelog and Highlights:

New Features

NFS based dynamic provisioner:

Ceph-CSI already creates CephFS volumes that can be mounted over the native CephFS protocol. A new provisioner in Ceph-CSI can create CephFS volumes and include the required NFS CSI parameters so that the NFS CSI driver can mount the CephFS volume over NFS. The CephFS volumes are managed internally by the NFS provisioner and are exposed to consumers only as NFS CSI volumes.
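A StorageClass for this provisioner could look roughly like the following sketch. Parameter names follow the examples shipped with Ceph-CSI; the placeholder values and secret names are assumptions, so verify them against the deployment examples for your release:

```yaml
# Illustrative StorageClass for the NFS-backed CephFS provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-nfs
provisioner: nfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>      # Ceph cluster ID from the CSI configmap
  fsName: myfs                 # CephFS filesystem backing the volume
  nfsCluster: my-nfs           # NFS cluster name (see `ceph nfs cluster ls`)
  server: nfs.example.com      # NFS server address used for mounting
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
reclaimPolicy: Delete
```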

Fuse Mount recovery

Mounts managed by ceph-fuse may get corrupted, e.g. by the ceph-fuse process exiting abruptly, or by its parent container being terminated and taking down its child processes with it. This was an issue for FUSE-based CephFS mounts performed by the Ceph CSI driver; from this release onwards, the driver detects corrupted ceph-fuse mounts and tries to remount them automatically.

AWS KMS Encryption

Ceph-CSI can be configured to use Amazon STS to fetch credentials for accessing Amazon KMS, when the Kubernetes cluster is configured with an OIDC identity provider. In that setup, the credentials are fetched using the OIDC token (service account token).
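A KMS configuration entry for this mode might look like the sketch below. The key names follow the general shape of Ceph-CSI's KMS ConfigMap, but the entry name and secret name are illustrative assumptions; check the encryption documentation for your release:

```yaml
# Sketch of a KMS configuration entry for STS-based Amazon KMS access.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
data:
  config.json: |-
    {
      "aws-sts-metadata-sample": {
        "encryptionKMSType": "aws-sts-metadata",
        "secretName": "ceph-csi-aws-credentials"
      }
    }
```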

Quincy Support

The Ceph CSI driver is now built on top of the Quincy release of Ceph.

Enhancements

  • Improved RBD image flattening support: from this release onwards, only temporary intermediate clones and snapshots will be flattened. See #2190 for more details.

  • Topology aware provisioning has been revisited with this release and enhancements have been made to make it more production ready.

  • Image features are now an optional parameter in the StorageClass: the RBD image features entry in the StorageClass parameter list is optional, so that the default image features of librbd can be used.

  • Added support for the deep-flatten image feature: deep-flatten has long been supported in Ceph and is enabled by default in librbd; this enhancement provides an option to enable it in Ceph-CSI for the RBD images it creates.

  • Added a selinuxMount flag to enable/disable the /etc/selinux host mount inside pods, to support SELinux-enabled filesystems.

  • A new reference tracker has been introduced with this release which is a key-based implementation of a reference counter. This allows accounting in situations where idempotency must be preserved.
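The optional image-features enhancement above means an RBD StorageClass no longer has to pin imageFeatures. A minimal sketch, with illustrative names and placeholders:

```yaml
# Sketch: RBD StorageClass omitting the now-optional imageFeatures parameter,
# so librbd's default image features are applied.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: replicapool
  # imageFeatures: "layering"   # optional from v3.6.0; defaults used if unset
reclaimPolicy: Delete
allowVolumeExpansion: true
```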

Bug Fixes:

  • The BlockMode ReclaimSpace request has been adjusted to avoid data loss during the reclaim-space operation

  • The RBD and CephFS drivers fixed an issue in the node mount operation, to take care of the explicit permissions set by the CSI driver in releases previous to this one, which was causing unwanted pod delays.

  • The RBD force-promote timeout has been increased to 2 minutes, to give rollback enough time to complete.

  • StorageClass map options handling has been corrected to work with the various combinations of input settings from the StorageClass, and made flexible enough to work with different mounters such as krbd, nbd, etc.

  • Previously, restoring a snapshot to a new PVC resulted in a wrong dataPoolName when the initial volume was linked
    to a StorageClass with topology constraints and erasure coding. This has been fixed in this release.

  • The omap deletion in the DeleteSnapshot operation has been fixed with this release, so the omap is cleaned up properly once the subvolume snapshot is deleted.

Rebase

The dependencies of the Ceph CSI driver have been updated to their latest versions, to pick up the various fixes and enhancements they contain.


Breaking Changes

  • RBD thick provisioning support has been removed; see #2795 for more details.

Release Image: docker pull quay.io/cephcsi/cephcsi:v3.6.0

Thanks to the awesome Ceph CSI community for this great release 👍 🎉

Ceph-CSI v3.5.1 Release

21 Jan 17:23

Changelog or Highlights:

Bug Fix:

  • Log cephfs clone failure message in CreateVolumeRequest
  • Use ceph 16.2.7 as the base image
  • Fix RBD parallel PVC creation hang issue

Breaking Changes

None.

Ceph-CSI v3.5.0 Release

14 Jan 17:48

We are excited to announce another feature-packed release of Ceph-CSI, v3.5.0. This is another step towards making the enhanced features of the Container Storage Interface (CSI) available with a Ceph cluster in the backend. This release introduces many new features and enhancements to the Ceph-CSI driver, and enables smooth integration with various projects. Here are the changelog/release highlights.

Ceph CSI 3.5.0 Release Changelog/Highlights

New features

IBM HPCS/Key Protect KMS Support

Ceph CSI added support for the IBM HPCS/Key Protect KMS services. This enables admins to turn on PV encryption using IBM Key Protect services in a Kubernetes or OpenShift cluster. (#2723)

Network Fencing

Ceph CSI now supports network fencing, which allows admins to blocklist malicious clients. (#2738)

Kubernetes in-tree RBD volume migration

Ceph CSI supports migration of in-tree Kubernetes volumes (kubernetes.io/rbd) to the CSI driver (rbd.csi.ceph.com), available with the Kubernetes 1.23 release. All requests to the Kubernetes in-tree provisioner will be redirected to the Ceph CSI RBD driver for its operations. Refer to the migration documentation for more details.

Support for Reclaimspace operation

The Ceph CSI driver has added support for the csi-addons NodeReclaimSpace and ControllerReclaimSpace operations, which the csi-addons sidecar requests from the CSI driver. (#2724)

Ephemeral Volume

Ephemeral volume support has been validated with this release. With ephemeral volumes, a user can specify volumes directly in the pod spec and tie the lifecycle of the PVC to the pod.
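The pattern above can be sketched with a Kubernetes generic ephemeral volume, where the PVC is created from a template in the pod spec and deleted with the pod. The StorageClass name and image are illustrative assumptions:

```yaml
# Sketch: a pod with a generic ephemeral volume backed by an RBD StorageClass.
# The PVC created from volumeClaimTemplate shares the pod's lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - mountPath: /data
          name: scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: csi-rbd
            resources:
              requests:
                storage: 1Gi
```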

RWOP PVC access mode

By advertising the proper capabilities introduced in the latest CSI spec (1.5), the Ceph CSI driver has been validated against the RWOP (ReadWriteOncePod) PVC access mode recently introduced in Kubernetes.
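Requesting that access mode is a one-line change in the PVC spec. A minimal sketch, assuming a Kubernetes version with the ReadWriteOncePod feature gate enabled and an illustrative StorageClass name:

```yaml
# Sketch: a PVC requesting the ReadWriteOncePod access mode, which restricts
# the volume to a single pod on a single node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwop-pvc
spec:
  accessModes:
    - ReadWriteOncePod
  storageClassName: csi-rbd
  resources:
    requests:
      storage: 1Gi
```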

Enhancements

Go-Ceph

Ceph CSI now uses the go-ceph API, instead of the command line, for adding the task to flatten an image and for removing an image from trash. This is expected to improve performance.

RBD krbd mounter

This release added RBD feature support for object-map, fast-diff, etc. with the krbd mounter.

RBD nbd mounter

rbd-nbd can now support expansion of volumes, encrypted volumes, and journal-based mirroring. rbd-nbd log strategies can be tuned to preserve, compress, or remove logs on detach; read more about it in the rbd-nbd documentation. The nbd mounter utilizes rbd-nbd cookie support in ceph-csi to avoid misconfiguration issues on nodeplugin restart, which makes the volume healer more reliable.

StorageClass Enhancements

A fixed security context can be enabled for PVs via mount options in the StorageClass. This makes it possible to specify SELinux-related mount options like context.
Ceph CSI now provides a way to supply map options for multiple mounters from the StorageClass, like mapOption: "krbd:v1,v2,v3;nbd:v1,v2,v3"
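The two enhancements above can be sketched in one StorageClass. The option values and SELinux context below are illustrative assumptions, not recommended settings:

```yaml
# Sketch: StorageClass combining per-mounter map options and a fixed
# SELinux security context supplied via mountOptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: replicapool
  # per-mounter map options, mounters separated by ';'
  mapOptions: "krbd:lock_on_read,queue_depth=1024;nbd:try-netlink"
mountOptions:
  # SELinux-related fixed security context applied at mount time
  - context="system_u:object_r:container_file_t:s0"
```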

Expansion of Volumes

A user can create a bigger PVC from an existing PVC, and restore a snapshot to a bigger-sized PVC.

Rebase

Along with updates to many other Go package dependencies that Ceph CSI uses, Ceph CSI has been rebased onto the latest Kubernetes release (v1.23) and the latest available sidecars.

e2e

  • RWOP validation for CephFS and RBD volumes
  • added tests for bigger-size RBD and CephFS volumes
  • ephemeral volume validation has been enabled for RBD and CephFS in the e2e
  • a test has been added to validate encrypted image mounts inside the nodeplugin
  • validation added for thick encrypted PVC restore
  • added tests to validate PVC restore from vaultKMS to vaulttenantSAKMS
  • in-tree migration tests are part of the e2e
  • the ceph.conf deployment model has been accommodated in the tests
  • test cases added for a pvc-pvc clone chain with depth 2
  • added tests for volume expansion and encrypted volumes with the rbd-nbd mounter
  • covered tests for different accessModes and volumeModes with the rbd-nbd mounter
  • added cases for a snapshot restore chain with depth 2
    ...etc.

Documentation

  • design docs added for CephFS snapshots as shallow RO volumes, in-tree migration, HPCS/Key Protect integration, clusterID/poolID mapping, etc.
  • updated the support matrix for deprecated Ceph CSI releases
  • updated the development guide with new rules
  • updated the rbd-nbd documentation with volume expansion, encrypted volume support, the various rbd-nbd log strategies, etc.
  • support matrix update in the README
    ...etc.

Breaking Changes

None

Release Image: docker pull quay.io/cephcsi/cephcsi:v3.5.0

Thanks to the awesome Ceph CSI community for this great release 👍 🎉

Ceph-CSI v3.4.0 Release

29 Jul 11:06

We are excited to announce another feature-packed release of Ceph-CSI, v3.4.0. This is another step towards making the enhanced features of the Container Storage Interface (CSI) available with a Ceph cluster in the backend. With this release, many highly usable production features (snapshots, cloning, metrics, etc.) have been lifted to a higher level of support. Enhancements have also been made to features like encryption, disaster recovery, the NBD mounter, and thick provisioning. Code improvements that increase the performance of various CSI operations are also part of this release. Ceph CSI now makes use of the latest versions of Kubernetes, the sidecar containers, and the go-ceph library, which include many bug fixes and enhancements of their own.

Changelog or Highlights:

Features:

Beta:

The features below have been promoted from Alpha to Beta:

  • Snapshot creation and deletion
  • Volume restore from snapshot
  • Volume clone support
  • Volume/PV Metrics of File Mode Volume
  • Volume/PV Metrics of Block Mode Volume

Alpha:

  • rbd-nbd volume mounter

Enhancement:

  • Restore RBD snapshot to a different Pool
  • Snapshot schedule support for RBD mirrored PVC
  • Mirroring support for thick PVC
  • Multi-Tenant support for vault encryption
  • AmazonMetadata KMS provider support
  • rbd-nbd volume healer support
  • Locking enhancement for improving POD deletion performance
  • Improvements in lock handling for snap and clone operations
  • Better thick provisioning support
  • Create CephFS subvolume with VolumeNamePrefix
  • CephFS Subvolume path addition in PV object
  • Consumption of go-ceph APIs for various CephFS controller and node operations.
  • Resize of the RBD encrypted volume
  • Better error handling for GRPC
  • Golang profiling support for debugging
  • Updated Kubernetes sidecar versions to the latest release
  • Kubernetes dependency update to v1.21.2
  • Create storageclass and secrets using helm charts

CI/E2E

  • Expansion of RBD encrypted volumes
  • Update and addition of new static golang tools
  • Kubernetes v1.21 support
  • Unit tests for SecretsKMS
  • Test for Vault with ServiceAccount per Tenant
  • E2E for user secret based metadata encryption
  • Update rook.sh and Ceph cluster version in E2E
  • Added RBD test for testing sc, secret via helm
  • Update feature gates setting from minikube.sh
  • Add CephFS test for sc, secret via helm
  • Add e2e for static PVC without imageFeature parameter
  • Make use of snapshot v1 API and client sets in e2e tests
  • Validate thick-provisioned PVC-PVC cloning
  • Adding retry support for various e2e failure scenarios
  • Refactor KMS configuration and usage

Documentation

  • Hashicorp Vault with a ServiceAccount per Tenant
  • Added documentation for Disaster Recovery
  • rbd-nbd mounter
  • Updated helm chart doc
  • Contribution guide update

Breaking Changes

None

Thanks to the awesome Ceph CSI community for this great release 👍 🎉

Ceph CSI v3.2.2 Release

03 May 11:07

Changelog or Highlights:

Bug Fixes

Build

Breaking Changes

None.

Ceph CSI v3.3.1 Release

22 Apr 08:06

Changelog or Highlights:

Bug Fixes

Build

  • Update ceph to 15.2.11 #1995
  • Fix helm chart push issue #2007

RBD

  • Modified logic to check image watchers to avoid already in use issue for mirroring image #1993
  • Return crypt error for the rpc return #2005

Breaking Changes

None.

Ceph CSI v3.3.0 Release

15 Apr 13:41

Changelog or Highlights:

Features:

Async DR

  • A new volume replication protobuf and specification have been added to the Ceph CSI driver to achieve volume replication.
    Ceph CSI implements the required gRPC services (EnableVolumeReplication, DisableVolumeReplication, PromoteVolume, DemoteVolume, ResyncVolume, etc.) for volume replication. A new sidecar controller is deployed as part of the RBD provisioner pod and exposes a CRD through which a user interacts with the Ceph cluster for DR operations. When a user creates a CR with the PVC name, the new operator fetches the required PVC and PV information and sends a request to Ceph CSI to perform the RBD async operation.
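The user-facing CR described above can be sketched as follows. The API group/version and class name follow the volume-replication operator as it existed around this release and are assumptions to verify against your deployed CRDs:

```yaml
# Sketch: a VolumeReplication CR marking a PVC's volume as the primary
# in an async DR relationship.
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplication
metadata:
  name: pvc-replication
spec:
  volumeReplicationClass: rbd-volumereplicationclass
  replicationState: primary      # primary | secondary | resync
  dataSource:
    kind: PersistentVolumeClaim
    name: my-pvc
```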

Encryption

  • Users can configure AWS KMS for Ceph-CSI volume encryption. This makes it possible to have in-flight encrypted data, and securely stored volume contents on Ceph clusters outside of the control/responsibility of the Ceph-CSI deployer. With this addition:

    • users can enable volume encryption in a StorageClass
    • the CMK configured in Amazon KMS will be used for encrypting/decrypting the DEKs
    • the encrypted DEK for a volume will be stored in the volume's metadata
  • Snapshotting and cloning of encrypted RBD PVCs are enabled.
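Enabling encryption in a StorageClass, as described above, takes two parameters. A minimal sketch with illustrative values; the KMS ID must match an entry in the Ceph-CSI KMS ConfigMap:

```yaml
# Sketch: StorageClass enabling per-volume encryption with a KMS entry.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-encrypted
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: replicapool
  encrypted: "true"
  encryptionKMSID: aws-metadata-kms   # key into the KMS ConfigMap
```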

Multus Support

  • Added support for network namespaces (Multus CNI)

Enhancement:

  • Update Kubernetes sidecars to the latest releases
  • Update go-ceph to the latest release
  • The external snapshotter APIs are updated from v1beta1 to v1
  • Proper reuse of go-ceph cluster connections is established with this release
  • Fixed many warnings/errors reported by static code analyzers
  • The CSI driver creates a CSIDriver object, so Kubernetes users can easily discover the CSI drivers installed on their cluster (simply by issuing kubectl get csidriver)
  • E2E tests have been added/updated with this release to ensure code stability across various use cases and for new features
  • Build utilities and dependencies are updated to their latest versions
  • CSI driver deployment YAMLs are updated, and various Helm chart fixes for snapshot controller deployment, RBAC permissions, etc. are part of this release

CI

  • Make use of ceph users created in e2e
  • Enhanced e2e logging for failure debugging
  • Track deletion of PVC and PV more closely
  • Error out in case deploying Hashicorp Vault fails
  • Added e2e for snapshot retention case/scenario
  • Updated feature gate settings from minikube
  • Verify (non)existence of keys for VaultTokensKMS
  • Pass namespace once in deletePodWithLabel()
  • Use secret with "encryptionPassphrase" for RBD tests

Documentation

  • Updated snapshot and clone documentation
  • Updated Encryption documentation for new KMS provider support and for other enhancements
  • Corrected various reference link issues on doc
  • Upgrade documentation is updated for release 3.3
  • Updated release matrix and compatibility docs
  • Various cleanups and corrections in general.

Breaking Changes

None

NOTE:

Ceph CSI repo Master branch has been renamed to Devel

Ceph CSI v3.2.1 Release

06 Jan 14:48

Changelog or Highlights:

Bug Fixes

Deployment

  • Fix snapshot controller deployment (#1823)

RBD

  • Fix namespace json parser (#1822)

Breaking Changes

None.