Releases: rook/rook

v0.9.2

26 Jan 07:04 · 2da0f82

Rook v0.9.2 is a patch release limited in scope and focused on bug fixes.

Improvements

Ceph

  • Correctly capture and log the stderr output from child processes (#2479, #2536, @noahdesu)
  • Allow disabling the fsGroup setting when mounting a volume (#2254, @travisn)
  • Allow configuration of SELinux relabeling (#2417, @allen13)
  • Correctly set the secretKey used for cephfs mounts (#2484, @galexrt)
  • Set ceph-mgr privileges to prevent the dashboard from failing on rbd mirroring settings (#2404, @travisn)
  • Correctly configure the SSL certificate for the RGW service (#2435, @syncroswitch)
  • Allow configuration of the dashboard port (#2412, @noahdesu)
  • Allow disabling of SSL on the dashboard (#2433, @noahdesu); both dashboard settings are shown in the sketch below
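
For reference, a minimal sketch of the two dashboard settings above in the cluster CRD, assuming the v0.9 CephCluster dashboard spec; the port value is an illustrative example:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dashboard:
    enabled: true
    port: 8443    # custom dashboard port (#2412); hypothetical value
    ssl: false    # disable SSL on the dashboard (#2433)
```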

v0.9.1

02 Jan 00:54 · 978b032

Rook v0.9.1 is a patch release limited in scope and focused on bug fixes.

Improvements

Ceph

  • Build with arch-specific Ceph base image to fix the arm64 build (#2406, @travisn)
  • Detect the correct version of the Ceph image when the CRD is edited (#2353, @travisn)
  • Correct the name of the dataBlockPool parameter for the storage class of an erasure-coded pool (#2370, @galexrt); see the StorageClass sketch after this list
  • Retry generating the self signed cert if the dashboard module is not ready (#2298, @travisn)
  • Set the server_addr on the prometheus and dashboard modules to avoid health errors (#2335, @travisn)
  • Documentation: Add the missing mon count to a cluster crd example (@multi-io) and add stable channel to the helm chart docs (@jbw976)
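
A minimal sketch of a StorageClass using dataBlockPool for an erasure-coded pool, assuming the v0.9 flex-based provisioner and its parameter names; the pool names are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-ec
provisioner: ceph.rook.io/block
reclaimPolicy: Delete
parameters:
  # Replicated pool for metadata; the erasure-coded pool stores the data blocks.
  blockPool: replicated-metadata-pool
  dataBlockPool: ec-data-pool
  clusterNamespace: rook-ceph
  fstype: xfs
```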

v0.9.0

09 Dec 01:36 · 7391c10

Major Themes

  • Storage Providers for Cassandra, EdgeFS, and NFS were added
  • Ceph CRDs have been declared stable v1.
  • Ceph versioning is decoupled from the Rook version. Luminous and Mimic can be run in production, or Nautilus in experimental mode.
  • Ceph upgrades are greatly simplified

Action Required

  • Existing clusters that are running previous versions of Rook will need to be migrated to be compatible with the v0.9 operator and to begin using the new ceph.rook.io/v1 CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release.

Notable Features

  • The minimum version of Kubernetes supported by Rook changed from 1.7 to 1.8.
  • K8s client-go updated from version 1.8.2 to 1.11.3

Ceph

  • The Ceph CRDs are now v1. The operator will automatically convert the CRDs from v1beta1 to v1.
  • Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon.
    The version of Ceph is specified in the cluster CRD with the cephVersion.image property. For example, to run Mimic you could use the image ceph/ceph:v13.2.2-20181023
    or any other image found on the Ceph DockerHub. A cluster CRD sketch follows this list.
  • The default fsType for the StorageClass examples is now XFS, in line with Ceph recommendations.
  • Rook Ceph block storage provisioner can now correctly create erasure coded block images. See Advanced Example: Erasure Coded Block Storage for an example usage.
  • Service account (rook-ceph-mgr) added for the mgr daemon to grant the mgr orchestrator modules access to the K8s APIs.
  • The reclaimPolicy parameter of the StorageClass definition is now supported.
  • The toolbox manifest now creates a deployment based on the rook/ceph image instead of creating a pod on a specialized rook/ceph-toolbox image.
  • The frequency of discovering devices on a node is reduced to 60 minutes by default, and is configurable with the setting ROOK_DISCOVER_DEVICES_INTERVAL in operator.yaml.
  • The number of mons can be changed by updating the mon.count in the cluster CRD.
  • RBD mirroring is enabled by Rook. When the number of rbd mirroring workers is set in the cluster CRD, Rook will start the daemon(s). To configure the pools or images to be mirrored, use the Rook toolbox to run the rbd mirror configuration tool.
  • Object Store User creation via CRD for Ceph clusters.
  • Ceph MON, OSD, MGR, MDS, and RGW deployments (or DaemonSets) will be updated/upgraded automatically with updates to the Rook operator.
  • Ceph OSDs are created with the ceph-volume tool when configuring devices, adding support for multiple OSDs per device. See the OSD configuration settings.
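
A minimal sketch of a v1 cluster CRD tying several of the settings above together, assuming the v0.9 settings structure; the values shown are illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2-20181023   # Mimic, decoupled from the Rook version
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                            # can be changed after the cluster is created
    allowMultiplePerNode: false
```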

NFS

  • Network File System (NFS) is now supported by Rook with a new operator to deploy and manage this widely used server. NFS servers can be automatically deployed by creating an instance of the new nfsservers.nfs.rook.io custom resource. See the NFS server user guide to get started with NFS.
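
A minimal sketch of an NFSServer resource, assuming the nfs.rook.io/v1alpha1 API group and field names; the export and claim names are hypothetical:

```yaml
apiVersion: nfs.rook.io/v1alpha1
kind: NFSServer
metadata:
  name: rook-nfs
  namespace: rook-nfs
spec:
  replicas: 1
  exports:
  - name: share1
    server:
      accessMode: ReadWrite
      squash: "none"
    # The NFS server exports an existing PVC as its backing storage.
    persistentVolumeClaim:
      claimName: nfs-default-claim
```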

Cassandra

  • Cassandra and Scylla are now supported by Rook with the rook-cassandra operator. Users can now deploy, configure, and manage Cassandra or Scylla clusters by creating an instance of the clusters.cassandra.rook.io custom resource. See the user guide to get started.
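
A minimal sketch of a Cassandra cluster resource, assuming the cassandra.rook.io/v1alpha1 API group and field names; the datacenter and rack names are hypothetical:

```yaml
apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
  version: 3.11.1        # Cassandra version to run
  datacenter:
    name: dc1
    racks:
    - name: rack1
      members: 3         # number of Cassandra nodes in this rack
```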

EdgeFS Geo-Transparent Storage

  • EdgeFS is now supported by a Rook operator, providing a high-performance, low-latency object storage system with geo-transparent data access via standard protocols. See the user guide to get started.

Breaking Changes

  • The Rook container images are no longer published to quay.io; they are published only to Docker Hub. All manifests have referenced Docker Hub for multiple releases now, so we do not expect this change to directly affect any users.
  • Rook no longer supports Kubernetes 1.7. Users running Kubernetes 1.7 are recommended to upgrade to Kubernetes 1.8 or higher. If you are using kubeadm, you can follow this guide to upgrade from Kubernetes 1.7 to 1.8. If you are using kops or kubespray to manage your Kubernetes cluster, follow the respective project's upgrade guide.

Ceph

  • The Ceph CRDs are now v1. With the version change, the kind has been renamed for the following Ceph CRDs:
    • Cluster --> CephCluster
    • Pool --> CephBlockPool
    • Filesystem --> CephFilesystem
    • ObjectStore --> CephObjectStore
    • ObjectStoreUser --> CephObjectStoreUser
  • The rook-ceph-cluster service account was renamed to rook-ceph-osd as this service account only applies to OSDs.
    • On upgrade from v0.8, the rook-ceph-osd service account must be created before starting the operator on v0.9.
    • The serviceAccount property has been removed from the cluster CRD.
  • Ceph mons are named consistently with other daemons with the letters a, b, c, etc.
  • Ceph mons are now created with Deployments instead of ReplicaSets to improve the upgrade implementation.
  • Ceph mon, osd, mgr, mds, and rgw container names in pods have changed as part of the refactoring that initializes the daemon environments via pod InitContainers and runs the Ceph daemons directly from the container entrypoint.

Minio

  • Minio no longer exposes a configurable port for each distributed server instance to use. This was an internal-only port that should not need to be configured by the user. All connections from users and clients are expected to come in through the configurable Service instance.

Known Issues

Ceph

  • Upgrades to Nautilus are not supported. Specifically, OSDs configured before the upgrade (without ceph-volume) will fail to start on Nautilus. Nautilus is not officially supported until its release, but it is otherwise expected to work in test clusters.

v0.8.3

02 Oct 01:40 · be9fd85

Rook v0.8.3 is a patch release limited in scope and focused on bug fixes.

v0.8.2

19 Sep 14:38 · 736f10a

Rook v0.8.2 is a patch release limited in scope and focused on bug fixes.

v0.8.1

02 Aug 05:25 · 51d9eed

Rook v0.8.1 is a patch release limited in scope and focused on bug fixes.

Improvements

  • An upgrade guide has been authored for upgrading from the v0.8.0 release to this v0.8.1 patch release. Please refer to this new guide when upgrading to v0.8.1. (@travisn)
  • Ceph is updated to Luminous 12.2.7. (@travisn)
  • Ceph OSDs will be automatically updated by the operator when there is a change to the operator version or when the OSD configuration changes. See the OSD upgrade notes. (@travisn)
  • Ceph erasure-coded pools have the min_size set to the number of data chunks. (@galexrt)
  • Ceph OSDs will refresh their config at each startup with an init container. (@travisn)
  • Ceph OSDs will respect the placement specified in the cluster CRD. (@rootfs)
  • Ceph OSDs will use the update strategy of recreate to avoid resource contention at restart. (@galexrt)
  • Pod names for Ceph OSDs are truncated in environments with long host names. (@galexrt)
  • The documentation for Rook flexvolume configuration was improved to reduce confusion and address all known scenarios and environments. (@galexrt)

v0.8.0

19 Jul 00:49 · d537df8

Major Themes

  • Framework and architecture to support general cloud-native storage orchestration, with new support for CockroachDB and Minio. More storage providers will be integrated in the near future.
  • Ceph support has graduated to Beta maturity.
  • The security model has been improved: the cluster admin now has full control over the permissions granted to Rook, and the privileges required to run the operator(s) are now much more restricted.
  • OpenShift is now a supported environment.

Action Required

  • Existing clusters that are running previous versions of Rook will need to be upgraded/migrated to be compatible with the v0.8 operator and to begin using the new rook.io/v1alpha2 and ceph.rook.io/v1beta1 CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release, as it has been updated with specific steps to help you upgrade to v0.8.

Notable Features

  • Rook is now architected to be a general cloud-native storage orchestrator, and can now support multiple types of storage and providers beyond Ceph.
    • CockroachDB is now supported by Rook with a new operator to deploy, configure and manage instances of this popular and resilient SQL database. Databases can be automatically deployed by creating an instance of the new cluster.cockroachdb.rook.io custom resource. See the CockroachDB user guide to get started with CockroachDB.
    • Minio is now also supported with an operator to deploy and manage this popular high-performance distributed object storage server. To get started with Minio using the new objectstore.minio.rook.io custom resource, follow the steps in the Minio user guide.
  • The status of Rook is no longer published for the project as a whole. Going forward, status will be published per storage provider or API group. Full details can be found in the project status section of the README.
    • Ceph support has graduated to Beta.
  • The rook/ceph image is now based on the ceph-container project's 'daemon-base' image so that Rook no longer has to manage installs of Ceph in the image. This image is based on CentOS 7.
  • One OSD will run per pod to increase the reliability and maintainability of the OSDs. No longer will restarting an OSD pod mean that all OSDs on that node will go down. See the design doc.
  • Ceph tools can be run from any rook pod.
  • Output from stderr will be included in error messages returned from the exec of external tools.
  • The Rook operator no longer creates the CRD or TPR resources at runtime. Instead, those resources are provisioned during deployment via helm or kubectl.
  • IPv6 environments are now supported.
  • Rook CRD code generation is now working with BSD (Mac) and GNU sed.
  • The Ceph dashboard can be enabled by the cluster CRD.
  • monCount has been renamed to count and moved into the mon spec. Additionally, the default when unspecified or 0 is now 3.
  • You can now toggle whether multiple Ceph mons may be placed on the same node with the allowMultiplePerNode option (default false) in the mon spec; see the sketch after this list.
  • Added nodeSelector to Rook Ceph operator Helm chart.
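
A minimal sketch of a v1beta1 cluster CRD showing the new mon spec and dashboard toggle, assuming the v0.8 settings structure; values are illustrative:

```yaml
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  dashboard:
    enabled: true
  mon:
    count: 3                    # defaults to 3 when unspecified or 0
    allowMultiplePerNode: false
```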

Breaking Changes

  • It is recommended to only use official releases of Rook, as unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases.
  • Removed support for Kubernetes 1.6, including the legacy Third Party Resources (TPRs).
  • Various paths and resources have changed to accommodate multiple storage providers:
    • Examples: The yaml files for creating a Ceph cluster can be found in cluster/examples/kubernetes/ceph. The yaml files that are provider-independent will still be found in the cluster/examples/kubernetes folder.
    • CRDs: The apiVersion of the Rook CRDs is now provider-specific, such as ceph.rook.io/v1beta1 instead of rook.io/v1alpha1.
    • Cluster CRD: The Ceph cluster CRD has had several properties restructured for consistency with other storage provider CRDs. Rook will automatically upgrade the previous Ceph CRD versions to the new versions with all the compatible properties. When creating the cluster CRD based on the new ceph.rook.io apiVersion you will need to take note of the new settings structure.
    • Container images: The container images for Ceph and the toolbox are now rook/ceph and rook/ceph-toolbox. The steps in the upgrade user guide will automatically start using these new images for your cluster.
    • Namespaces: The example namespaces are now provider-specific. Instead of rook-system and rook, you will see rook-ceph-system and rook-ceph.
    • Volume plugins: The dynamic provisioner and flex driver are now based on ceph.rook.io instead of rook.io
  • Minimal privileges are configured with a new cluster role for the operator and Ceph daemons, following the new security design.
    • A role binding must be defined for each cluster to be managed by the operator.
  • OSD pods are started by a deployment, instead of a daemonset or a replicaset. The new OSD pods will crash loop until the old daemonsets or replicasets are removed.

Removal of the API service and rookctl tool

The REST API service has been removed. All cluster configuration is now accomplished through the
CRDs or with the Ceph tools in the toolbox.

The tool rookctl has been removed from the toolbox pod. Cluster status and configuration can be queried and changed with the Ceph tools.
Here are some sample commands to help with your transition.

rookctl Command     | Replaced by                                    | Description
rookctl status      | ceph status                                    | Query the status of the storage components
rookctl block       | See the Block storage and direct Block config  | Create, configure, mount, or unmount a block image
rookctl filesystem  | See the Filesystem and direct File config      | Create, configure, mount, or unmount a file system
rookctl object      | See the Object storage config                  | Create and configure object stores and object users

Deprecations

  • Legacy CRD types in the rook.io/v1alpha1 API group have been deprecated. The types from
    rook.io/v1alpha2 should now be used instead.
  • The legacy command flag public-ipv4 in the ceph components has been deprecated; public-ip should now be used instead.
  • The legacy command flag private-ipv4 in the ceph components has been deprecated; private-ip should now be used instead.

v0.7.1

10 Mar 17:04 · 1ad48d3 · Pre-release

Rook v0.7.1 is a patch release limited in scope and focused on bug fixes.

Improvements

  • The version of Ceph has been updated to Luminous 12.2.4 (@bassam)
  • When a Ceph monitor is failed over, it will be assigned an appropriate IP address when host networking is being used (@galexrt)
  • The upgrade user guide has been updated to include steps for upgrading from v0.6.x to the v0.7 releases (@travisn)
  • An issue was fixed that prevented the Helm charts from being correctly published to https://charts.rook.io/ (@bassam)
  • In environments where the Kubernetes cluster does not have a version set, the Helm charts will now appropriately proceed (@TimJones)

v0.7.0

21 Feb 22:20 · 66a0783 · Pre-release

Notable Features

  • The Cluster CRD can now be edited/updated to add and remove storage nodes. Note that only adding/removing entire nodes is currently supported; adding individual disks/directories will be supported soon.
  • The rook/rook image now uses the official Ceph packages instead of compiling from source. This ensures that Rook always ships the latest stable and supported Ceph version and reduces the developer burden for maintenance and building.
  • Resource limits are now supported for all pod types. You can constrain the CPU and memory usage for all Rook pods by setting resource limits in the Cluster CRD.
  • Monitoring is now done through the Ceph MGR service for Ceph storage.
  • The CRUSH root can be specified for pools with the crushRoot property, rather than always using the default root. The CRUSH hierarchy must first be configured with the ceph osd crush commands in the toolbox; see the sketch after this list.
  • A full client API has been generated for all Kubernetes resource types defined in Rook. This allows you to programmatically interact with Rook deployments using golang.
  • The full list of resolved issues can be found on the 0.7 milestone page.
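
A minimal sketch of a pool using crushRoot, assuming the rook.io/v1alpha1 Pool type and field names of this release; the root name is a hypothetical example that must already exist in the CRUSH hierarchy:

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: ssd-pool
  namespace: rook
spec:
  crushRoot: ssd-root   # hypothetical CRUSH root created via ceph osd crush
  replicated:
    size: 2
```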

Operator Settings

  • AGENT_TOLERATION: Toleration can be added to the Rook agent, such as to run on the master node.
  • FLEXVOLUME_DIR_PATH: Flex volume directory can be overridden on the Rook agent.
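
A minimal sketch of how these two settings might be applied, assuming they are read as environment variables from the rook-operator Deployment; the values are hypothetical examples:

```yaml
# Excerpt from the rook-operator Deployment container spec.
env:
- name: AGENT_TOLERATION
  value: "NoSchedule"                        # e.g. tolerate the master-node taint
- name: FLEXVOLUME_DIR_PATH
  value: "/var/lib/kubelet/volumeplugins"    # override the flexvolume plugin dir
```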

Breaking Changes

  • The armhf build of Rook has been removed. Ceph is not supported or tested on armhf. arm64 support continues.

Cluster CRD

  • Removed the versionTag property. The container version to launch in all pods will be the same as the version of the operator container.
  • Added the cluster.rook.io finalizer. When a cluster is deleted, the operator will cleanup resources and remove the finalizer, which then allows K8s to delete the CRD.

Operator

  • Removed the ROOK_REPO_PREFIX env var. All containers will be launched with the same image as the operator.

v0.6.2

13 Dec 23:12 · 13492f8 · Pre-release

Rook v0.6.2 is a patch release limited in scope with one bug fix.

Improvements

  • Allow Rook to run when RBAC is disabled in the Kubernetes cluster #1221 (@kokhang)