
Releases: rook/rook

v0.6.1

04 Dec 23:11
7915e02
Pre-release

Rook v0.6.1 is a patch release, limited in scope and focused on bug fixes.

Improvements

  • Non-default locations of the volume plugin directory in Kubernetes 1.8 are now correctly discovered by the Rook agents #1162 (@jbw976)
  • Fixed an issue where the rook-api pod would begin to consume 100% CPU over time #1195 (@jbw976)
  • Deletion of an object store is now scoped to just the specified object store #1198 (@travisn)
  • Volume attachment locks are now cleaned up appropriately when a volume fails to attach to a pod and the pod is then deleted and recreated #1214 (@kokhang)
  • The dynamic volume provisioner in the Rook Operator no longer panics when it encounters a volume from before v0.6.0 #1252 (@Ulexus)

v0.6.0

08 Nov 16:50
116f088
Pre-release

Major Themes

Rook v0.6 is a release focused on making progress towards our goal of running Rook everywhere Kubernetes runs. There is a new Rook volume plugin and a new Rook agent that integrate into Kubernetes to provide on-demand block and shared filesystem storage for pods, with a streamlined and seamless experience. We are keeping the Alpha status for v0.6 due to these two new components. The next release (v0.7) will be our first beta-quality release.

Rook has continued its effort for deeper integration with Kubernetes by defining Custom Resource Definitions (CRDs) for both shared filesystems and object storage, allowing management of all storage types natively via kubectl.

Reliability has been further improved with a focus on self-healing functionality that recovers the cluster to a healthy state when key components are detected as unhealthy. Investment in the automated test pipelines has increased scenario and environment coverage across multiple versions of Kubernetes and cloud providers.

Finally, the groundwork has been laid for automated software upgrades in future releases by both completing the design and publishing a manual upgrade user guide in this release.

Notable Features

  • Rook Flexvolume (@kokhang)
    • Introduces a new Rook plugin based on Flexvolume
    • Integrates with Kubernetes' Volume Controller framework
    • Handles all volume attachment requests such as attaching, mounting and formatting volumes on behalf of Kubernetes
    • Simplifies deployment by not requiring the ceph-tools package to be installed
    • Allows block devices and filesystems to be consumed without any user secret management
    • Improves the experience with fencing and volume locking (a sketch of consuming block storage through the new plugin follows this list)
  • Rook-Agents (@kokhang and @jbw976)
    • Configured and deployed via a DaemonSet by the Rook Operator
    • Installs the Rook Flexvolume plugin
    • Handles all storage operations required on the node, such as attaching devices, mounting volumes and formatting filesystems
    • Performs node cleanup during Rook cluster teardown
  • File system (@travisn)
    • File systems are defined by a CRD and handled by the Operator (see the Filesystem sketch after this list)
    • Multiple file systems can be created, although still experimental in Ceph
    • Multiple data pools can be created
    • Multiple MDS instances can be created per file system
    • An MDS is started in standby mode for each active MDS
    • Shared filesystems are now supported by the Rook Flexvolume plugin
      • Improved and streamlined experience: there are no longer any manual steps to copy monitor information or secrets
      • Multiple shared filesystems can be created and consumed within the cluster
      • More information can be found in the shared filesystems user guides
  • Object Store (@travisn)
    • Object stores are defined by a CRD and handled by the Operator (see the ObjectStore sketch after this list)
    • Multiple object stores supported through Ceph realms
  • OSDs (@jbw976, @paha, and @phsiao)
    • Bluestore is now the default backend store for OSDs when creating a new Rook cluster.
    • Bluestore can now be used on directories in addition to the raw block devices that were already supported.
    • If an OSD loses its metadata and config but still has its data devices, the OSD will automatically regenerate the lost metadata to make the data available again.
    • If an OSD is observed to be in the down status for an extended period of time, it will automatically be restarted.
    • OSDs can now run with lower privileges when devices are not being used.
  • Pools (@travisn)
    • The failure domain for the CRUSH map can be specified on pools with the failureDomain property (see the Pool sketch after this list)
    • Pools created by file systems or object stores are configurable with all options defined in the pool CRD
  • Upgrade (@jbw976)
    • A manual upgrade user guide has been published to help guide users through the recommended upgrade process.
  • API (@galexrt)
    • The API pod will continuously refresh its list of monitor endpoints so that it can always service requests, even after the monitor topology has changed
  • Test framework (@dangula)
    • Test pipelines cover Kubernetes 1.6-1.8 and the AWS and GCE cloud providers
    • The long haul testing pipeline runs for multiple days to simulate longer-term workloads
    • Raw block devices in addition to local filesystems (directories) are now tested
    • Helm chart installation is now tested
  • HostNetwork (@galexrt)
    • It is now possible to launch a Rook cluster with host networking support. See further details in the Cluster CRD documentation and the Cluster sketch after this list.
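
As a sketch of the streamlined block storage experience the new plugin enables, the manifests below define a StorageClass and a PersistentVolumeClaim. The rook.io/block provisioner name and the replicapool pool are assumptions based on the v0.6 examples; adjust them to match your cluster.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block   # Rook's dynamic block provisioner (assumed name)
parameters:
  pool: replicapool          # must match an existing Rook pool
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Any pod that references demo-pvc has its volume attached, formatted and mounted by the Rook agent on that node, with no user secret management required.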
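A minimal sketch of a shared filesystem defined through the new CRD, assuming the rook.io/v1alpha1 API group and the field names from the v0.6 documentation:

```yaml
apiVersion: rook.io/v1alpha1
kind: Filesystem
metadata:
  name: myfs
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:                # multiple data pools may be listed
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1          # number of active MDS instances
    activeStandby: true     # start a standby MDS for each active MDS
```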
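Likewise, a hedged sketch of an object store CRD; the field names are assumptions based on the v0.6 docs:

```yaml
apiVersion: rook.io/v1alpha1
kind: ObjectStore
metadata:
  name: my-store
  namespace: rook
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    erasureCoded:           # note the renamed property (see Breaking Changes)
      dataChunks: 2
      codingChunks: 1
  gateway:
    port: 80                # RGW service port
    instances: 1            # number of RGW pods
```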
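A sketch of a pool using the new failureDomain property, along with the renamed replicated field described under Breaking Changes:

```yaml
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  failureDomain: host       # spread replicas across hosts in the CRUSH map
  replicated:               # formerly "replication"
    size: 3
```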
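Finally, a cluster sketch with host networking enabled; the exact field placement is an assumption based on the Cluster CRD documentation of this era:

```yaml
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook
  hostNetwork: true         # run the Rook daemons on the host network
  storage:
    useAllNodes: true
    useAllDevices: false
```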

Breaking Changes

  • Rook Standalone
    • Standalone mode has been disabled and is no longer supported. A Kubernetes environment is required to run Rook.
  • The Rook operator now deploys in the rook-system namespace
    • If using the example manifest rook-operator.yaml, the rook-operator deployment now deploys in the rook-system namespace.
  • Rook Flexvolume
    • Persistent volumes from previous releases were created using the RBD plugin. They should be deleted and recreated in order to use the new Flexvolume plugin.
  • Pool CRD
    • replication renamed to replicated
    • erasureCode renamed to erasureCoded
  • OSDs
    • OSD pods now require RBAC permissions to create/get/update/delete/list config maps.
      An upgraded operator will create the necessary service account, role, and role bindings to enable this (see the Role sketch after this list).
  • API
    • The API pod now uses RBAC permissions that are scoped only to the namespace it is running in.
      An upgraded operator will create the necessary service account, role, and role bindings to enable this.
  • Filesystem
    • The Rook Flexvolume uses the mds_namespace option to specify a CephFS filesystem. This option is only available on kernel v4.7 or newer. On older kernels, if there is more than one filesystem in the cluster, the mount operation could be inconsistent. See this doc.
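
As a sketch of the config map permissions the OSD pods now need, the Role below grants exactly the verbs listed above. The rbac v1beta1 API matches the Kubernetes 1.6-1.8 range covered by this release; the role name is illustrative, since an upgraded operator creates these objects for you.

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: rook-ceph-osd       # illustrative name; the operator creates its own
  namespace: rook
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "get", "update", "delete", "list"]
```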

Known Issues

  • Rook Flexvolume
    • For Kubernetes clusters 1.7.x and older, the kubelet needs to be restarted in order to load the new Flexvolume plugin. This has been resolved in Kubernetes 1.8. For more information about the requirements, refer to this doc
    • For Kubernetes clusters 1.6.x, the attacher/detacher controller needs to be disabled in order to load the new Flexvolume plugin. This is caused by a regression in 1.6.x. For more information about the requirements, refer to this doc
    • For CoreOS and Rancher Kubernetes, the Flexvolume plugin directory needs to be set to a non-default location. Refer to the Flexvolume configuration pre-reqs
  • OSD pods will not reach the running state on Kubernetes 1.6.4 or lower, due to a regression in Kubernetes that is not compatible with the security context settings applied to the OSD pods. The workaround is to upgrade Kubernetes to 1.6.5 or higher. More details can be found in the tracking issue.

Deprecations

  • Rook Standalone
    • As mentioned in the breaking changes section, Standalone mode has been removed.
  • Rook API
    • Rook aims to integrate natively and deeply with container orchestrators such as Kubernetes, using their extension points to manage and access the Rook storage cluster, so the standalone Rook API is being deprecated. More information can be found in the tracking issue.
  • Rookctl
    • The rookctl client is being deprecated. With Rook's deeper and more native integration with Kubernetes, kubectl now provides a rich Rook management experience on its own. For direct management of the Ceph storage cluster, the Rook toolbox provides full access to the Ceph tools.

v0.5.1

15 Aug 20:59
Pre-release

Rook v0.5.1 is a patch release, limited in scope and focused on bug fixes and build improvements.

Improvements

  • Ceph Luminous has been upgraded to 12.1.3
  • Helm charts are now built and published as part of the continuous integration pipeline. Details can be found in the Helm Chart readme
  • Improved initial monitor quorum performance so a Rook cluster can be bootstrapped more quickly
  • Rook's metrics and monitoring via Prometheus are now fully compatible with Ceph Luminous
  • Placement policy can now be applied to manager pods

v0.5.0

08 Aug 23:25
Pre-release

Major Themes

Rook v0.5 is a milestone release that improves reliability, adds support for newer versions of Kubernetes, picks up the latest stable release of Ceph (Luminous), and makes a number of architectural changes that pave the way to getting to Beta and to adding support for storage backends beyond Ceph.

Attention needed

Rook does not yet support upgrading a cluster in place. To upgrade from 0.4 to 0.5, we recommend you tear down your cluster and install Rook 0.5 fresh.

We now publish the Rook containers to quay.io and Docker Hub. Docker Hub supports multi-arch containers, so a simple docker pull rook/rook will pull the right images for any of the supported architectures. We will continue to publish to quay.io for continuity.

Rook no longer runs as a single binary. For standalone mode we now require a container runtime, and we now only publish containers for the Rook daemons. Client tools are still released in binary form.

There is a new release site for Rook that contains all binaries, images, YAML files, test results, etc.

Known Issues

If you shut down a Rook cluster without first unbinding persistent volumes, the volumes might be stuck indefinitely and require a host reboot to be cleared. See #376.

Notable Features

Kubernetes

  • Support for Kubernetes 1.7 with CRDs. Rook uses TPRs for pre-1.7 clusters.
  • Names of the deployments, services, daemonsets, pods, etc. are more consistent. For example: rook-ceph-mon, rook-ceph-osd, rook-ceph-mgr, rook-ceph-rgw, rook-ceph-mds
  • Node affinity and tolerations added to the Cluster TPR for api, mon, osd, mds, and rgw
  • Mon failover is now supported by using a service address for each mon pod
  • Each mon is managed with a ReplicaSet
  • New Rook Operator Helm chart
  • A ConfigMap can be used to override Ceph settings in the daemons (see the sketch below)
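
A sketch of such an override ConfigMap. The rook-config-override name and the config key are assumptions based on the Rook documentation of this era; the contents are merged into the ceph.conf used by the daemons.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override   # assumed name from the docs of this era
  namespace: rook
data:
  config: |
    [global]
    osd pool default size = 1
```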

Ceph

  • Ceph Luminous is now the version used by Rook. Luminous is the basis of an LTS release of Ceph and introduces Bluestore, which improves performance.
  • Ceph is no longer compiled into Rook as a static library. Instead, we package a streamlined version of Ceph into our containers and call the Ceph daemons and tools directly.
  • Ceph Kraken is no longer supported.

Tools

  • The client binary rook was renamed to rookctl
  • The daemon binary was renamed from rookd to rook
  • The rook-client container is no longer built. Run the toolbox container for access to the rookctl tool.
  • amd64, arm, and arm64 are supported by the toolbox in addition to the daemons

Build and Test

  • No more git submodules
  • Added support for armhf (experimental)
  • Faster incremental image builds based on caching
  • E2E Integration tests for Block, File, and Object
  • Block store long haul testing

v0.4.0

03 May 23:56
Pre-release

Notes

  • Breaking changes list for v0.4.0
  • Kubernetes 1.6 is now the minimum supported version
  • In place upgrades are not yet supported
  • Rook releases now support "release channels" (master, alpha, beta, stable)
    • Users can now always try the latest from master in quay.io/rook/rookd:master-latest

Features and Improvements

The full set of completed issues can be found in the v0.4.0 milestone.

v0.3.1

28 Mar 04:18
Pre-release

v0.3.0

03 Mar 23:51
Pre-release

Rook Integration with Kubernetes

  • Rook storage as a first-class entity of Kubernetes
  • The Rook operator creates clusters and manages their health, triggered by a TPR. For each cluster, the operator creates a namespace and starts all the pods and services necessary to consume the storage
  • Block storage consumed by pods with the Ceph RBD volume plugin
  • Object storage can be enabled and consumed over S3 APIs
  • Shared file system can be enabled and consumed by the CephFS volume plugin
  • Rook tools pod for troubleshooting the cluster with Rook and Ceph command line tools
  • Multiple Rook clusters can be created, each in their own namespace (experimental)
  • By default, storage is simply in a directory for the lifetime of the OSD pod. To use available devices instead of a directory, set useAllDevices: true. Beware that this aggressively formats and utilizes all available devices (see the sketch after this list).
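
A sketch of where useAllDevices would sit in a cluster spec; the layout is hypothetical, and the exact v0.3 TPR schema may nest the field differently:

```yaml
# Cluster spec excerpt (hypothetical layout; the exact v0.3 TPR
# schema may differ)
spec:
  useAllDevices: true   # format and use all available devices on each node
```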

v0.2.2

19 Dec 19:13
Pre-release

Stability fixes for running reliably in a Vagrant Kubernetes cluster.

v0.2.1

12 Dec 05:37
Pre-release

Stability fixes for running rookd in a Kubernetes DaemonSet.

v0.2.0

08 Dec 02:01
Pre-release

v0.2.0 release of Rook, with support for running in a Kubernetes DaemonSet, as well as object and file storage.