# v0.8.0

## Major Themes
- Framework and architecture to support general cloud-native storage orchestration, with new support for CockroachDB and Minio. More storage providers will be integrated in the near future.
- Ceph support has been graduated to Beta maturity
- Full project status details can be found in the project status section of the main README.
- The security model has been improved: the cluster admin now has full control over the permissions granted to Rook, and the privileges required to run the operator(s) are now much more restricted.
- OpenShift is now a supported environment.
## Action Required
- Existing clusters that are running previous versions of Rook will need to be upgraded/migrated to be compatible with the v0.8 operator and to begin using the new `rook.io/v1alpha2` and `ceph.rook.io/v1beta1` CRD types. Please follow the instructions in the upgrade user guide to successfully migrate your existing Rook cluster to the new release, as it has been updated with specific steps to help you upgrade to v0.8.
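After the migration, cluster manifests reference the new provider-specific API group. A minimal sketch of the new manifest header, assuming the example `rook-ceph` namespace and the `Cluster` kind from the v0.8 examples (verify against the upgrade user guide for your deployment):

```yaml
# Sketch of a Ceph cluster manifest header under the new CRD types.
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
```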
## Notable Features
- Rook is now architected to be a general cloud-native storage orchestrator and can support multiple types of storage and providers beyond Ceph.
- CockroachDB is now supported by Rook with a new operator to deploy, configure and manage instances of this popular and resilient SQL database. Databases can be automatically deployed by creating an instance of the new `cluster.cockroachdb.rook.io` custom resource. See the CockroachDB user guide to get started with CockroachDB.
- Minio is also supported now with an operator to deploy and manage this popular high performance distributed object storage server. To get started with Minio using the new `objectstore.minio.rook.io` custom resource, follow the steps in the Minio user guide.
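As an illustration, a CockroachDB cluster custom resource might look like the sketch below. The `apiVersion`, namespace, and spec fields here are assumptions for illustration only; the CockroachDB user guide has the authoritative example.

```yaml
# Hypothetical sketch of a CockroachDB cluster resource; field names
# are illustrative assumptions -- consult the CockroachDB user guide.
apiVersion: cockroachdb.rook.io/v1alpha1
kind: Cluster
metadata:
  name: example-cockroachdb
  namespace: rook-cockroachdb
spec:
  scope:
    nodeCount: 3   # number of CockroachDB instances to deploy
```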
- The status of Rook is no longer published for the project as a whole. Going forward, status will be published per storage provider or API group. Full details can be found in the project status section of the README.
- Ceph support has graduated to Beta.
- The `rook/ceph` image is now based on the ceph-container project's `daemon-base` image, so Rook no longer has to manage installs of Ceph in the image. This image is based on CentOS 7.
- One OSD will run per pod to increase the reliability and maintainability of the OSDs. No longer will restarting an OSD pod mean that all OSDs on that node will go down. See the design doc.
- Ceph tools can be run from any Rook pod.
- Output from stderr will be included in error messages returned from the `exec` of external tools.
- The Rook operator no longer creates the CRDs (or TPRs) at runtime. Instead, those resources are provisioned during deployment via `helm` or `kubectl`.
- IPv6 environments are now supported.
- Rook CRD code generation is now working with BSD (Mac) and GNU sed.
- The Ceph dashboard can be enabled via the cluster CRD.
- `monCount` has been renamed to `count` and moved into the `mon` spec. Additionally, the default when unspecified or `0` is now `3`.
- You can now toggle whether multiple Ceph mons may be placed on one node with the `allowMultiplePerNode` option (default `false`) in the `mon` spec.
- Added `nodeSelector` to the Rook Ceph operator Helm chart.
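The renamed mon settings above can be sketched in a cluster CRD fragment as follows. This is a minimal sketch assuming the v1beta1 `Cluster` kind and the example `rook-ceph` namespace; see the cluster CRD documentation for the full spec.

```yaml
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dashboard:
    enabled: true              # enable the Ceph dashboard from the CRD
  mon:
    count: 3                   # formerly monCount; defaults to 3 if 0 or unset
    allowMultiplePerNode: false  # default; set true to co-locate mons on a node
```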
## Breaking Changes
- It is recommended to only use official releases of Rook, as unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases.
- Removed support for Kubernetes 1.6, including the legacy Third Party Resources (TPRs).
- Various paths and resources have changed to accommodate multiple storage providers:
  - Examples: The yaml files for creating a Ceph cluster can be found in `cluster/examples/kubernetes/ceph`. The yaml files that are provider-independent will still be found in the `cluster/examples/kubernetes` folder.
  - CRDs: The `apiVersion` of the Rook CRDs is now provider-specific, such as `ceph.rook.io/v1beta1` instead of `rook.io/v1alpha1`.
  - Cluster CRD: The Ceph cluster CRD has had several properties restructured for consistency with other storage provider CRDs. Rook will automatically upgrade the previous Ceph CRD versions to the new versions with all the compatible properties. When creating the cluster CRD based on the new `ceph.rook.io` apiVersion, you will need to take note of the new settings structure.
  - Container images: The container images for Ceph and the toolbox are now `rook/ceph` and `rook/ceph-toolbox`. The steps in the upgrade user guide will automatically start using these new images for your cluster.
  - Namespaces: The example namespaces are now provider-specific. Instead of `rook-system` and `rook`, you will see `rook-ceph-system` and `rook-ceph`.
  - Volume plugins: The dynamic provisioner and flex driver are now based on `ceph.rook.io` instead of `rook.io`.
- Minimal privileges are configured with a new cluster role for the operator and Ceph daemons, following the new security design.
- A role binding must be defined for each cluster to be managed by the operator.
- OSD pods are started by a deployment, instead of a daemonset or a replicaset. The new OSD pods will crash loop until the old daemonset or replicasets are removed.
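The per-cluster role binding can be sketched as below. The role, account, and namespace names here follow the example manifests but should be treated as assumptions; the security design doc has the authoritative definitions.

```yaml
# Grants the operator's service account management rights in the
# cluster namespace. Names are illustrative, per the example manifests.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster-mgmt
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-ceph-system
  namespace: rook-ceph-system
```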
## Removal of the API service and rookctl tool
The REST API service has been removed. All cluster configuration is now accomplished through the CRDs or with the Ceph tools in the toolbox.

The tool `rookctl` has been removed from the toolbox pod. Cluster status and configuration can be queried and changed with the Ceph tools.
Here are some sample commands to help with your transition.
| `rookctl` Command | Replaced by | Description |
| --- | --- | --- |
| `rookctl status` | `ceph status` | Query the status of the storage components |
| `rookctl block` | See the Block storage and direct Block config | Create, configure, mount, or unmount a block image |
| `rookctl filesystem` | See the Filesystem and direct File config | Create, configure, mount, or unmount a file system |
| `rookctl object` | See the Object storage config | Create and configure object stores and object users |
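For example, the status query above can be run through the toolbox with `kubectl exec`, assuming the example `rook-ceph` namespace and a toolbox pod named `rook-ceph-tools` as in the sample manifests:

```
kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph status
```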
## Deprecations
- Legacy CRD types in the `rook.io/v1alpha1` API group have been deprecated. The types from `rook.io/v1alpha2` should now be used instead.
- The legacy command flag `public-ipv4` in the Ceph components has been deprecated; `public-ip` should now be used instead.
- The legacy command flag `private-ipv4` in the Ceph components has been deprecated; `private-ip` should now be used instead.
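As an illustration, a Ceph component's container arguments would change along these lines (a hypothetical fragment with made-up addresses; the actual pod specs are generated by the operator):

```yaml
# before (deprecated):
#   args: ["--public-ipv4=10.0.0.1", "--private-ipv4=10.0.0.2"]
# after:
args: ["--public-ip=10.0.0.1", "--private-ip=10.0.0.2"]
```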