From 91ec1ac4db7631e7ab1b9829633322707a7206e9 Mon Sep 17 00:00:00 2001
From: Blaine Gardner
Date: Thu, 29 Jul 2021 10:56:21 -0600
Subject: [PATCH] docs: ceph: add peer spec migration to upgrade doc

Add a section to the upgrade doc instructing users to (and how to)
migrate `CephRBDMirror` `peers` spec to individual `CephBlockPools`.
Adjust the pending release notes to refer to the upgrade section now,
and clean up a few references in related docs to make sure users don't
miss important documentation.

Signed-off-by: Blaine Gardner
---
 Documentation/ceph-pool-crd.md |  6 +++---
 Documentation/ceph-upgrade.md  | 28 ++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/Documentation/ceph-pool-crd.md b/Documentation/ceph-pool-crd.md
index 4ff9ae49cb08..0a6772357bd5 100644
--- a/Documentation/ceph-pool-crd.md
+++ b/Documentation/ceph-pool-crd.md
@@ -205,11 +205,11 @@ stretched) then you will have 2 replicas per datacenter where each replica ends
 * `mirroring`: Sets up mirroring of the pool
   * `enabled`: whether mirroring is enabled on that pool (default: false)
   * `mode`: mirroring mode to run, possible values are "pool" or "image" (required). Refer to the [mirroring modes Ceph documentation](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring) for more details.
-  * `snapshotSchedules`: schedule(s) snapshot at the **pool** level. **Only** supported as of Ceph Octopus release. One or more schedules are supported.
+  * `snapshotSchedules`: schedule(s) snapshot at the **pool** level. **Only** supported as of Ceph Octopus (v15) release. One or more schedules are supported.
     * `interval`: frequency of the snapshots. The interval can be specified in days, hours, or minutes using d, h, m suffix respectively.
     * `startTime`: optional, determines at what time the snapshot process starts, specified using the ISO 8601 time format.
-  * `peers`: to configure mirroring peers
-    * `secretNames`: a list of peers to connect to. Currently (Ceph Octopus release) **only a single** peer is supported where a peer represents a Ceph cluster.
+  * `peers`: to configure mirroring peers. See the prerequisite [RBD Mirror documentation](ceph-rbd-mirror-crd.md) first.
+    * `secretNames`: a list of peers to connect to. Currently **only a single** peer is supported where a peer represents a Ceph cluster.
 * `statusCheck`: Sets up pool mirroring status
   * `mirror`: displays the mirroring status
 
diff --git a/Documentation/ceph-upgrade.md b/Documentation/ceph-upgrade.md
index 1b1eac9141c3..1191f54b1de2 100644
--- a/Documentation/ceph-upgrade.md
+++ b/Documentation/ceph-upgrade.md
@@ -373,6 +373,34 @@ At this point, your Rook operator should be running version `rook/ceph:v1.7.0`.
 
 Verify the Ceph cluster's health using the [health verification section](#health-verification).
 
+### **6. Update CephRBDMirror and CephBlockPool configs**
+
+If you are not using a `CephRBDMirror` in your Rook cluster, you may disregard this section.
+
+Otherwise, please note that the location of the `CephRBDMirror` `spec.peers` config has moved to
+`CephBlockPool` `spec.mirroring.peers` in Rook v1.7. This change allows each pool to have its own
+peer and enables pools to re-use an existing peer secret if it points to the same cluster peer.
+
+You may wish to see the [CephBlockPool spec Documentation](ceph-pool-crd.md#spec) for the latest
+configuration advice.
+
+The pre-existing config location in `CephRBDMirror` `spec.peers` will continue to be supported, but
+users are still encouraged to migrate this setting from `CephRBDMirror` to relevant `CephBlockPool`
+resources.
+
+To migrate the setting, follow these steps:
+1. Stop the Rook-Ceph operator by downscaling the Deployment to zero replicas.
+   ```sh
+   kubectl -n $ROOK_OPERATOR_NAMESPACE scale deployment rook-ceph-operator --replicas=0
+   ```
+2. Copy the `spec.peers` config from `CephRBDMirror` to every `CephBlockPool` in your cluster that
+   has mirroring enabled.
+3. Remove the `peers` spec from the `CephRBDMirror` resource.
+4. Resume the Rook-Ceph operator by scaling the Deployment back to one replica.
+   ```sh
+   kubectl -n $ROOK_OPERATOR_NAMESPACE scale deployment rook-ceph-operator --replicas=1
+   ```
+
 ## Ceph Version Upgrades
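
As a concrete illustration of migration steps 2 and 3 above, here is a minimal sketch using `kubectl patch`. The names in it are assumptions for the example only: a cluster namespace in `$ROOK_CLUSTER_NAMESPACE`, a pool named `replicapool`, a `CephRBDMirror` named `my-rbd-mirror`, and a peer secret named `pool-peer-token`. Substitute the names from your own cluster, or make the same changes with `kubectl edit`.

```sh
# Sketch only: the namespace variable, resource names, and secret name below
# are examples; adjust them to match your cluster before running.

# Step 2: reference the peer secret from the pool's spec.mirroring.peers
kubectl -n $ROOK_CLUSTER_NAMESPACE patch cephblockpool replicapool --type merge \
  -p '{"spec":{"mirroring":{"peers":{"secretNames":["pool-peer-token"]}}}}'

# Step 3: drop the old peers config from the CephRBDMirror resource
kubectl -n $ROOK_CLUSTER_NAMESPACE patch cephrbdmirror my-rbd-mirror --type json \
  -p '[{"op": "remove", "path": "/spec/peers"}]'
```

Making both changes while the operator is scaled down (steps 1 and 4) keeps the operator from reconciling a half-migrated spec.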