From 91ec1ac4db7631e7ab1b9829633322707a7206e9 Mon Sep 17 00:00:00 2001
From: Blaine Gardner
Date: Thu, 29 Jul 2021 10:56:21 -0600
Subject: [PATCH 1/2] docs: ceph: add peer spec migration to upgrade doc

Add a section to the upgrade doc instructing users to (and how to) migrate
`CephRBDMirror` `peers` spec to individual `CephBlockPools`. Adjust the pending
release notes to refer to the upgrade section now, and clean up a few
references in related docs to make sure users don't miss important
documentation.

Signed-off-by: Blaine Gardner
---
 Documentation/ceph-pool-crd.md |  6 +++---
 Documentation/ceph-upgrade.md  | 28 ++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/Documentation/ceph-pool-crd.md b/Documentation/ceph-pool-crd.md
index 4ff9ae49cb08..0a6772357bd5 100644
--- a/Documentation/ceph-pool-crd.md
+++ b/Documentation/ceph-pool-crd.md
@@ -205,11 +205,11 @@ stretched) then you will have 2 replicas per datacenter where each replica ends
 * `mirroring`: Sets up mirroring of the pool
   * `enabled`: whether mirroring is enabled on that pool (default: false)
   * `mode`: mirroring mode to run, possible values are "pool" or "image" (required). Refer to the [mirroring modes Ceph documentation](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring) for more details.
-  * `snapshotSchedules`: schedule(s) snapshot at the **pool** level. **Only** supported as of Ceph Octopus release. One or more schedules are supported.
+  * `snapshotSchedules`: snapshot schedule(s) at the **pool** level. **Only** supported as of the Ceph Octopus (v15) release. One or more schedules are supported.
     * `interval`: frequency of the snapshots. The interval can be specified in days, hours, or minutes using d, h, m suffix respectively.
     * `startTime`: optional, determines at what time the snapshot process starts, specified using the ISO 8601 time format.
-  * `peers`: to configure mirroring peers
-    * `secretNames`: a list of peers to connect to. Currently (Ceph Octopus release) **only a single** peer is supported where a peer represents a Ceph cluster.
+  * `peers`: to configure mirroring peers. See the prerequisite [RBD Mirror documentation](ceph-rbd-mirror-crd.md) first.
+    * `secretNames`: a list of peers to connect to. Currently **only a single** peer is supported, where a peer represents a Ceph cluster.
 * `statusCheck`: Sets up pool mirroring status
   * `mirror`: displays the mirroring status
diff --git a/Documentation/ceph-upgrade.md b/Documentation/ceph-upgrade.md
index 1b1eac9141c3..1191f54b1de2 100644
--- a/Documentation/ceph-upgrade.md
+++ b/Documentation/ceph-upgrade.md
@@ -373,6 +373,34 @@ At this point, your Rook operator should be running version `rook/ceph:v1.7.0`.
 
 Verify the Ceph cluster's health using the [health verification section](#health-verification).
 
+### **6. Update CephRBDMirror and CephBlockPool configs**
+
+If you are not using a `CephRBDMirror` in your Rook cluster, you may disregard this section.
+
+Otherwise, please note that the location of the `CephRBDMirror` `spec.peers` config has moved to
+`CephBlockPool` `spec.mirroring.peers` in Rook v1.7. This change allows each pool to have its own
+peer and enables pools to re-use an existing peer secret if it points to the same cluster peer.
+
+You may wish to see the [CephBlockPool spec Documentation](ceph-pool-crd.md#spec) for the latest
+configuration advice.
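+
+For example, a pool with mirroring enabled and a single peer might now look roughly like the
+following sketch. The pool name, replication settings, and peer secret name here are illustrative
+placeholders; use the pools and peer secret(s) that already exist in your cluster:
+
+```yaml
+apiVersion: ceph.rook.io/v1
+kind: CephBlockPool
+metadata:
+  name: replicapool
+  namespace: rook-ceph
+spec:
+  replicated:
+    size: 3
+  mirroring:
+    enabled: true
+    mode: image
+    # The peer secret(s) formerly listed under the CephRBDMirror spec.peers config
+    # are now referenced here, per pool.
+    peers:
+      secretNames:
+        - "europe-cluster-peer-pool-test-1"
+```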
+
+The pre-existing config location in `CephRBDMirror` `spec.peers` will continue to be supported, but
+users are still encouraged to migrate this setting from `CephRBDMirror` to the relevant
+`CephBlockPool` resources.
+
+To migrate the setting, follow these steps:
+1. Stop the Rook-Ceph operator by downscaling the Deployment to zero replicas.
+   ```sh
+   kubectl -n $ROOK_OPERATOR_NAMESPACE scale deployment rook-ceph-operator --replicas=0
+   ```
+2. Copy the `spec.peers` config from `CephRBDMirror` to every `CephBlockPool` in your cluster that
+   has mirroring enabled.
+3. Remove the `peers` spec from the `CephRBDMirror` resource.
+4. Resume the Rook-Ceph operator by scaling the Deployment back to one replica.
+   ```sh
+   kubectl -n $ROOK_OPERATOR_NAMESPACE scale deployment rook-ceph-operator --replicas=1
+   ```
+
 ## Ceph Version Upgrades

From 7c511328e4561f46e34b4398e7930be7fccec672 Mon Sep 17 00:00:00 2001
From: Blaine Gardner
Date: Tue, 14 Sep 2021 13:39:28 -0600
Subject: [PATCH 2/2] docs: update rbd mirror docs for block pool config

Remove legacy documentation for configuring RBD mirroring. While we still
support legacy mirroring configs, we want to encourage new users to use the
CephBlockPool configuration for mirroring.

Signed-off-by: Blaine Gardner
---
 Documentation/ceph-rbd-mirror-crd.md | 52 ++--------------------------
 1 file changed, 2 insertions(+), 50 deletions(-)

diff --git a/Documentation/ceph-rbd-mirror-crd.md b/Documentation/ceph-rbd-mirror-crd.md
index 1820f6bb5353..cad6583b0c04 100644
--- a/Documentation/ceph-rbd-mirror-crd.md
+++ b/Documentation/ceph-rbd-mirror-crd.md
@@ -49,53 +49,5 @@ If any setting is unspecified, a suitable default will be used automatically.
 
 ### Configuring mirroring peers
 
-On an external site you want to mirror with, you need to create a bootstrap peer token.
-The token will be used by one site to **pull** images from the other site.
-The following assumes the name of the pool is "test" and the site name "europe" (just like the region), so we will be pulling images from this site:
-
-```console
-external-cluster-console # rbd mirror pool peer bootstrap create test --site-name europe
-```
-
-For more details, refer to the official rbd mirror documentation on [how to create a bootstrap peer](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#bootstrap-peers).
-
-When the peer token is available, you need to create a Kubernetes Secret.
-Our `europe-cluster-peer-pool-test-1` will have to be created manually, like so:
-
-```console
-$ kubectl -n rook-ceph create secret generic "europe-cluster-peer-pool-test-1" \
---from-literal=token=eyJmc2lkIjoiYzZiMDg3ZjItNzgyOS00ZGJiLWJjZmMtNTNkYzM0ZTBiMzVkIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFBV1lsWmZVQ1Q2RGhBQVBtVnAwbGtubDA5YVZWS3lyRVV1NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMTExLjEwOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTA6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjEyOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTI6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjExOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTE6Njc4OV0ifQ== \
---from-literal=pool=test
-```
-
-Rook will read both `token` and `pool` keys of the Data content of the Secret.
-Rook also accepts the `destination` key, which specifies the mirroring direction.
-It defaults to rx-tx for bidirectional mirroring, but can also be set to rx-only for unidirectional mirroring.
- -You can now inject the rbdmirror CR: - -```yaml -apiVersion: ceph.rook.io/v1 -kind: CephRBDMirror -metadata: - name: my-rbd-mirror - namespace: rook-ceph -spec: - count: 1 - peers: - secretNames: - - "europe-cluster-peer-pool-test-1" -``` - -You can add more pools, for this just repeat the above and change the "pool" value of the Kubernetes Secret. -So the list might eventually look like: - -```yaml - peers: - secretNames: - - "europe-cluster-peer-pool-test-1" - - "europe-cluster-peer-pool-test-2" - - "europe-cluster-peer-pool-test-3" -``` - -Along with three Kubernetes Secret. +Configure mirroring peers individually for each CephBlockPool. Refer to the +[CephBlockPool documentation](ceph-pool-crd.md#mirroring) for more detail.
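+
+As a rough sketch of the resulting layout, the `CephRBDMirror` now declares only how many mirror
+daemons to run, while each mirrored `CephBlockPool` references its own peer secret. The resource
+names and replication settings below are illustrative only:
+
+```yaml
+apiVersion: ceph.rook.io/v1
+kind: CephRBDMirror
+metadata:
+  name: my-rbd-mirror
+  namespace: rook-ceph
+spec:
+  # Peers are no longer listed here; they are configured on each CephBlockPool.
+  count: 1
+---
+apiVersion: ceph.rook.io/v1
+kind: CephBlockPool
+metadata:
+  name: test
+  namespace: rook-ceph
+spec:
+  replicated:
+    size: 3
+  mirroring:
+    enabled: true
+    mode: image
+    peers:
+      secretNames:
+        - "europe-cluster-peer-pool-test-1"
+```
+
+The peer secret itself (created from a bootstrap peer token) is unchanged; only the resource that
+references it has moved.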