docs: ceph: add peer spec migration to upgrade doc #8435

**Merged** · 2 commits · Sep 15, 2021

6 changes: 3 additions & 3 deletions Documentation/ceph-pool-crd.md
@@ -205,11 +205,11 @@ stretched) then you will have 2 replicas per datacenter where each replica ends
* `mirroring`: Sets up mirroring of the pool
  * `enabled`: whether mirroring is enabled on that pool (default: false)
  * `mode`: mirroring mode to run, possible values are "pool" or "image" (required). Refer to the [mirroring modes Ceph documentation](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring) for more details.
- * `snapshotSchedules`: schedule(s) snapshot at the **pool** level. **Only** supported as of Ceph Octopus release. One or more schedules are supported.
+ * `snapshotSchedules`: schedule(s) snapshot at the **pool** level. **Only** supported as of Ceph Octopus (v15) release. One or more schedules are supported.
    * `interval`: frequency of the snapshots. The interval can be specified in days, hours, or minutes using d, h, m suffix respectively.
    * `startTime`: optional, determines at what time the snapshot process starts, specified using the ISO 8601 time format.
- * `peers`: to configure mirroring peers
-   * `secretNames`: a list of peers to connect to. Currently (Ceph Octopus release) **only a single** peer is supported where a peer represents a Ceph cluster.
**@BlaineEXE (Member, Author), Jul 29, 2021:** I don't think the (Ceph Octopus release) sidebar here is particularly useful for users. This only becomes relevant once a Ceph version does support multiple peers.
+ * `peers`: to configure mirroring peers. See the prerequisite [RBD Mirror documentation](ceph-rbd-mirror-crd.md) first.
+   * `secretNames`: a list of peers to connect to. Currently **only a single** peer is supported where a peer represents a Ceph cluster.

  * `statusCheck`: Sets up pool mirroring status
    * `mirror`: displays the mirroring status
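Putting these fields together, here is a minimal sketch of a mirrored `CephBlockPool`. It assumes a replicated pool named `test` (the name used in the RBD mirror examples below) and a peer Secret named `europe-cluster-peer-pool-test-1`; the schedule values are illustrative, not defaults:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: test
  namespace: rook-ceph
spec:
  replicated:
    size: 3                  # assumption: a simple 3-replica pool
  mirroring:
    enabled: true
    mode: image              # or "pool" to mirror all images in the pool
    snapshotSchedules:
      - interval: 24h        # one snapshot per day
        startTime: 14:00:00-05:00   # optional ISO 8601 start time
    peers:
      secretNames:
        - "europe-cluster-peer-pool-test-1"   # see the RBD Mirror documentation
```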
52 changes: 2 additions & 50 deletions Documentation/ceph-rbd-mirror-crd.md
@@ -49,53 +49,5 @@ If any setting is unspecified, a suitable default will be used automatically.

### Configuring mirroring peers

On an external site you want to mirror with, you need to create a bootstrap peer token.
The token will be used by one site to **pull** images from the other site.
The following assumes the name of the pool is "test" and the site name "europe" (just like the region), so we will be pulling images from this site:

```console
external-cluster-console # rbd mirror pool peer bootstrap create test --site-name europe
```

For more details, refer to the official rbd mirror documentation on [how to create a bootstrap peer](https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#bootstrap-peers).

When the peer token is available, you need to create a Kubernetes Secret.
The `europe-cluster-peer-pool-test-1` Secret must be created manually, like so:

```console
$ kubectl -n rook-ceph create secret generic "europe-cluster-peer-pool-test-1" \
--from-literal=token=eyJmc2lkIjoiYzZiMDg3ZjItNzgyOS00ZGJiLWJjZmMtNTNkYzM0ZTBiMzVkIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFBV1lsWmZVQ1Q2RGhBQVBtVnAwbGtubDA5YVZWS3lyRVV1NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMTExLjEwOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTA6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjEyOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTI6Njc4OV0sW3YyOjE5Mi4xNjguMTExLjExOjMzMDAsdjE6MTkyLjE2OC4xMTEuMTE6Njc4OV0ifQ== \
--from-literal=pool=test
```

Rook will read both the `token` and `pool` keys from the Secret's data.
Rook also accepts an optional `destination` key, which specifies the mirroring direction.
It defaults to `rx-tx` for bidirectional mirroring, but can also be set to `rx-only` for unidirectional mirroring.
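
Equivalently, the Secret can be written as a manifest. This is a sketch: the token value is the bootstrap token generated on the external site (truncated here), and `destination` is the optional key described above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: europe-cluster-peer-pool-test-1
  namespace: rook-ceph
type: Opaque
stringData:
  token: eyJmc2lkIjoiYzZiMDg3ZjItNzgyOS00ZGJiLWJjZmMtNTNkYzM0ZTBiMzVkIi...   # truncated bootstrap token
  pool: test
  destination: rx-tx         # optional; rx-only for one-way mirroring
```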

You can now apply the rbdmirror CR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  count: 1
  peers:
    secretNames:
      - "europe-cluster-peer-pool-test-1"
```

You can add more pools by repeating the steps above and changing the `pool` value of each Kubernetes Secret.
The list might eventually look like:

```yaml
peers:
  secretNames:
    - "europe-cluster-peer-pool-test-1"
    - "europe-cluster-peer-pool-test-2"
    - "europe-cluster-peer-pool-test-3"
```

Along with three corresponding Kubernetes Secrets.
Configure mirroring peers individually for each CephBlockPool. Refer to the
[CephBlockPool documentation](ceph-pool-crd.md#mirroring) for more detail.
28 changes: 28 additions & 0 deletions Documentation/ceph-upgrade.md
@@ -373,6 +373,34 @@ At this point, your Rook operator should be running version `rook/ceph:v1.7.0`.

Verify the Ceph cluster's health using the [health verification section](#health-verification).

### **6. Update CephRBDMirror and CephBlockPool configs**

If you are not using a `CephRBDMirror` in your Rook cluster, you may disregard this section.

Otherwise, please note that the location of the `CephRBDMirror` `spec.peers` config has moved to
`CephBlockPool` `spec.mirroring.peers` in Rook v1.7. This change allows each pool to have its own
peer and enables pools to re-use an existing peer secret if it points to the same cluster peer.

You may wish to see the [CephBlockPool spec Documentation](ceph-pool-crd.md#spec) for the latest
configuration advice.

The pre-existing config location in `CephRBDMirror` `spec.peers` will continue to be supported, but
users are still encouraged to migrate this setting from `CephRBDMirror` to relevant `CephBlockPool`
resources.
**@BlaineEXE (Member, Author), commenting on lines +387 to +389:** @sp98 @leseb is this still a good statement to make in the upgrade guide for 1.7?

**Member:** Yes it is.
To migrate the setting, follow these steps:

1. Stop the Rook-Ceph operator by downscaling the Deployment to zero replicas.

   ```sh
   kubectl -n $ROOK_OPERATOR_NAMESPACE scale deployment rook-ceph-operator --replicas=0
   ```

2. Copy the `spec.peers` config from `CephRBDMirror` to every `CephBlockPool` in your cluster that has mirroring enabled (see the sketch after these steps).
3. Remove the `peers` spec from the `CephRBDMirror` resource.
4. Resume the Rook-Ceph operator by scaling the Deployment back to one replica.

   ```sh
   kubectl -n $ROOK_OPERATOR_NAMESPACE scale deployment rook-ceph-operator --replicas=1
   ```
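
For illustration, steps 2 and 3 amount to a move like the following sketch. The `CephRBDMirror` name, pool name, and Secret name are reused from the mirroring examples elsewhere in this PR and are assumptions, not required values:

```yaml
# Before (pre-v1.7): peers are listed on the CephRBDMirror
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  count: 1
  peers:                     # remove this block (step 3)
    secretNames:
      - "europe-cluster-peer-pool-test-1"
---
# After (v1.7): the same peers block moves under each mirrored pool (step 2)
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: test
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image              # assumption; keep whatever mode the pool already uses
    peers:
      secretNames:
        - "europe-cluster-peer-pool-test-1"   # existing peer Secrets can be re-used
```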


## Ceph Version Upgrades
