docs: Add a document to set-up rbd mirroring
Browse files Browse the repository at this point in the history
The document tracks the steps which are required
to set up RBD mirroring on clusters.

Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
Yuggupta27 committed Sep 8, 2021
1 parent 519479c commit 93afa39
Documentation/rbd-mirroring.md (258 additions, 0 deletions)

---
title: RBD Mirroring
weight: 3242
indent: true
---

# RBD Mirroring

[RBD mirroring](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/)
is an asynchronous replication of RBD images between multiple Ceph clusters.
This capability is available in two modes:

* Journal-based: Every write to the RBD image is first recorded
to the associated journal before modifying the actual image.
The remote cluster will read from this associated journal and
replay the updates to its local image.
* Snapshot-based: This mode uses periodically scheduled or
manually created RBD image mirror-snapshots to replicate
crash-consistent RBD images between clusters.

## Table of Contents <!-- omit in toc -->

* [Create RBD Pools](#create-rbd-pools)
* [Bootstrap Peers](#bootstrap-peers)
* [Configure the RBDMirror Daemon](#configure-the-rbdmirror-daemon)
* [Add mirroring peer information to RBD pools](#add-mirroring-peer-information-to-rbd-pools)
* [Enable CSI Replication Sidecars](#enable-csi-replication-sidecars)
* [Volume Replication Custom Resources](#volume-replication-custom-resources)

## Create RBD Pools

In this section, we create RBD pools with mirroring enabled,
for use in the DR use case.

> :memo: **Note:** It is also feasible to edit existing pools and
> enable them for replication.

Execute the following steps on each peer cluster to create
mirror-enabled pools:

* Create an RBD pool that is enabled for mirroring by adding the section
`spec.mirroring` to the CephBlockPool CR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirroredpool
  namespace: rook-ceph
spec:
  replicated:
    size: 1
  mirroring:
    enabled: true
    mode: image
    # schedule(s) of snapshot
    snapshotSchedules:
      - interval: 24h # daily snapshots
        startTime: 14:00:00-05:00
```

```bash
kubectl create -f pool-mirrored.yaml
```

> ```bash
> cephblockpool.ceph.rook.io/mirroredpool created
> ```
* Repeat the steps on the peer cluster.

> :warning: **WARNING:** The pool name must be the same across the cluster peers
> for RBD replication to function.

For more information on the CephBlockPool CR, please refer to the [ceph-pool-crd documentation](ceph-pool-crd.md#mirroring).

## Bootstrap Peers

In order for the rbd-mirror daemon to discover its peer cluster, the
peer must be registered and a user account must be created.

The following steps enable bootstrapping peers to discover and
authenticate to each other:

* Bootstrapping a peer cluster requires its bootstrap secret. To determine the name of the secret that contains the bootstrap token, execute the following command on the remote cluster (cluster-2):

```bash
kubectl get cephblockpool.ceph.rook.io/mirroredpool -n rook-ceph --context=cluster-2 -ojsonpath='{.status.info.rbdMirrorBootstrapPeerSecretName}'
```

> ```bash
> pool-peer-token-mirroredpool
> ```

Here, `pool-peer-token-mirroredpool` is the desired bootstrap secret name.

* The secret `pool-peer-token-mirroredpool` contains all the information related to the token and needs to be injected into the peer. To fetch the decoded secret:

```bash
kubectl get secret -n rook-ceph pool-peer-token-mirroredpool --context=cluster-2 -o jsonpath='{.data.token}'|base64 -d
```

> ```bash
>eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0=
> ```
* Get the site name from the secondary cluster (cluster-2):

```bash
kubectl get cephblockpools.ceph.rook.io mirroredpool -n rook-ceph --context=cluster-2 -o jsonpath='{.status.mirroringInfo.site_name}'
```
```

> ```bash
> 5a91d009-9e8b-46af-b311-c51aff3a7b49
> ```
* With this decoded value, create a secret on the primary site (cluster-1), using the site name of the peer as the secret name:

```bash
kubectl -n rook-ceph create secret generic --context=cluster-1 5a91d009-9e8b-46af-b311-c51aff3a7b49 --from-literal=token=eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0= --from-literal=pool=mirroredpool
```

> ```bash
> secret/5a91d009-9e8b-46af-b311-c51aff3a7b49 created
> ```
* This completes the bootstrap process for cluster-1 to be peered with cluster-2.
* Repeat the process, switching cluster-2 in place of cluster-1, to complete the bootstrap process across both peer clusters.
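
The token exchange above can also be scripted. The following sketch uses the example token and site name from the outputs above, and `make_peer_secret_cmd` is a hypothetical helper (not part of Rook) that only prints the injection command:

```bash
# Sketch of the peer-bootstrap exchange, using the example values shown
# above. In a real setup, TOKEN and SITE_NAME are fetched from cluster-2
# with the kubectl commands in the previous steps.
TOKEN='eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0='
SITE_NAME='5a91d009-9e8b-46af-b311-c51aff3a7b49'

# The token is base64-encoded JSON; decoding it locally shows the fsid,
# client_id, key and mon_host of the peer cluster.
echo "$TOKEN" | base64 -d; echo

# Hypothetical helper: print the kubectl command that injects the token
# into the primary cluster as a secret named after the peer's site name.
make_peer_secret_cmd() {
  printf 'kubectl -n rook-ceph create secret generic --context=cluster-1 %s --from-literal=token=%s --from-literal=pool=mirroredpool\n' "$1" "$2"
}
make_peer_secret_cmd "$SITE_NAME" "$TOKEN"
```

Printing the command before running it makes it easy to review the secret name and pool before touching the cluster.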

For more details, refer to the official rbd mirror documentation on
[how to create a bootstrap peer](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#bootstrap-peers).

## Configure the RBDMirror Daemon

Replication is handled by the rbd-mirror daemon. The rbd-mirror daemon
is responsible for pulling image updates from the remote peer cluster
and applying them to the image within the local cluster.

The rbd-mirror daemon(s) are created through the CephRBDMirror custom
resource, as follows:

* Create `mirror.yaml` to deploy the rbd-mirror daemon:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  # the number of rbd-mirror daemons to deploy
  count: 1
```

* Create the RBD mirror daemon

```bash
kubectl create -f mirror.yaml -n rook-ceph --context=cluster-1
```

> ```bash
> cephrbdmirror.ceph.rook.io/my-rbd-mirror created
> ```
* Validate that the `rbd-mirror` daemon pod is up:

```bash
kubectl get pods -n rook-ceph --context=cluster-1
```

> ```bash
> rook-ceph-rbd-mirror-a-6985b47c8c-dpv4k 1/1 Running 0 10s
> ```
* Verify that daemon health is OK

```bash
kubectl get cephblockpools.ceph.rook.io mirroredpool -n rook-ceph -o jsonpath='{.status.mirroringStatus.summary}'
```

> ```bash
> {"daemon_health":"OK","health":"OK","image_health":"OK","states":{"replaying":1}}
> ```
* Repeat the above steps on the peer cluster.
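
When scripting a health check, the `mirroringStatus.summary` JSON can be parsed instead of read by eye. A minimal sketch, run here against the sample summary above; a live check would substitute the `kubectl ... -o jsonpath` output:

```bash
# Sample summary from the output above; in practice, capture it with:
#   kubectl get cephblockpools.ceph.rook.io mirroredpool -n rook-ceph \
#     -o jsonpath='{.status.mirroringStatus.summary}'
SUMMARY='{"daemon_health":"OK","health":"OK","image_health":"OK","states":{"replaying":1}}'

# Parse the JSON with python3 and exit non-zero if any health field is not OK.
echo "$SUMMARY" | python3 -c '
import json, sys
s = json.load(sys.stdin)
bad = [k for k in ("daemon_health", "health", "image_health") if s.get(k) != "OK"]
sys.exit("unhealthy: %s" % bad if bad else 0)
'
echo "mirroring healthy"
```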

For more information on how to set up Ceph RBDMirror CRD, refer to
[rook documentation](https://rook.io/docs/rook/master/ceph-rbd-mirror-crd.html).


## Add mirroring peer information to RBD pools

Each pool can have its own peer. To add the peer information, patch the previously created
mirroring-enabled pool to update the CephBlockPool CR.

```bash
kubectl -n rook-ceph --context=cluster-1 patch cephblockpool mirroredpool --type merge -p '{"spec":{"mirroring":{"peers": {"secretNames": ["5a91d009-9e8b-46af-b311-c51aff3a7b49"]}}}}'
```

> ```bash
> cephblockpool.ceph.rook.io/mirroredpool patched
> ```
## Enable CSI Replication Sidecars

To achieve RBD Mirroring, `csi-omap-generator` and `volume-replication`
containers need to be deployed in the RBD provisioner pods.

* **Omap Generator**: The omap generator is a sidecar container that, when
deployed with the CSI provisioner pod, generates the internal CSI
omaps between the PV and the RBD image. This is required because static PVs
are transferred across peer clusters in the DR use case, and the omaps are
needed to preserve the PVC-to-storage mappings.

* **Volume Replication Operator**: The Volume Replication Operator is a
Kubernetes operator that provides common and reusable APIs for
storage disaster recovery.
It is based on the [csi-addons/spec](https://github.com/csi-addons/spec)
specification and can be used by any storage provider.
For more details, refer to [volume replication operator](https://github.com/csi-addons/volume-replication-operator).

Execute the following steps on each peer cluster to enable the
OMap generator and Volume Replication sidecars:

* Edit the `rook-ceph-operator-config` configmap and add the
following configuration:

```bash
kubectl edit cm rook-ceph-operator-config -n rook-ceph
```

Add the following configuration if not present:

```yaml
data:
  CSI_ENABLE_OMAP_GENERATOR: "true"
  CSI_ENABLE_VOLUME_REPLICATION: "true"
```

* After updating the configmap with those settings, two new sidecars
should now start automatically in the CSI provisioner pod.
* Repeat the steps on the peer cluster.

## Volume Replication Custom Resources

The Volume Replication Operator follows the controller pattern and provides
extended APIs for storage disaster recovery. The extended APIs are provided
via Custom Resource Definitions (CRDs). It supports two custom resources:

* **VolumeReplicationClass**: *VolumeReplicationClass* is a cluster scoped
resource that contains driver related configuration parameters. It holds
the storage admin information required for the volume replication operator.

* **VolumeReplication**: *VolumeReplication* is a namespaced resource that
contains a reference to the storage object to be replicated and to the
VolumeReplicationClass corresponding to the driver providing replication.

> :bulb: For more information, please refer to the
> [volume-replication-operator](https://github.com/csi-addons/volume-replication-operator).
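
As an illustration, the two resources might look like the following sketch. The resource names, the PVC name, and the parameter values here are assumptions; consult the operator's documentation for the parameters your driver requires:

```yaml
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplicationClass
metadata:
  name: rbd-volumereplicationclass   # illustrative name
spec:
  provisioner: rook-ceph.rbd.csi.ceph.com
  parameters:
    # assumed driver parameters; snapshot-based mirroring every 12 minutes
    mirroringMode: snapshot
    schedulingInterval: "12m"
---
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplication
metadata:
  name: pvc-volumereplication        # illustrative name
  namespace: rook-ceph
spec:
  volumeReplicationClass: rbd-volumereplicationclass
  replicationState: primary
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc                    # illustrative PVC to be replicated
```

Setting `replicationState` to `secondary` demotes the volume on that cluster, and `resync` requests a resynchronization after the peers have diverged.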
