docs: add a document to set up RBD mirroring
The document tracks the steps required
to set up RBD mirroring on clusters.

Signed-off-by: Yug Gupta <yuggupta27@gmail.com>
Yuggupta27 committed Oct 7, 2021
1 parent 635fc43 commit 235fb09
328 changes: 328 additions & 0 deletions Documentation/rbd-mirroring.md

---
title: RBD Mirroring
weight: 3242
indent: true
---

# RBD Mirroring

[RBD mirroring](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/)
is the asynchronous replication of RBD images between multiple Ceph clusters.
This capability is available in two modes:

* Journal-based: Every write to the RBD image is first recorded
to the associated journal before modifying the actual image.
The remote cluster will read from this associated journal and
replay the updates to its local image.
* Snapshot-based: This mode uses periodically scheduled or
manually created RBD image mirror-snapshots to replicate
crash-consistent RBD images between clusters (see the scheduling
sketch below).
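
For snapshot-based mirroring, mirror-snapshots can be scheduled per pool or per image with the `rbd` CLI (for example, from the Rook toolbox pod); a minimal sketch, using the pool created later in this guide and a hypothetical image name:

```bash
# schedule a mirror-snapshot every 12 minutes for one image
rbd mirror snapshot schedule add --pool mirroredpool --image test-image 12m
# list the configured schedules for the pool and its images
rbd mirror snapshot schedule ls --pool mirroredpool --recursive
```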

## Table of Contents

* [Create RBD Pools](#create-rbd-pools)
* [Bootstrap Peers](#bootstrap-peers)
* [Configure the RBDMirror Daemon](#configure-the-rbdmirror-daemon)
* [Add mirroring peer information to RBD pools](#add-mirroring-peer-information-to-rbd-pools)
* [Enable CSI Replication Sidecars](#enable-csi-replication-sidecars)
* [Volume Replication Custom Resources](#volume-replication-custom-resources)
* [Enable mirroring on a PVC](#enable-mirroring-on-a-pvc)
  * [Create a Volume Replication Class CR](#create-a-volume-replication-class-cr)
  * [Create a VolumeReplication CR](#create-a-volumereplication-cr)
  * [Checking Replication Status](#checking-replication-status)

## Create RBD Pools

In this section, we create RBD pools with mirroring enabled, for use
with the DR use case.

> :memo: **Note:** It is also feasible to edit existing pools and
> enable them for replication.

Execute the following steps on each peer cluster to create
mirror-enabled pools:

* Create an RBD pool that is enabled for mirroring by adding the
`spec.mirroring` section to the CephBlockPool CR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirroredpool
  namespace: rook-ceph
spec:
  replicated:
    size: 1
  mirroring:
    enabled: true
    mode: image
```

Save this manifest as `pool-mirrored.yaml` and create the pool:

```bash
kubectl create -f pool-mirrored.yaml
```

* Repeat the steps on the peer cluster.

> :memo: **WARNING:** Pool names across the cluster peers must be the same
> for RBD replication to function.

See the [CephBlockPool documentation](ceph-pool-crd.md#mirroring) for more details.

## Bootstrap Peers

In order for the rbd-mirror daemon to discover its peer cluster, the
peer must be registered and a user account must be created.

The following steps enable bootstrapping peers to discover and
authenticate to each other:

* Bootstrapping a peer cluster requires the bootstrap secret of that peer. To determine the name of the secret that contains the bootstrap token, execute the following command on the remote cluster (cluster-2):

```bash
[cluster-2]$ kubectl get cephblockpool.ceph.rook.io/mirroredpool -n rook-ceph -ojsonpath='{.status.info.rbdMirrorBootstrapPeerSecretName}'
```

> ```bash
> pool-peer-token-mirroredpool
> ```

Here, `pool-peer-token-mirroredpool` is the name of the bootstrap secret.

* The secret `pool-peer-token-mirroredpool` contains all of the information related to the token and needs to be injected into the peer. To fetch the decoded token:

```bash
[cluster-2]$ kubectl get secret -n rook-ceph pool-peer-token-mirroredpool -o jsonpath='{.data.token}'|base64 -d
```

> ```bash
> eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0=
> ```
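
This token is itself base64-encoded JSON describing the peer connection; decoding it once more is a quick sanity check (the fields shown come from the example token above, with the key and monitor addresses elided here):

```bash
# decode the token to inspect the peer connection details
echo '<token-from-above>' | base64 -d
# {"fsid":"4d5bcb40-467a-49ed-8c0a-9ea8bd466917","client_id":"rbd-mirror-peer","key":"...","mon_host":"..."}
```
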
* Get the site name from the secondary cluster (cluster-2):

```bash
[cluster-2]$ kubectl get cephblockpools mirroredpool -n rook-ceph -o jsonpath='{.status.mirroringInfo.site_name}'
```
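
The output is the peer's site name; in this walkthrough it is the value used as the secret name in the next step:

> ```bash
> 5a91d009-9e8b-46af-b311-c51aff3a7b49
> ```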

* With this decoded value, create a secret on the primary site (cluster-1), using the site name of the peer as the secret name:

```bash
[cluster-1]$ kubectl -n rook-ceph create secret generic 5a91d009-9e8b-46af-b311-c51aff3a7b49 --from-literal=token=eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0= --from-literal=pool=mirroredpool
```

* This completes the bootstrap process for cluster-1 to be peered with cluster-2.
* Repeat the process with the roles of cluster-1 and cluster-2 swapped, to complete the bootstrap process across both peer clusters.

For more details, refer to the official RBD mirroring documentation on
[how to create a bootstrap peer](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#bootstrap-peers).

## Configure the RBDMirror Daemon

Replication is handled by the rbd-mirror daemon. The rbd-mirror daemon
is responsible for pulling image updates from the remote peer cluster
and applying them to the image within the local cluster.

Creation of the rbd-mirror daemon(s) is done through custom resource
definitions (CRDs), as follows:

* Create `mirror.yaml` to deploy the rbd-mirror daemon:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  # the number of rbd-mirror daemons to deploy
  count: 1
```

* Create the RBD mirror daemon

```bash
[cluster-1]$ kubectl create -f mirror.yaml -n rook-ceph
```

* Validate that the `rbd-mirror` daemon pod is up:

```bash
[cluster-1]$ kubectl get pods -n rook-ceph
```

> ```bash
> rook-ceph-rbd-mirror-a-6985b47c8c-dpv4k 1/1 Running 0 10s
> ```

* Verify that the daemon health is OK:

```bash
[cluster-1]$ kubectl get cephblockpools.ceph.rook.io mirroredpool -n rook-ceph -o jsonpath='{.status.mirroringStatus.summary}'
```

> ```bash
> {"daemon_health":"OK","health":"OK","image_health":"OK","states":{"replaying":1}}
> ```

* Repeat the above steps on the peer cluster.

See the [CephRBDMirror CRD](ceph-rbd-mirror-crd.md) for more details on the mirroring settings.


## Add mirroring peer information to RBD pools

Each pool can have its own peer. To add the peer information, patch the previously created
mirroring-enabled pool to update the CephBlockPool CR.

```bash
[cluster-1]$ kubectl -n rook-ceph patch cephblockpool mirroredpool --type merge -p '{"spec":{"mirroring":{"peers": {"secretNames": ["5a91d009-9e8b-46af-b311-c51aff3a7b49"]}}}}'
```
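
The peer should now appear in the pool's mirroring status. A quick check (the exact field layout may vary by Rook version, so inspect the full status if it differs):

```bash
# confirm that the peer has been registered for the pool
[cluster-1]$ kubectl get cephblockpool mirroredpool -n rook-ceph -o jsonpath='{.status.mirroringInfo}'
```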

## Enable CSI Replication Sidecars

To achieve RBD mirroring, the `csi-omap-generator` and `volume-replication`
containers need to be deployed in the RBD provisioner pods.

* **Omap Generator**: The omap generator is a sidecar container that, when
deployed with the CSI provisioner pod, generates the internal CSI
omaps between the PV and the RBD image. This is required because static PVs
are transferred across peer clusters in the DR use case, and the omaps
preserve the PVC-to-storage mappings.

* **Volume Replication Operator**: The Volume Replication Operator is a
Kubernetes operator that provides common and reusable APIs for
storage disaster recovery.
It is based on the [csi-addons/spec](https://github.com/csi-addons/spec)
specification and can be used by any storage provider.
For more details, refer to the [volume replication operator](https://github.com/csi-addons/volume-replication-operator).

Execute the following steps on each peer cluster to enable the
OMap generator and Volume Replication sidecars:

* Edit the `rook-ceph-operator-config` configmap and add the
following configuration:

```bash
kubectl edit cm rook-ceph-operator-config -n rook-ceph
```

Add the following properties if not present:

```yaml
data:
  CSI_ENABLE_OMAP_GENERATOR: "true"
  CSI_ENABLE_VOLUME_REPLICATION: "true"
```
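
Alternatively, the same two keys can be set non-interactively with a patch; a minimal sketch:

```bash
# enable both sidecars without opening an editor
kubectl -n rook-ceph patch cm rook-ceph-operator-config --type merge \
  -p '{"data":{"CSI_ENABLE_OMAP_GENERATOR":"true","CSI_ENABLE_VOLUME_REPLICATION":"true"}}'
```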

* After updating the configmap with those settings, two new sidecars
should start automatically in the CSI provisioner pod; a check is
sketched after this list.
* Repeat the steps on the peer cluster.
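
To confirm the sidecars are running, list the containers of the RBD provisioner pod (assuming the default `app=csi-rbdplugin-provisioner` label used by Rook); the `csi-omap-generator` and `volume-replication` containers should be present:

```bash
# print the container names of one RBD provisioner pod
kubectl -n rook-ceph get pod -l app=csi-rbdplugin-provisioner \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```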

## Volume Replication Custom Resources

The Volume Replication Operator follows the controller pattern and provides extended
APIs for storage disaster recovery. The extended APIs are provided via Custom
Resource Definitions (CRDs). It provides support for two custom resources:

* **VolumeReplicationClass**: *VolumeReplicationClass* is a cluster scoped
resource that contains driver related configuration parameters. It holds
the storage admin information required for the volume replication operator.

* **VolumeReplication**: *VolumeReplication* is a namespaced resource that
contains a reference to the storage object to be replicated and to the
VolumeReplicationClass corresponding to the driver providing replication.

> For more information, please refer to the
> [volume-replication-operator](https://github.com/csi-addons/volume-replication-operator).

## Enable mirroring on a PVC

The guide below assumes that we have a PVC (rbd-pvc) in Bound state, created
using a *StorageClass* with the `Retain` reclaimPolicy.

```bash
[cluster-1]$ kubectl get pvc
```

> ```bash
> NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
> rbd-pvc Bound pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec 1Gi RWO csi-rbd-sc 44s
> ```

### Create a Volume Replication Class CR

In this case, we create a Volume Replication Class on cluster-1:

```bash
[cluster-1]$ kubectl apply -f cluster/examples/kubernetes/ceph/volume-replication-class.yaml
```

> **Note:** The `schedulingInterval` can be specified in formats of
> minutes, hours, or days using the suffixes `m`, `h`, and `d`
> respectively. The optional `schedulingStartTime` can be specified
> using the ISO 8601 time format.
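
For reference, a sketch of what `volume-replication-class.yaml` might contain; the class name matches the one referenced later in this guide, while the driver name and parameters are assumptions based on a default Rook deployment:

```yaml
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplicationClass
metadata:
  name: rbd-volumereplicationclass
spec:
  # CSI driver name; this value assumes the operator runs in the
  # rook-ceph namespace
  provisioner: rook-ceph.rbd.csi.ceph.com
  parameters:
    # take a mirror-snapshot every 12 minutes (example value)
    schedulingInterval: "12m"
    # credentials of the RBD provisioner (assumed default secret names)
    replication.storage.openshift.io/replication-secret-name: rook-csi-rbd-provisioner
    replication.storage.openshift.io/replication-secret-namespace: rook-ceph
```
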
### Create a VolumeReplication CR

* Once the VolumeReplicationClass is created, create a VolumeReplication CR for
the PVC which we intend to replicate to the secondary cluster.

```bash
[cluster-1]$ kubectl apply -f cluster/examples/kubernetes/ceph/volume-replication.yaml
```

> :memo: *VolumeReplication* is a namespace-scoped object. Thus,
> it should be created in the same namespace as the PVC.
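
For reference, a sketch of what `volume-replication.yaml` might contain, reconstructed from the CR spec shown in the status output below (the `default` namespace is an assumption; use the namespace of the PVC):

```yaml
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplication
metadata:
  name: pvc-volumereplication
  namespace: default
spec:
  volumeReplicationClass: rbd-volumereplicationclass
  # desired state of the volume; see the states described below
  replicationState: primary
  dataSource:
    apiGroup: ""
    kind: PersistentVolumeClaim
    # name of the PVC to replicate
    name: rbd-pvc
```
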
### Checking Replication Status

`replicationState` is the state of the volume being referenced.
Possible values are `primary`, `secondary`, and `resync`:

* `primary` denotes that the volume is primary.
* `secondary` denotes that the volume is secondary.
* `resync` denotes that the volume needs to be resynced.
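
The state is changed by editing `spec.replicationState` on the CR and letting the operator reconcile; for example, a sketch of demoting the volume on cluster-1 during a planned failover:

```bash
# mark the volume secondary; the operator demotes the underlying RBD image
[cluster-1]$ kubectl patch volumereplication pvc-volumereplication \
  --type merge -p '{"spec":{"replicationState":"secondary"}}'
```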

To check VolumeReplication CR status:

```bash
[cluster-1]$ kubectl get volumereplication pvc-volumereplication -o yaml
```

> ```yaml
> ...
> spec:
>   dataSource:
>     apiGroup: ""
>     kind: PersistentVolumeClaim
>     name: rbd-pvc
>   replicationState: primary
>   volumeReplicationClass: rbd-volumereplicationclass
> status:
>   conditions:
>   - lastTransitionTime: "2021-05-04T07:39:00Z"
>     message: ""
>     observedGeneration: 1
>     reason: Promoted
>     status: "True"
>     type: Completed
>   - lastTransitionTime: "2021-05-04T07:39:00Z"
>     message: ""
>     observedGeneration: 1
>     reason: Healthy
>     status: "False"
>     type: Degraded
>   - lastTransitionTime: "2021-05-04T07:39:00Z"
>     message: ""
>     observedGeneration: 1
>     reason: NotResyncing
>     status: "False"
>     type: Resyncing
>   lastCompletionTime: "2021-05-04T07:39:00Z"
>   lastStartTime: "2021-05-04T07:38:59Z"
>   message: volume is marked primary
>   observedGeneration: 1
>   state: Primary
> ```
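
Here, the three conditions summarize the volume's health: `Completed` (reason `Promoted`) indicates the promote operation succeeded, while `Degraded` and `Resyncing` being `"False"` indicate that the volume is healthy and not being resynced.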
