From e584efa96694ea532657d7ce48fcbf853f8102c1 Mon Sep 17 00:00:00 2001
From: Yug Gupta
Date: Wed, 20 Oct 2021 09:11:33 +0530
Subject: [PATCH 1/4] docs: add a document to set up rbd mirroring

The document tracks the steps which are required to set up rbd mirroring
on clusters.

Signed-off-by: Yug Gupta
---
 Documentation/rbd-mirroring.md | 424 +++++++++++++++++++++++++++++++++
 1 file changed, 424 insertions(+)
 create mode 100644 Documentation/rbd-mirroring.md

diff --git a/Documentation/rbd-mirroring.md b/Documentation/rbd-mirroring.md
new file mode 100644
index 000000000000..ba4d67dce8a3
--- /dev/null
+++ b/Documentation/rbd-mirroring.md
@@ -0,0 +1,424 @@
---
title: RBD Mirroring
weight: 3242
indent: true
---

# RBD Mirroring

## Disaster Recovery

Disaster recovery (DR) is an organization's ability to react to and recover from an incident that negatively affects business operations.
A DR plan comprises strategies for minimizing the consequences of a disaster, so that an organization can continue to operate or quickly resume key operations.
Disaster recovery is thus one aspect of [business continuity](https://en.wikipedia.org/wiki/Business_continuity_planning).
One solution for achieving it is [RBD mirroring](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/).

## RBD Mirroring

[RBD mirroring](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/)
 is asynchronous replication of RBD images between multiple Ceph clusters.
 This capability is available in two modes:

* Journal-based: Every write to the RBD image is first recorded
  to the associated journal before modifying the actual image.
  The remote cluster will read from this associated journal and
  replay the updates to its local image.
* Snapshot-based: This mode uses periodically scheduled or
  manually created RBD image mirror-snapshots to replicate
  crash-consistent RBD images between clusters.

> **Note**: This document covers rbd mirroring and how to set it up using Rook.
> For steps on failover or failback scenarios, refer to the
> [failover and failback guide](async-disaster-recovery.md).

## Table of Contents

* [Create RBD Pools](#create-rbd-pools)
* [Bootstrap Peers](#bootstrap-peers)
* [Configure the RBDMirror Daemon](#configure-the-rbdmirror-daemon)
* [Add mirroring peer information to RBD pools](#add-mirroring-peer-information-to-rbd-pools)
* [Create VolumeReplication CRDs](#create-volumereplication-crds)
* [Enable CSI Replication Sidecars](#enable-csi-replication-sidecars)
* [Volume Replication Custom Resources](#volume-replication-custom-resources)
* [Enable mirroring on a PVC](#enable-mirroring-on-a-pvc)
  * [Creating a VolumeReplicationClass CR](#create-a-volume-replication-class-cr)
  * [Creating a VolumeReplication CR](#create-a-volumereplication-cr)
  * [Check VolumeReplication CR status](#checking-replication-status)
* [Backup and Restore](#backup-&-restore)

## Create RBD Pools

In this section, we create RBD pools with mirroring enabled, for use
 with the DR use case.

Execute the following steps on each peer cluster to create mirror
 enabled pools:

* Create an RBD pool that is enabled for mirroring by adding the section
  `spec.mirroring` in the CephBlockPool CR:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirroredpool
  namespace: rook-ceph
spec:
  replicated:
    size: 1
  mirroring:
    enabled: true
    mode: image
```

```bash
kubectl create -f pool-mirrored.yaml
```

* Repeat the steps on the peer cluster.
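To confirm that the pool was created with mirroring enabled, you can read the mirroring settings back from the CephBlockPool CR. A minimal check, assuming the pool name `mirroredpool` and namespace `rook-ceph` used above:

```bash
# Print whether mirroring is enabled on the pool, and in which mode
kubectl get cephblockpool mirroredpool -n rook-ceph -o jsonpath='{.spec.mirroring.enabled}{"\n"}{.spec.mirroring.mode}{"\n"}'
```

The two output lines should read `true` and `image` respectively.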
+ +> **NOTE:** Pool name across the cluster peers must be the same +> for RBD replication to function. + +See the [CephBlockPool documentation](ceph-pool-crd.md#mirroring) for more details. + +> **Note:** It is also feasible to edit existing pools and +> enable them for replication. + +## Bootstrap Peers + +In order for the rbd-mirror daemon to discover its peer cluster, the + peer must be registered and a user account must be created. + +The following steps enable bootstrapping peers to discover and + authenticate to each other: + +* For Bootstrapping a peer cluster its bootstrap secret is required. To determine the name of the secret that contains the bootstrap secret execute the following command on the remote cluster (cluster-2) + +```bash +[cluster-2]$ kubectl get cephblockpool.ceph.rook.io/mirroredpool -n rook-ceph -ojsonpath='{.status.info.rbdMirrorBootstrapPeerSecretName}' +``` + +Here, `pool-peer-token-mirroredpool` is the desired bootstrap secret name. + +* The secret pool-peer-token-mirroredpool contains all the information related to the token and needs to be injected to the peer, to fetch the decoded secret: + +```bash +[cluster-2]$ kubectl get secret -n rook-ceph pool-peer-token-mirroredpool -o jsonpath='{.data.token}'|base64 -d +``` + +> ```bash +>eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0= +> ``` + +* With this Decoded value, create a secret on the primary site (cluster-1): + +```bash +[cluster-1]$ kubectl -n rook-ceph create secret generic rbd-primary-site-secret --from-literal=token=eyJmc2lkIjoiNGQ1YmNiNDAtNDY3YS00OWVkLThjMGEtOWVhOGJkNDY2OTE3IiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFDZ3hmZGdxN013R0JBQWZzcUtCaGpZVjJUZDRxVzJYQm5kemc9PSIsIm1vbl9ob3N0IjoiW3YyOjE5Mi4xNjguMzkuMzY6MzMwMCx2MToxOTIuMTY4LjM5LjM2OjY3ODldIn0= --from-literal=pool=mirroredpool +``` + +* This completes the bootstrap process for cluster-1 to be peered with cluster-2. +* Repeat the process switching cluster-2 in place of cluster-1, to complete the bootstrap process across both peer clusters. + +For more details, refer to the official rbd mirror documentation on + [how to create a bootstrap peer](https://docs.ceph.com/en/latest/rbd/rbd-mirroring/#bootstrap-peers). + +## Configure the RBDMirror Daemon + +Replication is handled by the rbd-mirror daemon. The rbd-mirror daemon + is responsible for pulling image updates from the remote, peer cluster, + and applying them to image within the local cluster. 

Creation of the rbd-mirror daemon(s) is done through the custom resource
 definitions (CRDs), as follows:

* Create mirror.yaml to deploy the rbd-mirror daemon:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  # the number of rbd-mirror daemons to deploy
  count: 1
```

* Create the RBD mirror daemon:

```bash
[cluster-1]$ kubectl create -f mirror.yaml -n rook-ceph
```

* Validate that the `rbd-mirror` daemon pod is up:

```bash
[cluster-1]$ kubectl get pods -n rook-ceph
```

> ```bash
> rook-ceph-rbd-mirror-a-6985b47c8c-dpv4k   1/1   Running   0   10s
> ```

* Verify that the daemon health is OK:

```bash
kubectl get cephblockpools.ceph.rook.io mirroredpool -n rook-ceph -o jsonpath='{.status.mirroringStatus.summary}'
```

> ```bash
> {"daemon_health":"OK","health":"OK","image_health":"OK","states":{"replaying":1}}
> ```

* Repeat the above steps on the peer cluster.

See the [CephRBDMirror CRD](ceph-rbd-mirror-crd.md) for more details on the mirroring settings.

## Add mirroring peer information to RBD pools

Each pool can have its own peer. To add the peer information, patch the already created mirroring-enabled pool
to update the CephBlockPool CRD:

```bash
[cluster-1]$ kubectl -n rook-ceph patch cephblockpool mirroredpool --type merge -p '{"spec":{"mirroring":{"peers": {"secretNames": ["rbd-primary-site-secret"]}}}}'
```

## Create VolumeReplication CRDs

The Volume Replication Operator follows the controller pattern and provides extended
APIs for storage disaster recovery. The extended APIs are provided via Custom
Resource Definitions (CRDs). Create the VolumeReplication CRDs on all the peer clusters:

```bash
$ kubectl create -f https://raw.githubusercontent.com/csi-addons/volume-replication-operator/v0.1.0/config/crd/bases/replication.storage.openshift.io_volumereplications.yaml

$ kubectl create -f https://raw.githubusercontent.com/csi-addons/volume-replication-operator/v0.1.0/config/crd/bases/replication.storage.openshift.io_volumereplicationclasses.yaml
```

## Enable CSI Replication Sidecars

To achieve RBD mirroring, the `csi-omap-generator` and `volume-replication`
 containers need to be deployed in the RBD provisioner pods; they are not enabled by default.

* **Omap Generator**: The omap generator is a sidecar container that, when
  deployed with the CSI provisioner pod, generates the internal CSI
  omaps between the PV and the RBD image. This is required because static PVs are
  transferred across peer clusters in the DR use case, and the omaps
  are needed to preserve the PVC-to-storage mappings.

* **Volume Replication Operator**: The Volume Replication Operator is a
  Kubernetes operator that provides common and reusable APIs for
  storage disaster recovery.
  It is based on the [csi-addons/spec](https://github.com/csi-addons/spec)
  specification and can be used by any storage provider.
  For more details, refer to the [volume replication operator](https://github.com/csi-addons/volume-replication-operator).
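Before editing the operator configuration in the next step, you can check whether these sidecars are already enabled. A minimal check, assuming the default `rook-ceph` operator namespace:

```bash
# Empty output (or "false") means the sidecars still need to be enabled
kubectl -n rook-ceph get cm rook-ceph-operator-config -o jsonpath='{.data.CSI_ENABLE_OMAP_GENERATOR}{"\n"}{.data.CSI_ENABLE_VOLUME_REPLICATION}{"\n"}'
```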

Execute the following steps on each peer cluster to enable the
 OMap generator and Volume Replication sidecars:

* Edit the `rook-ceph-operator-config` configmap and add the
  following configurations:

```bash
kubectl edit cm rook-ceph-operator-config -n rook-ceph
```

Add the following properties if not present:

```yaml
data:
  CSI_ENABLE_OMAP_GENERATOR: "true"
  CSI_ENABLE_VOLUME_REPLICATION: "true"
```

* After updating the configmap with those settings, two new sidecars
  should now start automatically in the CSI provisioner pod.
* Repeat the steps on the peer cluster.

## Volume Replication Custom Resources

The VolumeReplication CRDs provide support for two custom resources:

* **VolumeReplicationClass**: *VolumeReplicationClass* is a cluster-scoped
  resource that contains driver-related configuration parameters. It holds
  the storage admin information required by the volume replication operator.

* **VolumeReplication**: *VolumeReplication* is a namespaced resource that contains
  a reference to the storage object to be replicated and to the VolumeReplicationClass
  corresponding to the driver providing replication.

> For more information, please refer to the
> [volume-replication-operator](https://github.com/csi-addons/volume-replication-operator).

## Enable mirroring on a PVC

The guide below assumes that we have a PVC (`rbd-pvc`) in Bound state, created using
 a *StorageClass* with `Retain` reclaimPolicy:

```bash
[cluster-1]$ kubectl get pvc
```

> ```bash
> NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
> rbd-pvc   Bound    pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec   1Gi        RWO            csi-rbd-sc     44s
> ```

### Create a Volume Replication Class CR

In this case, we create a Volume Replication Class on cluster-1:

```bash
[cluster-1]$ kubectl apply -f cluster/examples/kubernetes/ceph/volume-replication-class.yaml
```

> **Note:** The `schedulingInterval` can be specified in minutes, hours,
> or days, using the suffixes `m`, `h`, and `d` respectively.
> The optional `schedulingStartTime` can be specified using the ISO 8601
> time format.

### Create a VolumeReplication CR

* Once the VolumeReplicationClass is created, create a VolumeReplication CR for
  the PVC which we intend to replicate to the secondary cluster:

```bash
[cluster-1]$ kubectl apply -f cluster/examples/kubernetes/ceph/volume-replication.yaml
```

> :memo: *VolumeReplication* is a namespace-scoped object. Thus,
> it should be created in the same namespace as the PVC.

### Checking Replication Status

`replicationState` is the state of the volume being referenced.
 Possible values are `primary`, `secondary`, and `resync`:

* `primary` denotes that the volume is the primary.
* `secondary` denotes that the volume is a secondary.
* `resync` denotes that the volume needs to be resynced.

To check the VolumeReplication CR status:

```bash
[cluster-1]$ kubectl get volumereplication pvc-volumereplication -oyaml
```

>```yaml
>...
>spec:
>  dataSource:
>    apiGroup: ""
>    kind: PersistentVolumeClaim
>    name: rbd-pvc
>  replicationState: primary
>  volumeReplicationClass: rbd-volumereplicationclass
>status:
>  conditions:
>  - lastTransitionTime: "2021-05-04T07:39:00Z"
>    message: ""
>    observedGeneration: 1
>    reason: Promoted
>    status: "True"
>    type: Completed
>  - lastTransitionTime: "2021-05-04T07:39:00Z"
>    message: ""
>    observedGeneration: 1
>    reason: Healthy
>    status: "False"
>    type: Degraded
>  - lastTransitionTime: "2021-05-04T07:39:00Z"
>    message: ""
>    observedGeneration: 1
>    reason: NotResyncing
>    status: "False"
>    type: Resyncing
>  lastCompletionTime: "2021-05-04T07:39:00Z"
>  lastStartTime: "2021-05-04T07:38:59Z"
>  message: volume is marked primary
>  observedGeneration: 1
>  state: Primary
>```

## Backup & Restore

> **NOTE:** To effectively resume operations after a failover/relocation,
> backups of the Kubernetes artifacts like deployment, PVC, PV, etc. need to be created beforehand by the admin, so that the application can be restored on the peer cluster.

Here, we take a backup of the PVC and PV objects on one site, so that they can be restored later on the peer cluster.

#### **Take backup on cluster-1**

* Take a backup of the PVC `rbd-pvc`:

```bash
[cluster-1]$ kubectl get pvc rbd-pvc -oyaml > pvc-backup.yaml
```

* Take a backup of the PV corresponding to the PVC:

```bash
[cluster-1]$ kubectl get pv/pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec -oyaml > pv-backup.yaml
```

> **Note**: We can also take backups using external tools like **Velero**.
> See the [velero documentation](https://velero.io/docs/main/) for more information.

#### **Restore the backup on cluster-2**

* Create the storageclass on the secondary cluster:

```bash
[cluster-2]$ kubectl create -f examples/rbd/storageclass.yaml
```

* Create the VolumeReplicationClass on the secondary cluster:

```bash
[cluster-2]$ kubectl apply -f cluster/examples/kubernetes/ceph/volume-replication-class.yaml
```

> ```bash
> volumereplicationclass.replication.storage.openshift.io/rbd-volumereplicationclass created
> ```

* If Persistent Volumes and Claims are created manually on the secondary cluster,
  remove the `claimRef` section from the backed-up PV objects in the yaml files, so that the
  PV can get bound to the new claim on the secondary cluster:

```yaml
...
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: rbd-pvc
    namespace: default
    resourceVersion: "64252"
    uid: 65dc0aac-5e15-4474-90f4-7a3532c621ec
  csi:
...
```

* Apply the Persistent Volume backup from the primary cluster:

```bash
[cluster-2]$ kubectl create -f pv-backup.yaml
```

* Apply the Persistent Volume Claim from the restored backup:

```bash
[cluster-2]$ kubectl create -f pvc-backup.yaml
```

* Verify that the PVC has been restored and is in Bound state:

```bash
[cluster-2]$ kubectl get pvc
```

> ```bash
> NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
> rbd-pvc   Bound    pvc-65dc0aac-5e15-4474-90f4-7a3532c621ec   1Gi        RWO            rook-ceph-block   44s
> ```

From c0b8f5a708913cf661f413fc75ff6e03f9c30383 Mon Sep 17 00:00:00 2001
From: Yug Gupta
Date: Wed, 20 Oct 2021 09:12:09 +0530
Subject: [PATCH 2/4] docs: add documents for failover and failback

add a document to track the steps for failover and failback in case of
Async DR, for the Planned Migration and Disaster Recovery use cases.
Signed-off-by: Yug Gupta
---
 Documentation/async-disaster-recovery.md | 112 +++++++++++++++++++++++
 1 file changed, 112 insertions(+)
 create mode 100644 Documentation/async-disaster-recovery.md

diff --git a/Documentation/async-disaster-recovery.md b/Documentation/async-disaster-recovery.md
new file mode 100644
index 000000000000..9577976b33c9
--- /dev/null
+++ b/Documentation/async-disaster-recovery.md
@@ -0,0 +1,112 @@
---
title: Failover and Failback
weight: 3245
indent: true
---

# RBD Asynchronous DR Failover and Failback

## Table of Contents

* [Planned Migration and Disaster Recovery](#planned-migration-and-disaster-recovery)
* [Planned Migration](#planned-migration)
  * [Relocation](#relocation)
* [Disaster Recovery](#disaster-recovery)
  * [Failover](#failover-abrupt-shutdown)
  * [Failback](#failback-post-disaster-recovery)

## Planned Migration and Disaster Recovery

Rook comes with volume replication support, which allows users to perform disaster recovery and planned migration of clusters.

The following document tracks the procedure for failover and failback in the Disaster Recovery and Planned Migration use cases.

> **Note**: The document assumes that RBD mirroring is set up between the peer clusters.
> For information on rbd mirroring and how to set it up using Rook, please refer to
> the [rbd-mirroring guide](rbd-mirroring.md).

## Planned Migration

> Use cases: datacenter maintenance, technology refresh, disaster avoidance, etc.

### Relocation

The Relocation operation is the process of switching production to a
 backup facility (normally your recovery site) or vice versa. For relocation,
 access to the image on the primary site should be stopped.
The image should now be made *primary* on the secondary cluster so that
 access can be resumed there.

> :memo: A periodic or one-time backup of
> the application should be available for restore on the secondary site (cluster-2).

Follow the steps below for a planned migration of the workload from the primary
 cluster to the secondary cluster:

* Scale down all the application pods which are using the
  mirrored PVC on the primary cluster.
* [Take a backup](rbd-mirroring.md#backup-&-restore) of the PVC and PV objects from the primary cluster.
  This can also be done using backup tools like
  [velero](https://velero.io/docs/main/).
* [Update the VolumeReplication CR](rbd-mirroring.md#create-a-volumereplication-cr) to set `replicationState` to `secondary` at the primary site.
  When the operator sees this change, it will pass the information down to the
  driver via a gRPC request to mark the dataSource as `secondary`.
* If you are manually recreating the PVC and PV on the secondary cluster,
  remove the `claimRef` section in the PV objects. (See [this](rbd-mirroring.md#restore-the-backup-on-cluster-2) for details.)
* Recreate the storageclass, PVC, and PV objects on the secondary site.
* As you are creating the static binding between PVC and PV, a new PV won’t
  be created here; the PVC will get bound to the existing PV.
* [Create the VolumeReplicationClass](rbd-mirroring.md#create-a-volume-replication-class-cr) on the secondary site.
* [Create VolumeReplications](rbd-mirroring.md#create-a-volumereplication-cr) for all the PVCs for which mirroring
  is enabled.
  * `replicationState` should be `primary` for all the PVCs on
    the secondary site.
* [Check the VolumeReplication CR status](rbd-mirroring.md#checking-replication-status) to verify that the image is marked `primary` on the secondary site.
* Once the image is marked `primary`, the PVC is ready
  to be used. Now, we can scale up the applications to use the PVC.

> :memo: **WARNING**: In the Async Disaster Recovery use case, we do not get
> the complete data.
> We only get crash-consistent data based on the snapshot interval time.

## Disaster Recovery

> Use cases: natural disasters, power failures, system failures, crashes, etc.

> **NOTE:** To effectively resume operations after a failover/relocation,
> backups of the Kubernetes artifacts like deployment, PVC, PV, etc. need to be created beforehand by the admin, so that the application can be restored on the peer cluster. For more information, see [backup and restore](rbd-mirroring.md#backup-&-restore).

### Failover (abrupt shutdown)

In case of Disaster Recovery, create the VolumeReplication CR at the secondary site.
 Since the connection to the primary site is lost, the operator automatically
 sends a gRPC request down to the driver to forcefully mark the dataSource as `primary`
 on the secondary site.

* If you are manually creating the PVC and PV on the secondary cluster, remove
  the `claimRef` section in the PV objects. (See [this](rbd-mirroring.md#restore-the-backup-on-cluster-2) for details.)
* Create the storageclass, PVC, and PV objects on the secondary site.
* As you are creating the static binding between PVC and PV, a new PV won’t be
  created here; the PVC will get bound to the existing PV.
* [Create the VolumeReplicationClass](rbd-mirroring.md#create-a-volume-replication-class-cr) and [VolumeReplication CR](rbd-mirroring.md#create-a-volumereplication-cr) on the secondary site.
* [Check the VolumeReplication CR status](rbd-mirroring.md#checking-replication-status) to verify that the image is marked `primary` on the secondary site.
* Once the image is marked `primary`, the PVC is ready to be used. Now,
  we can scale up the applications to use the PVC.

### Failback (post-disaster recovery)

Once the failed cluster is recovered on the primary site and you want to fail back
 from the secondary site, follow the steps below:

* Scale down the running applications (if any) on the primary site.
  Ensure that all persistent volumes in use by the workload are no
  longer in use on the primary cluster.
* [Update the VolumeReplication CR](rbd-mirroring.md#create-a-volumereplication-cr) `replicationState`
  from `primary` to `secondary` on the primary site.
* Scale down the applications on the secondary site.
* [Update the VolumeReplication CR](rbd-mirroring.md#create-a-volumereplication-cr) `replicationState` from `primary` to
  `secondary` on the secondary site.
* On the primary site, [verify that the VolumeReplication status](rbd-mirroring.md#checking-replication-status) shows the
  volume as ready to use.
* Once the volume is marked ready to use, change the `replicationState`
  from `secondary` to `primary` on the primary site.
* Scale up the applications again on the primary site.

From 41ef8561d03cb10bd8f53e23286af1bd5ce862f1 Mon Sep 17 00:00:00 2001
From: Yug Gupta
Date: Wed, 20 Oct 2021 09:12:45 +0530
Subject: [PATCH 3/4] ceph: add mirrored pool yaml

Add a new yaml for creating pools that have mirroring enabled.
Signed-off-by: Yug Gupta
---
 .../examples/kubernetes/ceph/pool-mirrored.yaml | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
 create mode 100644 cluster/examples/kubernetes/ceph/pool-mirrored.yaml

diff --git a/cluster/examples/kubernetes/ceph/pool-mirrored.yaml b/cluster/examples/kubernetes/ceph/pool-mirrored.yaml
new file mode 100644
index 000000000000..7fe22d1980e5
--- /dev/null
+++ b/cluster/examples/kubernetes/ceph/pool-mirrored.yaml
@@ -0,0 +1,16 @@
#################################################################################################################
# Create a mirroring enabled Ceph pool.
# kubectl create -f pool-mirrored.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirroredpool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image

From 0f62a7c03d56a23729a74279e476660e85a49053 Mon Sep 17 00:00:00 2001
From: Yug Gupta
Date: Wed, 20 Oct 2021 09:13:09 +0530
Subject: [PATCH 4/4] ceph: add volume replication cr yaml

Add a new yaml for creating a volume replication class and a volume
replication cr.

Signed-off-by: Yug Gupta
---
 .../kubernetes/ceph/volume-replication-class.yaml | 12 ++++++++++++
 .../examples/kubernetes/ceph/volume-replication.yaml | 11 +++++++++++
 2 files changed, 23 insertions(+)
 create mode 100644 cluster/examples/kubernetes/ceph/volume-replication-class.yaml
 create mode 100644 cluster/examples/kubernetes/ceph/volume-replication.yaml

diff --git a/cluster/examples/kubernetes/ceph/volume-replication-class.yaml b/cluster/examples/kubernetes/ceph/volume-replication-class.yaml
new file mode 100644
index 000000000000..5700285cf2ea
--- /dev/null
+++ b/cluster/examples/kubernetes/ceph/volume-replication-class.yaml
@@ -0,0 +1,12 @@
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplicationClass
metadata:
  name: rbd-volumereplicationclass
spec:
  provisioner: rook-ceph.rbd.csi.ceph.com
  parameters:
    mirroringMode: snapshot
    schedulingInterval: "12m"
    schedulingStartTime: "16:18:43"
    replication.storage.openshift.io/replication-secret-name: rook-csi-rbd-provisioner
    replication.storage.openshift.io/replication-secret-namespace: rook-ceph
diff --git a/cluster/examples/kubernetes/ceph/volume-replication.yaml b/cluster/examples/kubernetes/ceph/volume-replication.yaml
new file mode 100644
index 000000000000..8b26e369d53a
--- /dev/null
+++ b/cluster/examples/kubernetes/ceph/volume-replication.yaml
@@ -0,0 +1,11 @@
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplication
metadata:
  name: pvc-volumereplication
spec:
  volumeReplicationClass: rbd-volumereplicationclass
  replicationState: primary
  dataSource:
    apiGroup: ""
    kind: PersistentVolumeClaim
    name: rbd-pvc # Name of the PVC on which mirroring is to be enabled.
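  # To demote this volume later (for example, during planned migration or
  # failback), change replicationState from "primary" to "secondary".
  # See Documentation/async-disaster-recovery.md for the full procedure.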