docs: add guide doc for flex migration

Adding guide doc for flex migration steps.

Signed-off-by: subhamkrai <srai@redhat.com>
subhamkrai committed Nov 24, 2021
1 parent cbe29ba commit a4b5567
Documentation/flex-to-csi-migration.md (61 additions, 0 deletions)

---
title: Flex Migration
weight: 11900
indent: true
---

# Flex to CSI Migration

In Rook v1.8, the Flex driver has been deprecated. Before updating to v1.8, any Flex volumes created in previous versions of Rook will need to be converted to Ceph-CSI volumes.

The [persistent-volume-migrator](https://github.com/ceph/persistent-volume-migrator) tool helps automate the migration of Flex RBD volumes to Ceph-CSI volumes.

## Migration Preparation

1. Rook v1.7.x is required. If you are running an older version of Rook, follow the [upgrade guide](https://rook.io/docs/rook/v1.7/ceph-upgrade.html) to upgrade release by release until you are on v1.7.x.
2. Enable the CSI driver if not already enabled. See the [operator settings](https://github.com/rook/rook/blob/release-1.7/cluster/examples/kubernetes/ceph/operator.yaml#L29-L32) such as `ROOK_CSI_ENABLE_RBD`.
3. Ensure the cluster is healthy, i.e. `ceph status` reports `HEALTH_OK` (see the verification sketch after this list).
4. Create the CSI storage class to which you want to migrate.
5. Create the migrator pod:
   1. `kubectl create -f <update this with file link in the tool repo>`
6. Download the tool binary:
   1. `wget <link to download the binary>`
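
Before proceeding, it can help to confirm the preparation steps. The commands below are only a sketch: they assume the Rook toolbox is deployed as `rook-ceph-tools`, that the CSI RBD driver pods carry the default `app=csi-rbdplugin` label, and they use `rook-ceph-block` as an example destination storage class name.

```console
# Verify the Ceph cluster is healthy (assumes the Rook toolbox deployment exists)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status

# Verify the CSI RBD driver pods are running (default label shown)
kubectl -n rook-ceph get pod -l app=csi-rbdplugin

# Verify the destination CSI storage class exists ("rook-ceph-block" is an example name)
kubectl get storageclass rook-ceph-block
```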

**NOTE**: The migration procedure requires downtime: the applications using the Flex volumes must be scaled down before the migration and scaled back up afterwards.
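
For example, if an application consuming a Flex volume is managed by a Deployment, it can be scaled down before the migration and back up afterwards. The namespace and Deployment name below are placeholders:

```console
# Scale the application down before migrating its PVC(s)
kubectl -n <app-namespace> scale deployment <app-deployment> --replicas=0

# After the migration completes, scale it back up
kubectl -n <app-namespace> scale deployment <app-deployment> --replicas=1
```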

## Migrate a PVC
1) Stop the application pods that are consuming the Flex volume(s) that need to be converted.
2) Connect to the migration pod:
   1) `migration_pod=$(kubectl -n rook-ceph get pod -l app=rook-ceph-migrator -o jsonpath='{.items[*].metadata.name}')`
   2) `kubectl -n rook-ceph exec -it "$migration_pod" -- sh`
3) Run the tool to migrate the PVC (a filled-in example follows this list):
   1) `<binary_name> --pvc=<pvc-name> --pvc-ns=<pvc-namespace> --destination-sc=<destination-storageclass-name> --rook-ns=<rook-operator-namespace> --ceph-cluster-ns=<ceph-cluster-namespace>`
4) Start the application pods that were stopped in step 1.
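
As an illustration, the steps above are put together below for a single PVC named `rbd-pvc` (the name that appears in the sample output later in this guide). The PVC namespace `default` and the destination storage class `rook-ceph-block` are example values only, and `<binary_name>` stands for the actual tool binary downloaded earlier.

```console
# Connect to the migrator pod
migration_pod=$(kubectl -n rook-ceph get pod -l app=rook-ceph-migrator -o jsonpath='{.items[*].metadata.name}')
kubectl -n rook-ceph exec -it "$migration_pod" -- sh

# Inside the pod, migrate the PVC (example values; replace with your own)
<binary_name> --pvc=rbd-pvc --pvc-ns=default --destination-sc=rook-ceph-block --rook-ns=rook-ceph --ceph-cluster-ns=rook-ceph
```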


## Tool settings

1. `--pvc`: **required**: the name of the PVC to migrate.
2. `--pvc-ns`: **required**: the namespace of the PVC to migrate.
3. `--destination-sc`: **required**: the name of the destination storage class to migrate to.
4. `--rook-ns`: **optional**: the namespace where the Rook operator is running. **Default: rook-ceph**.
5. `--ceph-cluster-ns`: **optional**: the namespace where the Ceph cluster is running. **Default: rook-ceph**.
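
Because `--rook-ns` and `--ceph-cluster-ns` both default to `rook-ceph`, a default Rook installation only needs the three required flags. A minimal invocation might look like the following (values are examples):

```console
<binary_name> --pvc=rbd-pvc --pvc-ns=default --destination-sc=rook-ceph-block
```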


After running the migration command, you should see output similar to the following:

```console
---
I1123 07:22:14.574489 65 log.go:34] Cluster connection created
I1123 07:22:14.574493 65 log.go:34] Delete the placeholder CSI volume in ceph cluster
I1123 07:22:14.790916 65 log.go:34] Successfully removed volume csi-vol-13c6fe24-4c2e-11ec-a4cf-0242ac110005
I1123 07:22:14.791196 65 log.go:34] Rename old ceph volume to new CSI volume
I1123 07:22:14.853310 65 log.go:34] successfully renamed volume csi-vol-13c6fe24-4c2e-11ec-a4cf-0242ac110005 -> pvc-5090804a-585f-46bc-a4df-e5fcda610d5c
I1123 07:22:14.853331 65 log.go:34] Delete old PV object: pvc-5090804a-585f-46bc-a4df-e5fcda610d5c
I1123 07:22:14.873745 65 log.go:34] waiting for PV pvc-5090804a-585f-46bc-a4df-e5fcda610d5c in state &PersistentVolumeStatus{Phase:Bound,Message:,Reason:,} to be deleted (0 seconds elapsed)
I1123 07:22:14.948075 65 log.go:34] deleted persistent volume pvc-5090804a-585f-46bc-a4df-e5fcda610d5c
I1123 07:22:14.948097 65 log.go:34] successfully migrated pvc rbd-pvc
I1123 07:22:14.948205 65 log.go:34] Successfully migrated all the PVCs to CSI
```

**For more options**, see the [tool documentation](https://github.com/ceph/persistent-volume-migrator), for example how to automatically convert all PVCs that belong to the same storage class.
