docs: refactor documentation for ceph provider
With the nfs and cassandra providers moving to their own repo
we can simplify the docs a bit. Some obsolete documentation
is also removed.

Signed-off-by: Travis Nielsen <tnielsen@redhat.com>
travisn committed Sep 13, 2021
1 parent 661ed66 commit 6b7062e
Showing 32 changed files with 357 additions and 492 deletions.
19 changes: 7 additions & 12 deletions Documentation/README.md
@@ -6,23 +6,18 @@ Rook turns storage software into self-managing, self-scaling, and self-healing s

Rook integrates deeply into cloud native environments leveraging extension points and providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.

For more details about the status of storage solutions currently supported by Rook, please refer to the [project status section](https://github.com/rook/rook/blob/master/README.md#project-status) of the Rook repository.
We plan to continue adding support for other storage systems and environments based on community demand and engagement in future releases.
The Ceph operator was declared stable in December 2018 in the Rook v0.9 release, and has provided a production storage platform for several years.

## Quick Start Guides
## Quick Start Guide

Starting Rook in your cluster is as simple as a few `kubectl` commands depending on the storage provider.
See our [Quickstart](quickstart.md) guide list for the detailed instructions for each storage provider.
Starting Ceph in your cluster is as simple as a few `kubectl` commands.
See our [Quickstart](quickstart.md) guide to get started with the Ceph operator!
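
Those commands typically look like the following, shown here only as a sketch of the quickstart flow and assuming the example manifests in `cluster/examples/kubernetes/ceph/`:

```console
# from cluster/examples/kubernetes/ceph in the Rook repository
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
```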

## Storage Provider Designs
## Designs

High-level Storage Provider design documents:
[Ceph](https://docs.ceph.com/en/latest/) is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the [Ceph overview](ceph-storage.md).

| Storage Provider | Status | Description |
| ----------------------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [Ceph](ceph-storage.md) | Stable | Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. |

Low level design documentation for supported list of storage systems collected at [design docs](https://github.com/rook/rook/tree/master/design) section.
For detailed design documentation, see also the [design docs](https://github.com/rook/rook/tree/master/design).

## Need help? Be sure to join the Rook Slack

65 changes: 65 additions & 0 deletions Documentation/authenticated-registry.md
@@ -0,0 +1,65 @@
---
title: Authenticated Registries
weight: 1100
indent: true
---

## Authenticated Docker registries

If you want to use an image from an authenticated Docker registry (e.g. for an image cache/mirror), you'll need to
add an `imagePullSecret` to all relevant service accounts. This way, all pods created by the operator (service account
`rook-ceph-system`) or all new pods in the namespace (service account `default`) will have the `imagePullSecret` added
to their spec.

The whole process is described in the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).

### Example setup for a Ceph cluster

To get you started, here's a quick rundown for the Ceph example from the [quickstart guide](/Documentation/quickstart.md).

First, we'll create the secret for our registry as described [here](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod):

```console
# for the operator namespace (rook-ceph)
$ kubectl -n rook-ceph create secret docker-registry my-registry-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL

# and for the cluster namespace (also rook-ceph in this example)
$ kubectl -n rook-ceph create secret docker-registry my-registry-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
```
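
To confirm the secret exists before wiring it into the service accounts, a quick check (assuming the name used above) is:

```console
kubectl -n rook-ceph get secret my-registry-secret
```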

Next, we'll add the following snippet to all relevant service accounts, as described [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account):

```yaml
imagePullSecrets:
- name: my-registry-secret
```

The service accounts are:

* `rook-ceph-system` (namespace: `rook-ceph`): Will affect all pods created by the Rook operator in the `rook-ceph` namespace.
* `default` (namespace: `rook-ceph`): Will affect most pods in the `rook-ceph` namespace.
* `rook-ceph-mgr` (namespace: `rook-ceph`): Will affect the MGR pods in the `rook-ceph` namespace.
* `rook-ceph-osd` (namespace: `rook-ceph`): Will affect the OSD pods in the `rook-ceph` namespace.

You can add the secret either by editing the service accounts directly, e.g. `kubectl -n <namespace> edit serviceaccount default`, or by modifying the [`operator.yaml`](https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator.yaml)
and [`cluster.yaml`](https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml) before deploying them.

Since it's the same procedure for all service accounts, here is just one example:

```console
kubectl -n rook-ceph edit serviceaccount default
```

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: rook-ceph
secrets:
- name: default-token-12345
imagePullSecrets: # here are the new
- name: my-registry-secret # parts
```
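
If you prefer a non-interactive approach, the same snippet can be applied with `kubectl patch`; a minimal sketch covering the service accounts listed above, assuming the secret is named `my-registry-secret`:

```console
for sa in default rook-ceph-system rook-ceph-mgr rook-ceph-osd; do
  kubectl -n rook-ceph patch serviceaccount "$sa" \
    -p '{"imagePullSecrets": [{"name": "my-registry-secret"}]}'
done
```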

After doing this for all service accounts, all pods should be able to pull images from your registry.
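
To double-check that a service account picked up the secret, something like the following should print it, e.g. for the `default` service account:

```console
kubectl -n rook-ceph get serviceaccount default -o jsonpath='{.imagePullSecrets}'
```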
2 changes: 1 addition & 1 deletion Documentation/ceph-block.md
@@ -11,7 +11,7 @@ Block storage allows a single pod to mount storage. This guide shows how to crea

## Prerequisites

This guide assumes a Rook cluster as explained in the [Quickstart](ceph-quickstart.md).
This guide assumes a Rook cluster as explained in the [Quickstart](quickstart.md).

## Provision Storage

2 changes: 1 addition & 1 deletion Documentation/ceph-client-crd.md
@@ -44,4 +44,4 @@ spec:

### Prerequisites

This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](ceph-quickstart.md)
This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](quickstart.md)
2 changes: 1 addition & 1 deletion Documentation/ceph-cluster-crd.md
@@ -451,7 +451,7 @@ Below are the settings for host-based cluster. This type of cluster can specify
* `config`: Device-specific config settings. See the [config settings](#osd-configuration-settings) below

Host-based cluster only supports raw device and partition. Be sure to see the
[Ceph quickstart doc prerequisites](ceph-quickstart.md#prerequisites) for additional considerations.
[Ceph quickstart doc prerequisites](quickstart.md#prerequisites) for additional considerations.
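
For illustration, a host-based `storage` section might look like the following sketch (the node and device names here are assumptions for the example):

```yaml
storage:
  useAllNodes: false
  nodes:
    - name: "node1"
      devices:
        - name: "sdb"
          config:
            osdsPerDevice: "1"
```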

Below are the settings for a PVC-based cluster.

4 changes: 2 additions & 2 deletions Documentation/ceph-common-issues.md
@@ -92,7 +92,7 @@ If you see that the PVC remains in **pending** state, see the topic [PVCs stay i

### Possible Solutions Summary

* `rook-ceph-agent` pod is in a `CrashLoopBackOff` status because it cannot deploy its driver on a read-only filesystem: [Flexvolume configuration pre-reqs](./ceph-prerequisites.md#ceph-flexvolume-configuration)
* `rook-ceph-agent` pod is in a `CrashLoopBackOff` status because it cannot deploy its driver on a read-only filesystem: [Flexvolume configuration pre-reqs](./prerequisites.md#ceph-flexvolume-configuration)
* Persistent Volume and/or Claim are failing to be created and bound: [Volume Creation](#volume-creation)
* `rook-ceph-agent` pod is failing to mount and format the volume: [Rook Agent Mounting](#volume-mounting)

@@ -165,7 +165,7 @@ First, clean up the agent deployment with:
kubectl -n rook-ceph delete daemonset rook-ceph-agent
```

Once the `rook-ceph-agent` pods are gone, **follow the instructions in the [Flexvolume configuration pre-reqs](./ceph-prerequisites.md#ceph-flexvolume-configuration)** to ensure a good value for `--volume-plugin-dir` has been provided to the Kubelet.
Once the `rook-ceph-agent` pods are gone, **follow the instructions in the [Flexvolume configuration pre-reqs](./prerequisites.md#ceph-flexvolume-configuration)** to ensure a good value for `--volume-plugin-dir` has been provided to the Kubelet.
After that has been configured, and the Kubelet has been restarted, start the agent pods up again by restarting `rook-operator`:

```console
2 changes: 1 addition & 1 deletion Documentation/ceph-filesystem.md
@@ -13,7 +13,7 @@ This example runs a shared filesystem for the [kube-registry](https://github.com

## Prerequisites

This guide assumes you have created a Rook cluster as explained in the main [Kubernetes guide](ceph-quickstart.md)
This guide assumes you have created a Rook cluster as explained in the main [Kubernetes guide](quickstart.md)

### Multiple Filesystems Support

3 changes: 1 addition & 2 deletions Documentation/ceph-fs-mirror-crd.md
@@ -5,7 +5,7 @@ indent: true
---
{% include_relative branch.liquid %}

This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](ceph-quickstart.md)
This guide assumes you have created a Rook cluster as explained in the main [Quickstart guide](quickstart.md)

# Ceph FilesystemMirror CRD

@@ -97,4 +97,3 @@ If any setting is unspecified, a suitable default will be used automatically.
* `labels`: Key value pair list of labels to add.
* `resources`: The resource requirements for the cephfs-mirror pods.
* `priorityClassName`: The priority class to set on the cephfs-mirror pods.

2 changes: 1 addition & 1 deletion Documentation/ceph-object-multisite.md
@@ -22,7 +22,7 @@ To review core multisite concepts please read the [ceph-multisite design overvie

## Prerequisites

This guide assumes a Rook cluster as explained in the [Quickstart](ceph-quickstart.md).
This guide assumes a Rook cluster as explained in the [Quickstart](quickstart.md).

# Creating Object Multisite

2 changes: 1 addition & 1 deletion Documentation/ceph-object.md
@@ -10,7 +10,7 @@ Object storage exposes an S3 API to the storage cluster for applications to put

## Prerequisites

This guide assumes a Rook cluster as explained in the [Quickstart](ceph-quickstart.md).
This guide assumes a Rook cluster as explained in the [Quickstart](quickstart.md).

## Configure an Object Store

6 changes: 3 additions & 3 deletions Documentation/ceph-osd-mgmt.md
@@ -33,7 +33,7 @@ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-t

## Add an OSD

The [QuickStart Guide](ceph-quickstart.md) will provide the basic steps to create a cluster and start some OSDs. For more details on the OSD
The [QuickStart Guide](quickstart.md) will provide the basic steps to create a cluster and start some OSDs. For more details on the OSD
settings also see the [Cluster CRD](ceph-cluster-crd.md) documentation. If you are not seeing OSDs created, see the [Ceph Troubleshooting Guide](ceph-common-issues.md).

To add more OSDs, Rook will automatically watch for new nodes and devices being added to your cluster.
@@ -70,10 +70,10 @@ If you are using `useAllDevices: true`, no change to the CR is necessary.
removal steps in order to prevent Rook from detecting the old OSD and trying to re-create it before
the disk is wiped or removed.**

To stop the Rook Operator, run
To stop the Rook Operator, run
`kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0`.

You must perform steps below to (1) purge the OSD and either (2.a) delete the underlying data or
You must perform steps below to (1) purge the OSD and either (2.a) delete the underlying data or
(2.b) replace the disk before starting the Rook Operator again.

Once you have done that, you can start the Rook operator again with
