Commit 7391c10

Merge pull request #2354 from travisn/doc-updates

Restructure docs for storage provider view

travisn committed Dec 8, 2018
2 parents 710e9b7 + 0524084
Showing 46 changed files with 124 additions and 131 deletions.
4 changes: 2 additions & 2 deletions Documentation/README.md
@@ -19,7 +19,7 @@ High-level Storage Provider design documents:

| Storage Provider | Status | Description |
|---|---|---|
-| [Ceph](ceph-design.md) | Beta | Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared file systems with years of production deployments. |
-| [EdgeFS](edgefs-design.md) | Alpha | EdgeFS is high-performance and low-latency object storage system with Geo-Transparent data access via standard protocols (S3, NFS, iSCSI) from on-prem, private/public clouds or small footprint edge (IoT) devices. |
+| [Ceph](ceph-storage.md) | Beta | Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared file systems with years of production deployments. |
+| [EdgeFS](edgefs-storage.md) | Alpha | EdgeFS is a high-performance, low-latency object storage system with Geo-Transparent data access via standard protocols (S3, NFS, iSCSI) from on-prem, private/public clouds, or small-footprint edge (IoT) devices. |

Low-level design documentation for the supported storage systems is collected in the [design docs](https://github.com/rook/rook/tree/master/design) section.
4 changes: 2 additions & 2 deletions Documentation/advanced-configuration.md
@@ -1,6 +1,6 @@
---
title: Advanced Configuration
-weight: 76
+weight: 113
indent: true
---

@@ -22,7 +22,7 @@ storage cluster.
## Prerequisites

Most of the examples make use of the `ceph` client command. A quick way to use
-the Ceph client suite is from a [Rook Toolbox container](toolbox.md).
+the Ceph client suite is from a [Rook Toolbox container](ceph-toolbox.md).

The Kubernetes based examples assume Rook OSD pods are in the `rook-ceph` namespace.
If you run them in a different namespace, modify `kubectl -n rook-ceph [...]` to fit
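
A quick illustrative sketch of that namespace substitution (the `app=rook-ceph-osd` label is an assumption for this example, not stated above):

```bash
# List Rook OSD pods in a custom namespace instead of the default rook-ceph
kubectl -n my-rook-namespace get pods -l app=rook-ceph-osd
```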
12 changes: 5 additions & 7 deletions Documentation/cassandra-cluster-crd.md
@@ -1,7 +1,6 @@
---
-title: Cassandra Cluster
-weight: 36
-indent: true
+title: Cassandra Cluster CRD
+weight: 50
---

# Cassandra Cluster CRD
@@ -16,7 +15,7 @@ This page will explain all the available configuration options on the Cassandra
```yaml
apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
-metadata: 
+metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
@@ -29,12 +28,12 @@ spec:
    - name: us-east-1a
      members: 3
      storage:
-        volumeClaimTemplates: 
+        volumeClaimTemplates:
        - metadata:
            name: rook-cassandra-data
          spec:
            storageClassName: my-storage-class
-            resources: 
+            resources:
              requests:
                storage: 200Gi
      resources:
@@ -85,4 +84,3 @@ Rack Settings:
* [`podAffinity`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
* [`podAntiAffinity`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
* [`tolerations`](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)

2 changes: 1 addition & 1 deletion Documentation/block.md → Documentation/ceph-block.md
@@ -1,6 +1,6 @@
---
title: Block Storage
-weight: 22
+weight: 21
indent: true
---
{% assign url = page.url | split: '/' %}
8 changes: 4 additions & 4 deletions Documentation/ceph-cluster-crd.md
@@ -1,6 +1,6 @@
---
-title: Ceph Cluster
-weight: 31
+title: Cluster CRD
+weight: 26
indent: true
---

@@ -86,8 +86,8 @@ If these settings are changed in the CRD the operator will update the number of

To change the defaults that the operator uses to determine mon health and whether to failover a mon, the following environment variables can be changed in [operator.yaml](https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator.yaml), as sketched below. The intervals should be small enough that you have confidence the mons will maintain quorum, while also being
long enough to tolerate network blips without failing mons over too often.
-- ROOK_MON_HEALTHCHECK_INTERVAL: The frequency with which to check if mons are in quorum (default is 45 seconds)
-- ROOK_MON_OUT_TIMEOUT: The interval to wait before marking a mon as "out" and starting a new mon to replace it in the quroum (default is 5 minutes)
+- `ROOK_MON_HEALTHCHECK_INTERVAL`: The frequency with which to check if mons are in quorum (default is 45 seconds)
+- `ROOK_MON_OUT_TIMEOUT`: The interval to wait before marking a mon as "out" and starting a new mon to replace it in the quorum (default is 5 minutes)
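
A minimal sketch of how these variables might be set on the operator deployment in operator.yaml (the surrounding deployment structure is assumed for illustration; the values shown mirror the documented defaults):

```yaml
# Sketch only: env entries on the rook-ceph-operator container
spec:
  template:
    spec:
      containers:
      - name: rook-ceph-operator
        env:
        # How often to check that the mons are in quorum (default 45 seconds)
        - name: ROOK_MON_HEALTHCHECK_INTERVAL
          value: "45s"
        # How long to wait before failing over an unhealthy mon (default 5 minutes)
        - name: ROOK_MON_OUT_TIMEOUT
          value: "5m"
```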

### Node Settings
In addition to the cluster level settings specified above, each individual node can also specify configuration to override the cluster level settings and defaults.
4 changes: 2 additions & 2 deletions Documentation/ceph-dashboard.md
@@ -1,6 +1,6 @@
---
-title: Ceph Dashboard
-weight: 46
+title: Dashboard
+weight: 24
indent: true
---

4 changes: 2 additions & 2 deletions Documentation/ceph-filesystem-crd.md
@@ -1,6 +1,6 @@
---
-title: Ceph Shared File System
-weight: 35
+title: Shared File System CRD
+weight: 30
indent: true
---
{% assign url = page.url | split: '/' %}
@@ -1,6 +1,6 @@
---
title: Shared File System
-weight: 26
+weight: 23
indent: true
---

@@ -56,7 +56,7 @@ rook-ceph-mds-myfs-7d59fdfcf4-h8kw9 1/1 Running 0 12s
rook-ceph-mds-myfs-7d59fdfcf4-kgkjp 1/1 Running 0 12s
```

-To see detailed status of the file system, start and connect to the [Rook toolbox](toolbox.md). A new line will be shown with `ceph status` for the `mds` service. In this example, there is one active instance of MDS which is up, with one MDS instance in `standby-replay` mode in case of failover.
+To see detailed status of the file system, start and connect to the [Rook toolbox](ceph-toolbox.md). A new line will be shown with `ceph status` for the `mds` service. In this example, there is one active instance of MDS which is up, with one MDS instance in `standby-replay` mode in case of failover.

```bash
$ ceph status
2 changes: 1 addition & 1 deletion Documentation/ceph-monitoring.md
@@ -1,6 +1,6 @@
---
title: Monitoring
-weight: 27
+weight: 25
indent: true
---

4 changes: 2 additions & 2 deletions Documentation/ceph-object-store-crd.md
@@ -1,6 +1,6 @@
---
-title: Ceph Object Store
-weight: 33
+title: Object Store CRD
+weight: 28
indent: true
---

4 changes: 2 additions & 2 deletions Documentation/ceph-object-store-user-crd.md
@@ -1,6 +1,6 @@
---
-title: Ceph Object Store User
-weight: 34
+title: Object Store User CRD
+weight: 29
indent: true
---

18 changes: 13 additions & 5 deletions Documentation/object.md → Documentation/ceph-object.md
@@ -1,6 +1,6 @@
---
title: Object Storage
-weight: 24
+weight: 22
indent: true
---

@@ -77,9 +77,9 @@ When the object store user is created the Rook operator will create the RGW user
kubectl create -f object-user.yaml

# To confirm the object store user is configured, describe the secret
-kubectl -n rook-ceph describe secret rook-ceph-object-user-my-user
+kubectl -n rook-ceph describe secret rook-ceph-object-user-my-store-my-user

-Name: rook-ceph-object-user-my-user
+Name: rook-ceph-object-user-my-store-my-user
Namespace: rook-ceph
Labels: app=rook-ceph-rgw
rook_cluster=rook-ceph
@@ -95,13 +95,19 @@ SecretKey: 40 bytes
```

The AccessKey and SecretKey data fields can be mounted in a pod as environment variables. More information on consuming
-kubernetes secrets can be found on [The kubernetes website](https://kubernetes.io/docs/concepts/configuration/secret/)
+kubernetes secrets can be found in the [K8s secret documentation](https://kubernetes.io/docs/concepts/configuration/secret/)

To directly retrieve the secrets:
```bash
kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml | grep AccessKey | awk '{print $2}' | base64 --decode
kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user -o yaml | grep SecretKey | awk '{print $2}' | base64 --decode
```
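
As noted above, these fields can also be exposed to a pod as environment variables. A hypothetical container snippet (the surrounding pod spec is assumed; the secret name and key names come from the `describe` output above):

```yaml
# Hypothetical snippet: consume the object user's keys as env vars in a container
env:
- name: AWS_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      name: rook-ceph-object-user-my-store-my-user
      key: AccessKey
- name: AWS_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: rook-ceph-object-user-my-store-my-user
      key: SecretKey
```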

## Consume the Object Storage

Use an S3 compatible client to create a bucket in the object store.

-This section will allow you to test connecting to the object store and uploading and downloading from it. Run the following commands after you have connected to the [Rook toolbox](toolbox.md).
+This section will allow you to test connecting to the object store and uploading and downloading from it. Run the following commands after you have connected to the [Rook toolbox](ceph-toolbox.md).

### Install s3cmd

@@ -133,6 +139,8 @@ export AWS_ACCESS_KEY_ID=XEZDB3UJ6X7HVBE7X7MA
export AWS_SECRET_ACCESS_KEY=7yGIZON7EhFORz0I40BFniML36D2rl8CQQ5kXU6l
```

The access key and secret key can be retrieved as described in the section above on [creating a user](#create-a-user).

### Create a bucket

Now that the user connection variables were set above, we can proceed to perform operations such as creating buckets.
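
For example, with `s3cmd` (a sketch; the `$AWS_HOST` endpoint variable and the bucket name are assumptions for illustration):

```bash
# Sketch: create a bucket through the RGW endpoint
s3cmd mb --no-ssl --host=${AWS_HOST} --host-bucket= s3://rookbucket
```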
4 changes: 2 additions & 2 deletions Documentation/ceph-pool-crd.md
@@ -1,6 +1,6 @@
---
-title: Ceph Block Pool
-weight: 32
+title: Block Pool CRD
+weight: 27
indent: true
---

8 changes: 4 additions & 4 deletions Documentation/ceph-quickstart.md
@@ -256,17 +256,17 @@ rook-ceph-osd-prepare-mynode-ftt57 0/1 Completed 0 1m
# Storage

For a walkthrough of the three types of storage exposed by Rook, see the guides for:
-- **[Block](block.md)**: Create block storage to be consumed by a pod
-- **[Object](object.md)**: Create an object store that is accessible inside or outside the Kubernetes cluster
-- **[Shared File System](filesystem.md)**: Create a file system to be shared across multiple pods
+- **[Block](ceph-block.md)**: Create block storage to be consumed by a pod
+- **[Object](ceph-object.md)**: Create an object store that is accessible inside or outside the Kubernetes cluster
+- **[Shared File System](ceph-filesystem.md)**: Create a file system to be shared across multiple pods

# Ceph Dashboard

Ceph has a dashboard in which you can view the status of your cluster. Please see the [dashboard guide](ceph-dashboard.md) for more details.

# Tools

-We have created a toolbox container that contains the full suite of Ceph clients for debugging and troubleshooting your Rook cluster. Please see the [toolbox readme](toolbox.md) for setup and usage information. Also see our [advanced configuration](advanced-configuration.md) document for helpful maintenance and tuning examples.
+We have created a toolbox container that contains the full suite of Ceph clients for debugging and troubleshooting your Rook cluster. Please see the [toolbox readme](ceph-toolbox.md) for setup and usage information. Also see our [advanced configuration](advanced-configuration.md) document for helpful maintenance and tuning examples.

# Monitoring

@@ -1,12 +1,11 @@
---
-title: Ceph Rook Operator Design
-weight: 1
-indent: true
+title: Ceph Storage
+weight: 20
---

-# Ceph Rook Operator Design
+# Ceph Storage

-Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared file systems with years of production deployments.
+Ceph is a highly scalable distributed storage solution for **block storage**, **object storage**, and **shared file systems** with years of production deployments.

## Design

6 changes: 3 additions & 3 deletions Documentation/ceph-teardown.md
@@ -1,6 +1,6 @@
---
-title: Ceph Cleanup
-weight: 4
+title: Cleanup
+weight: 39
indent: true
---

@@ -19,7 +19,7 @@ If you are tearing down a cluster frequently for development purposes
## Delete the Block and File artifacts
First you will need to clean up the resources created on top of the Rook cluster.

-These commands will clean up the resources from the [block](block.md#teardown) and [file](filesystem.md#teardown) walkthroughs (unmount volumes, delete volume claims, etc). If you did not complete those parts of the walkthrough, you can skip these instructions:
+These commands will clean up the resources from the [block](ceph-block.md#teardown) and [file](ceph-filesystem.md#teardown) walkthroughs (unmount volumes, delete volume claims, etc). If you did not complete those parts of the walkthrough, you can skip these instructions:
```console
kubectl delete -f ../wordpress.yaml
kubectl delete -f ../mysql.yaml
2 changes: 1 addition & 1 deletion Documentation/toolbox.md → Documentation/ceph-toolbox.md
@@ -1,6 +1,6 @@
---
title: Toolbox
-weight: 72
+weight: 111
indent: true
---

8 changes: 4 additions & 4 deletions Documentation/tools.md → Documentation/ceph-tools.md
@@ -1,12 +1,12 @@
---
-title: Tools
-weight: 70
+title: Ceph Tools
+weight: 110
---

-# Tools
+# Ceph Tools

Rook provides a number of tools to help you manage your cluster.
-- [Toolbox](toolbox.md): A pod from which you can run all of the tools to troubleshoot the storage cluster
+- [Toolbox](ceph-toolbox.md): A pod from which you can run all of the tools to troubleshoot the storage cluster
- [Direct Tools](direct-tools.md): Run ceph commands to test directly mounting block and file storage
- [Advanced Configuration](advanced-configuration.md): Tips and tricks for configuring your cluster
- [Common Issues](common-issues.md): Common issues and their potential solutions
7 changes: 4 additions & 3 deletions Documentation/upgrade.md → Documentation/ceph-upgrade.md
@@ -1,6 +1,7 @@
---
-title: Ceph Upgrades
-weight: 60
+title: Upgrades
+weight: 38
+indent: true
---

# Ceph Upgrades
@@ -111,7 +112,7 @@ Before we begin the upgrade process, let's first review some ways that you can
your cluster, ensuring that the upgrade is going smoothly after each step. Most of the health
verification checks for your cluster during the upgrade process can be performed with the Rook
toolbox. For more information about how to run the toolbox, please visit the
-[Rook toolbox readme](./toolbox.md#running-the-toolbox-in-kubernetes).
+[Rook toolbox readme](./ceph-toolbox.md#running-the-toolbox-in-kubernetes).

### Pods all Running
In a healthy Rook cluster, the operator, the agents and all Rook namespace pods should be in the
5 changes: 2 additions & 3 deletions Documentation/cockroachdb-cluster-crd.md
@@ -1,7 +1,6 @@
---
-title: CockroachDB Cluster
-weight: 37
-indent: true
+title: CockroachDB Cluster CRD
+weight: 60
---

# CockroachDB Cluster CRD
10 changes: 5 additions & 5 deletions Documentation/common-issues.md
@@ -1,6 +1,6 @@
---
title: Common Issues
-weight: 78
+weight: 114
indent: true
---

@@ -47,7 +47,7 @@ Kubernetes status is the first line of investigating when something goes wrong
After you verify the basic health of the running pods, next you will want to run Ceph tools to check the status of the storage components. There are two ways to run the Ceph tools, either in the Rook toolbox or inside other Rook pods that are already running.

### Tools in the Rook Toolbox
-The [rook-ceph-tools pod](./toolbox.md) provides a simple environment to run Ceph tools. Once the pod is up and running, connect to the pod to execute Ceph commands to evaluate that current state of the cluster.
+The [rook-ceph-tools pod](./ceph-toolbox.md) provides a simple environment to run Ceph tools. Once the pod is up and running, connect to the pod to execute Ceph commands to evaluate the current state of the cluster.
```bash
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
```
@@ -213,7 +213,7 @@ If `dmesg` shows something similar to below, then it means you have an old kernel
libceph: mon2 10.205.92.13:6790 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
```
If `uname -a` shows that you have a kernel version older than `3.15`, you'll need to perform **one** of the following:
-* Disable some Ceph features by starting the [rook toolbox](./toolbox.md) and running `ceph osd crush tunables bobtail`
+* Disable some Ceph features by starting the [rook toolbox](./ceph-toolbox.md) and running `ceph osd crush tunables bobtail`
* Upgrade your kernel to `3.15` or later.

### Filesystem Mounting
@@ -240,7 +240,7 @@ We want to help you get your storage working and learn from those lessons
* One or more MONs are restarting periodically

## Investigation
-Create a [rook-ceph-tools pod](./toolbox.md) to investigate the current state of CEPH. Here is an example of what one might see. In this case the `ceph status` command would just hang so a CTRL-C needed to be sent.
+Create a [rook-ceph-tools pod](./ceph-toolbox.md) to investigate the current state of Ceph. Here is an example of what one might see. In this case the `ceph status` command would just hang, so a `CTRL-C` had to be sent.

```console
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
@@ -500,4 +500,4 @@ Rebooting the system to use the new kernel, this issue should be fixed
The only solution to this problem is to upgrade your kernel to `4.7` or higher.
This is due to a mount flag added in kernel version `4.7` which allows choosing the filesystem by name.

-For additional info on the kernel version requirement for multiple shared filesystems (CephFS), see [Filesystem - Kernel version requirement](filesystem.md#kernel-version-requirement).
+For additional info on the kernel version requirement for multiple shared filesystems (CephFS), see [Filesystem - Kernel version requirement](ceph-filesystem.md#kernel-version-requirement).
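
For illustration, on a `4.7`+ kernel a specific CephFS filesystem can be selected by name with the `mds_namespace` mount option. A sketch only; the monitor address, secret, and filesystem name below are placeholders:

```bash
# Sketch (kernel >= 4.7): mount a specific CephFS filesystem by name
mount -t ceph 10.0.0.1:6789:/ /mnt/myfs -o name=admin,secret=<admin-key>,mds_namespace=myfs
```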
2 changes: 1 addition & 1 deletion Documentation/container-linux.md
@@ -1,6 +1,6 @@
---
title: Container Linux
-weight: 80
+weight: 115
indent: true
---
# Using the Container Linux Update Operator with Rook