Merge pull request #8798 from travisn/master-doc-links
docs: Update master doc links to latest
travisn committed Sep 22, 2021
2 parents 0ec1303 + 0591753, commit 9fa8c5d
Showing 22 changed files with 30 additions and 30 deletions.
.github/ISSUE_TEMPLATE.md: 1 addition & 1 deletion
@@ -45,4 +45,4 @@ Read [Github documentation if you need help](https://help.github.com/en/articles
* Storage backend version (e.g. for ceph do `ceph -v`):
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/master/ceph-toolbox.html)):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/latest/ceph-toolbox.html)):
.github/ISSUE_TEMPLATE/bug_report.md: 1 addition & 1 deletion
@@ -36,4 +36,4 @@ Read [Github documentation if you need help](https://help.github.com/en/articles
* Storage backend version (e.g. for ceph do `ceph -v`):
* Kubernetes version (use `kubectl version`):
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/master/ceph-toolbox.html)):
* Storage backend status (e.g. for Ceph use `ceph health` in the [Rook Ceph toolbox](https://rook.io/docs/rook/latest/ceph-toolbox.html)):
.github/PULL_REQUEST_TEMPLATE.md: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
<!-- Please take a look at our [Contributing](https://rook.io/docs/rook/master/development-flow.html)
<!-- Please take a look at our [Contributing](https://rook.io/docs/rook/latest/development-flow.html)
documentation before submitting a Pull Request!
Thank you for contributing to Rook! -->

@@ -9,10 +9,10 @@ Resolves #

**Checklist:**

- [ ] **Commit Message Formatting**: Commit titles and messages follow guidelines in the [developer guide](https://rook.io/docs/rook/master/development-flow.html#commit-structure).
- [ ] **Commit Message Formatting**: Commit titles and messages follow guidelines in the [developer guide](https://rook.io/docs/rook/latest/development-flow.html#commit-structure).
- [ ] **Skip Tests for Docs**: Add the flag for skipping the build if this is only a documentation change. See [here](https://github.com/rook/rook/blob/master/INSTALL.md#skip-ci) for the flag.
- [ ] **Skip Unrelated Tests**: Add a flag to run tests for a specific storage provider. See [test options](https://github.com/rook/rook/blob/master/INSTALL.md#test-storage-provider).
- [ ] Reviewed the developer guide on [Submitting a Pull Request](https://rook.io/docs/rook/master/development-flow.html#submitting-a-pull-request)
- [ ] Reviewed the developer guide on [Submitting a Pull Request](https://rook.io/docs/rook/latest/development-flow.html#submitting-a-pull-request)
- [ ] Documentation has been updated, if necessary.
- [ ] Unit tests have been added, if necessary.
- [ ] Integration tests have been added, if necessary.
.github/workflows/commitlint.yml: 1 addition & 1 deletion
@@ -23,4 +23,4 @@ jobs:
- uses: wagoid/commitlint-github-action@v2.0.3
with:
configFile: './.commitlintrc.json'
helpURL: https://rook.io/docs/rook/master/development-flow.html#commit-structure
helpURL: https://rook.io/docs/rook/latest/development-flow.html#commit-structure
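This workflow lints commit messages against the commit-structure guideline linked in `helpURL`. As a rough sketch of a conforming commit (the `component: summary` subject style mirrors this PR's own "docs: ..." title; the `--signoff` flag is an assumption based on Rook's usual DCO requirement, not something shown in this diff):

```console
git commit --signoff \
  -m "docs: update doc links to latest" \
  -m "Explain in the body what changed and why."
```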
.mergify.yml: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ pull_request_rules:
- conflict
actions:
comment:
message: This pull request has merge conflicts that must be resolved before it can be merged. @{{author}} please rebase it. https://rook.io/docs/rook/master/development-flow.html#updating-your-fork
message: This pull request has merge conflicts that must be resolved before it can be merged. @{{author}} please rebase it. https://rook.io/docs/rook/latest/development-flow.html#updating-your-fork

# automerge on master only under certain strict conditions
- name: automerge merge master with specific label and approvals with code change
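The conflict comment above sends authors to the fork-update steps in the development flow. A typical sequence looks roughly like this (a sketch that assumes a fork with an `upstream` remote pointing at rook/rook; the branch name is a placeholder):

```console
git fetch upstream
git rebase upstream/master
git push --force-with-lease origin my-feature-branch
```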
Documentation/ceph-advanced-configuration.md: 4 additions & 4 deletions
@@ -22,7 +22,7 @@ storage cluster.
* [OSD Dedicated Network](#osd-dedicated-network)
* [Phantom OSD Removal](#phantom-osd-removal)
* [Change Failure Domain](#change-failure-domain)
* [Auto Expansion of OSDs](#auto-expansion-of-OSDs)
* [Auto Expansion of OSDs](#auto-expansion-of-osds)

## Prerequisites

@@ -467,7 +467,7 @@ Two changes are necessary to the configuration to enable this capability:

### Use hostNetwork in the rook ceph cluster configuration

Enable the `hostNetwork` setting in the [Ceph Cluster CRD configuration](https://rook.io/docs/rook/master/ceph-cluster-crd.html#samples).
Enable the `hostNetwork` setting in the [Ceph Cluster CRD configuration](ceph-cluster-crd.md#samples).
For example,

```yaml
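# Editor's sketch (an assumption, not text from this diff): in recent Rook v1.x
# clusters, host networking is enabled through the CephCluster network settings.
network:
  provider: host
```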
@@ -606,13 +606,13 @@ Exactly the same approach can be used to change from `host` back to `osd`.

Run the following script to auto-grow the size of OSDs on a PVC-based Rook-Ceph cluster whenever the OSDs have reached the storage near-full threshold.
```console
tests/scripts/auto-grow-storage.sh size --max maxSize --growth-rate percent
tests/scripts/auto-grow-storage.sh size --max maxSize --growth-rate percent
```
> growth-rate percentage represents the percent increase you want in the OSD capacity, and maxSize represents the maximum disk size.
For example, if you need to increase the size of the OSDs by 30% and the maximum disk size is 1Ti:
```console
./auto-grow-storage.sh size --max 1Ti --growth-rate 30
./auto-grow-storage.sh size --max 1Ti --growth-rate 30
```

### To scale OSDs Horizontally
Documentation/development-environment.md: 1 addition & 1 deletion
@@ -174,7 +174,7 @@ and you can use that token to log into the UI at http://localhost:8001/ui.

Everything should happen on the host: your development environment will reside on the host machine, NOT inside the virtual machines running the Kubernetes cluster.

Now, please refer to [https://rook.io/docs/rook/master/development-flow.html](https://rook.io/docs/rook/master/development-flow.html) to setup your development environment (go, git etc).
Now, please refer to the [development flow](development-flow.md) to setup your development environment (go, git etc).

At this stage, Rook should be cloned on your host.

Documentation/pre-reqs.md: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Ceph requires a Linux kernel built with the RBD module. Many Linux distributions
For example, the GKE Container-Optimised OS (COS) does not have RBD.

You can test your Kubernetes nodes by running `modprobe rbd`.
If it says 'not found', you may have to [rebuild your kernel](https://rook.io/docs/rook/master/common-issues.html#rook-agent-rbd-module-missing-error)
If it says 'not found', you may have to [rebuild your kernel](common-issues.md#rook-agent-rbd-module-missing-error)
or choose a different Linux distribution.
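A quick way to check a node before switching distributions (a sketch; it assumes shell access to the node and the standard `lsmod`/`modprobe` tools):

```console
# Load the rbd module if it is not already present
lsmod | grep rbd || sudo modprobe rbd
```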

### CephFS
Documentation/storage-providers.md: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ Rook provides the following framework to assist storage providers in building an
each storage provider has its own repo.
* Helm charts are published to [charts.rook.io](https://charts.rook.io/release)
* Documentation for the storage provider is to be written by the storage provider
members. The build process will publish the documentation to the [Rook website](https://rook.github.io/docs/rook/master/).
members. The build process will publish the documentation to the [Rook website](https://rook.github.io/docs/rook/latest/).
* All storage providers are added to the [Rook.io website](https://rook.io/)
* A great Slack community is available where you can communicate amongst developers and users
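Since the charts are served from charts.rook.io/release as noted above, consuming them is a standard Helm repository add; the `rook-release` alias below is just a conventional name, not something specified here:

```console
helm repo add rook-release https://charts.rook.io/release
helm repo update
```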

INSTALL.md: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ On every commit to PR and master the CI will build, run unit tests, and run inte
If the build is for master or a release, the build will also be published to
[dockerhub.com](https://cloud.docker.com/u/rook/repository/list).

> Note that if the pull request title follows Rook's [contribution guidelines](https://rook.io/docs/rook/master/development-flow.html#commit-structure), the CI will automatically run the appropriate test scenario. For example if a pull request title is "ceph: add a feature", then the tests for the Ceph storage provider will run. Similarly, tests will only run for a single provider with the "cassandra:" and "nfs:" prefixes.
> Note that if the pull request title follows Rook's [contribution guidelines](https://rook.io/docs/rook/latest/development-flow.html#commit-structure), the CI will automatically run the appropriate test scenario. For example if a pull request title is "ceph: add a feature", then the tests for the Ceph storage provider will run. Similarly, tests will only run for a single provider with the "cassandra:" and "nfs:" prefixes.
## Building for other platforms

README.md: 2 additions & 2 deletions
@@ -24,7 +24,7 @@ Rook is hosted by the [Cloud Native Computing Foundation](https://cncf.io) (CNCF

## Getting Started and Documentation

For installation, deployment, and administration, see our [Documentation](https://rook.github.io/docs/rook/master).
For installation, deployment, and administration, see our [Documentation](https://rook.github.io/docs/rook/latest).

## Contributing

@@ -80,7 +80,7 @@ More details about API versioning and status in Kubernetes can be found on the K
| ---- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | ------ |
| Ceph | [Ceph](https://ceph.com/) is a distributed storage system that provides file, block and object storage and is deployed in large scale production clusters. | ceph.rook.io/v1 | Stable |

This repo is for the Ceph storage provider. The [Cassandra](https://github.com/rook/cassandra) and [NFS](https://github.com/rook/nfs) storage providers moved to a separate repo to allow for each [storage provider](https://rook.github.io/docs/rook/master/storage-providers.html) to have an independent development and release schedule.
This repo is for the Ceph storage provider. The [Cassandra](https://github.com/rook/cassandra) and [NFS](https://github.com/rook/nfs) storage providers moved to a separate repo to allow for each [storage provider](https://rook.github.io/docs/rook/latest/storage-providers.html) to have an independent development and release schedule.

### Official Releases

cluster/charts/rook-ceph-cluster/templates/NOTES.txt: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace {{ .Release.Namespace }} get cephcluster

Visit https://rook.github.io/docs/rook/master/ceph-cluster-crd.html for more information about the Ceph CRD.
Visit https://rook.github.io/docs/rook/latest/ceph-cluster-crd.html for more information about the Ceph CRD.

Important Notes:
- You can only deploy a single cluster per namespace
cluster/charts/rook-ceph-cluster/values.yaml: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ cephClusterSpec:
# Whether or not upgrade should continue even if a check fails
# This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
# Use at your OWN risk
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/master/ceph-upgrade.html#ceph-version-upgrades
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/latest/ceph-upgrade.html#ceph-version-upgrades
skipUpgradeChecks: false

# Whether or not continue if PGs are not clean during an upgrade
cluster/charts/rook-ceph/templates/NOTES.txt: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
The Rook Operator has been installed. Check its status by running:
kubectl --namespace {{ .Release.Namespace }} get pods -l "app=rook-ceph-operator"

Visit https://rook.io/docs/rook/master for instructions on how to create and configure Rook clusters
Visit https://rook.io/docs/rook/latest for instructions on how to create and configure Rook clusters

Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
cluster/charts/rook-ceph/values.yaml: 2 additions & 2 deletions
@@ -13,7 +13,7 @@ crds:
# managed independently with cluster/examples/kubernetes/ceph/crds.yaml.
# **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED.
# If the CRDs are deleted in this case, see the disaster recovery guide to restore them.
# https://rook.github.io/docs/rook/master/ceph-disaster-recovery.html#restoring-crds-after-deletion
# https://rook.github.io/docs/rook/latest/ceph-disaster-recovery.html#restoring-crds-after-deletion
enabled: true

resources:
@@ -321,7 +321,7 @@ allowMultipleFilesystems: false
# effect: NoSchedule
# nodeAffinity: key1=value1,value2; key2=value3
# mountSecurityMode: Any
## For information on FlexVolume path, please refer to https://rook.io/docs/rook/master/flexvolume.html
## For information on FlexVolume path, please refer to https://rook.io/docs/rook/latest/flexvolume.html
# flexVolumeDirPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
# libModulesDirPath: /lib/modules
# mounts: mount1=/host/path:/container/path,/host/path2:/container/path2

@@ -22,7 +22,7 @@ spec:
stretchCluster:
# The ceph failure domain will be extracted from the label, which by default is the zone. The nodes running OSDs must have
# this label in order for the OSDs to be configured in the correct topology. For topology labels, see
# https://rook.github.io/docs/rook/master/ceph-cluster-crd.html#osd-topology.
# https://rook.github.io/docs/rook/latest/ceph-cluster-crd.html#osd-topology.
failureDomainLabel: topology.kubernetes.io/zone
# The sub failure domain is the secondary level at which the data will be placed to maintain data durability and availability.
# The default is "host", which means that each OSD must be on a different node and you would need at least two nodes per zone.
cluster/examples/kubernetes/ceph/cluster-stretched.yaml: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ spec:
stretchCluster:
# The ceph failure domain will be extracted from the label, which by default is the zone. The nodes running OSDs must have
# this label in order for the OSDs to be configured in the correct topology. For topology labels, see
# https://rook.github.io/docs/rook/master/ceph-cluster-crd.html#osd-topology.
# https://rook.github.io/docs/rook/latest/ceph-cluster-crd.html#osd-topology.
failureDomainLabel: topology.kubernetes.io/zone
# The sub failure domain is the secondary level at which the data will be placed to maintain data durability and availability.
# The default is "host", which means that each OSD must be on a different node and you would need at least two nodes per zone.
cluster/examples/kubernetes/ceph/cluster.yaml: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ spec:
# Whether or not upgrade should continue even if a check fails
# This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
# Use at your OWN risk
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/master/ceph-upgrade.html#ceph-version-upgrades
# To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/latest/ceph-upgrade.html#ceph-version-upgrades
skipUpgradeChecks: false
# Whether or not continue if PGs are not clean during an upgrade
continueUpgradeAfterChecksEvenIfNotHealthy: false
cluster/examples/kubernetes/ceph/operator-openshift.yaml: 1 addition & 1 deletion
@@ -138,7 +138,7 @@ data:

# Enable Ceph Kernel clients on kernel < 4.17 which support quotas for Cephfs
# If you disable the kernel client, your application may be disrupted during upgrade.
# See the upgrade guide: https://rook.io/docs/rook/master/ceph-upgrade.html
# See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"

# (Optional) policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
cluster/examples/kubernetes/ceph/operator.yaml: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ data:

# Enable cephfs kernel driver instead of ceph-fuse.
# If you disable the kernel client, your application may be disrupted during upgrade.
# See the upgrade guide: https://rook.io/docs/rook/master/ceph-upgrade.html
# See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
# NOTE! cephfs quota is not supported in kernel version < 4.17
CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
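The note above ties the kernel client choice to the node kernel version; checking it is a one-liner to run on each node (a sketch):

```console
uname -r   # CephFS quotas need a 4.17+ kernel per the note above
```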

design/README.md: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
# Rook Feature Proposal

Please follow the [design section guideline](https://rook.io/docs/rook/master/development-flow.html#design-document).
Please follow the [design section guideline](https://rook.io/docs/rook/latest/development-flow.html#design-document).
design/ceph/ceph-stretch-cluster.md: 2 additions & 2 deletions
@@ -54,7 +54,7 @@ The topology of the K8s cluster is to be determined by the admin, outside the sc
Rook will simply detect the topology labels that have been added to the nodes.

If the desired failure domain is a "zone", the `topology.kubernetes.io/zone` label should
be added to the nodes. Any of the [topology labels](https://rook.io/docs/rook/master/ceph-cluster-crd.html#osd-topology)
be added to the nodes. Any of the [topology labels](https://rook.io/docs/rook/latest/ceph-cluster-crd.html#osd-topology)
supported by OSDs can be used.
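Applying such a label uses the standard Kubernetes node-label command; the node and zone names below are placeholders rather than values from this design:

```console
kubectl label node my-node-1 topology.kubernetes.io/zone=zone-a
```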

In the minimum configuration, two nodes in each data zone would be labeled, while one
@@ -89,7 +89,7 @@ three zones must be listed and the arbiter zone is identified.
stretchCluster:
# The cluster is most commonly stretched over zones, but could also be stretched over
# another failure domain such as datacenter or region. Must be one of the labels
# used by OSDs as documented at https://rook.io/docs/rook/master/ceph-cluster-crd.html#osd-topology.
# used by OSDs as documented at https://rook.io/docs/rook/latest/ceph-cluster-crd.html#osd-topology.
failureDomainLabel: topology.kubernetes.io/zone
zones:
# There must be exactly three zones in this list, with one of them being the arbiter
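# (Editor's sketch with hypothetical zone names, not text from this diff; it follows
# the three-zone layout the comment above describes.)
- name: a
  arbiter: true
- name: b
- name: c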