
Merge pull request #9451 from BlaineEXE/update-release-1.8-to-v1.8.1
build: update examples and manifests for v1.8.1
BlaineEXE committed Dec 16, 2021
2 parents bd403a8 + d27303a commit f746b50
Showing 11 changed files with 23 additions and 23 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/canary-integration-test.yml
@@ -76,7 +76,7 @@ jobs:
sed -i 's/<OSD-IDs>/1/' deploy/examples/osd-purge.yaml
# the CI must force the deletion since we use replica 1 on 2 OSDs
sed -i 's/false/true/' deploy/examples/osd-purge.yaml
-sed -i 's|rook/ceph:master|rook/ceph:local-build|' deploy/examples/osd-purge.yaml
+sed -i 's|rook/ceph:v1.8.1|rook/ceph:local-build|' deploy/examples/osd-purge.yaml
kubectl -n rook-ceph create -f deploy/examples/osd-purge.yaml
toolbox=$(kubectl get pod -l app=rook-ceph-tools -n rook-ceph -o jsonpath='{.items[*].metadata.name}')
kubectl -n rook-ceph exec $toolbox -- ceph status
2 changes: 1 addition & 1 deletion Documentation/ceph-monitoring.md
@@ -38,7 +38,7 @@ With the Prometheus operator running, we can create a service monitor that will
From the root of your locally cloned Rook repo, go to the monitoring directory:

```console
-$ git clone --single-branch --branch v1.8.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.8.1 https://github.com/rook/rook.git
cd rook/deploy/examples/monitoring
```
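
The typical next step is to create the service monitor from that directory; a minimal sketch, assuming it ships a `service-monitor.yaml` as in the upstream examples:

```console
kubectl create -f service-monitor.yaml
```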

24 changes: 12 additions & 12 deletions Documentation/ceph-upgrade.md
@@ -266,7 +266,7 @@ Any pod that is using a Rook volume should also remain healthy:
## Rook Operator Upgrade Process

In the examples given in this guide, we will be upgrading a live Rook cluster running `v1.7.8` to
-the version `v1.8.0`. This upgrade should work from any official patch release of Rook v1.7 to any
+the version `v1.8.1`. This upgrade should work from any official patch release of Rook v1.7 to any
official patch release of v1.8.
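
Before starting, it can help to confirm which operator version is actually running; a hedged sketch, assuming the operator Deployment is named `rook-ceph-operator` as in the manifests below:

```sh
# Print the image currently used by the Rook operator Deployment
kubectl -n $ROOK_OPERATOR_NAMESPACE get deployment rook-ceph-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```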

**Rook releases from `master` are expressly unsupported.** It is strongly recommended that you use
@@ -291,7 +291,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).

Get the latest common resources manifests that contain the latest changes.
```sh
-git clone --single-branch --depth=1 --branch v1.8.0 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.8.1 https://github.com/rook/rook.git
cd rook/deploy/examples
```
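
From that directory, the updated common resources and CRDs are then applied; a hedged sketch, assuming the `common.yaml` and `crds.yaml` manifests referenced elsewhere in this commit:

```sh
kubectl apply -f common.yaml -f crds.yaml
```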

@@ -343,7 +343,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
When the operator is updated, it will proceed to update all of the Ceph daemons.

```sh
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.8.0
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.8.1
```

#### Admission controller
@@ -377,16 +377,16 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
```

As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.8.0`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.8.1`, the Ceph cluster's core components are fully updated.

>```
>Every 2.0s: kubectl -n rook-ceph get deployment -o j...
>
->rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.8.0
->rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.8.0
->rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.8.0
->rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.8.0
->rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.8.0
+>rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.8.1
+>rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.8.1
+>rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.8.1
+>rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.8.1
+>rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.8.1
>rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.7.8
>rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.7.8
>```
@@ -398,14 +398,14 @@ An easy check to see if the upgrade is totally finished is to check that there i
# kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
rook-version=v1.7.8
-rook-version=v1.8.0
+rook-version=v1.8.1
This cluster is finished:
-rook-version=v1.8.0
+rook-version=v1.8.1
```

### **5. Verify the updated cluster**

-At this point, your Rook operator should be running version `rook/ceph:v1.8.0`.
+At this point, your Rook operator should be running version `rook/ceph:v1.8.1`.

Verify the Ceph cluster's health using the [health verification section](#health-verification).
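
A minimal sketch of such a check, assuming the toolbox Deployment from `toolbox.yaml` is running:

```sh
# Query overall Ceph health from inside the toolbox pod
kubectl -n $ROOK_CLUSTER_NAMESPACE exec deploy/rook-ceph-tools -- ceph status
```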

2 changes: 1 addition & 1 deletion Documentation/quickstart.md
@@ -34,7 +34,7 @@ In order to configure the Ceph storage cluster, at least one of these local stor
A simple Rook cluster can be created with the following kubectl commands and [example manifests](https://github.com/rook/rook/blob/{{ branchName }}/deploy/examples).

```console
-$ git clone --single-branch --branch v1.8.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.8.1 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
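# Hedged sketch of a follow-up check (assumed, not part of this diff):
kubectl -n rook-ceph get pod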
2 changes: 1 addition & 1 deletion deploy/examples/direct-mount.yaml
@@ -18,7 +18,7 @@ spec:
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: rook-direct-mount
-     image: rook/ceph:v1.8.0
+     image: rook/ceph:v1.8.1
      command: ["/bin/bash"]
      args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
      imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion deploy/examples/images.txt
@@ -6,4 +6,4 @@
quay.io/ceph/ceph:v16.2.7
quay.io/cephcsi/cephcsi:v3.4.0
quay.io/csiaddons/volumereplication-operator:v0.1.0
-rook/ceph:v1.8.0
+rook/ceph:v1.8.1
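
As a hedged aside, a list like this is commonly used to pre-pull or mirror images for offline installs; a sketch:

```sh
# Pull every image named in the list (assumes Docker is available)
while read -r img; do docker pull "$img"; done < deploy/examples/images.txt
```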
2 changes: 1 addition & 1 deletion deploy/examples/operator-openshift.yaml
@@ -445,7 +445,7 @@ spec:
  serviceAccountName: rook-ceph-system
  containers:
    - name: rook-ceph-operator
-     image: rook/ceph:v1.8.0
+     image: rook/ceph:v1.8.1
      args: ["ceph", "operator"]
      securityContext:
        runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/operator.yaml
@@ -362,7 +362,7 @@ spec:
  serviceAccountName: rook-ceph-system
  containers:
    - name: rook-ceph-operator
-     image: rook/ceph:v1.8.0
+     image: rook/ceph:v1.8.1
      args: ["ceph", "operator"]
      securityContext:
        runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/osd-purge.yaml
@@ -25,7 +25,7 @@ spec:
  serviceAccountName: rook-ceph-purge-osd
  containers:
    - name: osd-removal
-     image: rook/ceph:v1.8.0
+     image: rook/ceph:v1.8.1
      # TODO: Insert the OSD ID in the last parameter that is to be removed
      # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
      # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
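      # Illustrative sketch only (the real args line is collapsed in this view; flag
      # names are assumed from the comments above and this commit's CI sed commands):
      # args: ["ceph", "osd", "remove", "--preserve-pvc", "false", "--osd-ids", "<OSD-IDs>"]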
4 changes: 2 additions & 2 deletions deploy/examples/toolbox-job.yaml
@@ -10,7 +10,7 @@ spec:
  spec:
    initContainers:
      - name: config-init
-       image: rook/ceph:v1.8.0
+       image: rook/ceph:v1.8.1
        command: ["/usr/local/bin/toolbox.sh"]
        args: ["--skip-watch"]
        imagePullPolicy: IfNotPresent
@@ -32,7 +32,7 @@ spec:
        mountPath: /etc/rook
    containers:
      - name: script
-       image: rook/ceph:v1.8.0
+       image: rook/ceph:v1.8.1
        volumeMounts:
          - mountPath: /etc/ceph
            name: ceph-config
2 changes: 1 addition & 1 deletion deploy/examples/toolbox.yaml
@@ -18,7 +18,7 @@ spec:
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: rook-ceph-tools
-     image: rook/ceph:v1.8.0
+     image: rook/ceph:v1.8.1
      command: ["/bin/bash"]
      args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
      imagePullPolicy: IfNotPresent
