Commit 481d59e
Merge pull request #9013 from leseb/release-1.7.6
build: Update the patch version to v1.7.6
leseb committed Oct 20, 2021
2 parents ca43f86 + 3b69739
Showing 11 changed files with 28 additions and 28 deletions.
Documentation/ceph-monitoring.md (1 addition & 1 deletion)
@@ -38,7 +38,7 @@ With the Prometheus operator running, we can create a service monitor that will
From the root of your locally cloned Rook repo, go to the monitoring directory:

```console
-$ git clone --single-branch --branch v1.7.5 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.7.6 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph/monitoring
```
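From there, the monitor itself can be created; a minimal sketch of the next step, assuming the `service-monitor.yaml` manifest that ships in that directory:

```console
$ kubectl create -f service-monitor.yaml
```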

Documentation/ceph-toolbox.md (3 additions & 3 deletions)
@@ -43,7 +43,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
@@ -133,7 +133,7 @@ spec:
spec:
initContainers:
- name: config-init
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
command: ["/usr/local/bin/toolbox.sh"]
args: ["--skip-watch"]
imagePullPolicy: IfNotPresent
@@ -155,7 +155,7 @@ spec:
mountPath: /etc/rook
containers:
- name: script
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
Documentation/ceph-upgrade.md (15 additions & 15 deletions)
@@ -53,12 +53,12 @@ With this upgrade guide, there are a few notes to consider:

Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to
another are as simple as updating the common resources and the image of the Rook operator. For
-example, when Rook v1.7.5 is released, the process of updating from v1.7.0 is as simple as running
+example, when Rook v1.7.6 is released, the process of updating from v1.7.0 is as simple as running
the following:

First get the latest common resources manifests that contain the latest changes for Rook v1.7.
```sh
-git clone --single-branch --depth=1 --branch v1.7.5 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.7.6 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
```

@@ -75,7 +75,7 @@ section for instructions on how to change the default namespaces in `common.yaml`
Then apply the latest changes from v1.7 and update the Rook Operator image.
```console
kubectl apply -f common.yaml -f crds.yaml
-kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.7.5
+kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.7.6
```
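To confirm that the operator deployment picked up the new image, a quick check such as the following can help (a sketch; it assumes the default `rook-ceph` namespace used above):

```console
kubectl -n rook-ceph get deploy rook-ceph-operator -o jsonpath='{.spec.template.spec.containers[0].image}'
```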

As exemplified above, it is a good practice to update Rook-Ceph common resources from the example
@@ -261,7 +261,7 @@ Any pod that is using a Rook volume should also remain healthy:
## Rook Operator Upgrade Process

In the examples given in this guide, we will be upgrading a live Rook cluster running `v1.6.8` to
-the version `v1.7.5`. This upgrade should work from any official patch release of Rook v1.6 to any
+the version `v1.7.6`. This upgrade should work from any official patch release of Rook v1.6 to any
official patch release of v1.7.

**Rook releases from `master` are expressly unsupported.** It is strongly recommended that you use
@@ -291,7 +291,7 @@ needed by the Operator. Also update the Custom Resource Definitions (CRDs).
First get the latest common resources manifests that contain the latest changes.
```sh
-git clone --single-branch --depth=1 --branch v1.7.5 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.7.6 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
```

@@ -337,7 +337,7 @@ The largest portion of the upgrade is triggered when the operator's image is updated.
When the operator is updated, it will proceed to update all of the Ceph daemons.

```sh
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.7.5
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.7.6
```
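While the operator works through the daemons, tailing its log is one way to watch progress (a sketch, using the same `$ROOK_OPERATOR_NAMESPACE` variable as above):

```sh
kubectl -n $ROOK_OPERATOR_NAMESPACE logs deploy/rook-ceph-operator -f
```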

### **4. Wait for the upgrade to complete**
@@ -353,16 +353,16 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
```

As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.7.5`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.7.6`, the Ceph cluster's core components are fully updated.

>```
>Every 2.0s: kubectl -n rook-ceph get deployment -o j...
>
->rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.7.5
->rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.7.5
->rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.7.5
->rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.7.5
->rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.7.5
+>rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.7.6
+>rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.7.6
+>rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.7.6
+>rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.7.6
+>rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.7.6
>rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.6.8
>rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.6.8
>```
@@ -374,14 +374,14 @@ An easy check to see if the upgrade is totally finished is to check that there is only one `rook-version` reported across the cluster.
# kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
rook-version=v1.6.8
-rook-version=v1.7.5
+rook-version=v1.7.6
This cluster is finished:
-rook-version=v1.7.5
+rook-version=v1.7.6
```

### **5. Verify the updated cluster**

-At this point, your Rook operator should be running version `rook/ceph:v1.7.5`.
+At this point, your Rook operator should be running version `rook/ceph:v1.7.6`.

Verify the Ceph cluster's health using the [health verification section](#health-verification).
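For example, overall Ceph health can be checked from the toolbox pod shown earlier (a sketch; it assumes the `rook-ceph-tools` deployment from `toolbox.yaml` is running):

```console
kubectl -n $ROOK_CLUSTER_NAMESPACE exec -it deploy/rook-ceph-tools -- ceph status
```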

cluster/examples/kubernetes/ceph/direct-mount.yaml (1 addition & 1 deletion)
@@ -18,7 +18,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-direct-mount
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
cluster/examples/kubernetes/ceph/images.txt (1 addition & 1 deletion)
@@ -6,4 +6,4 @@
quay.io/ceph/ceph:v16.2.6
quay.io/cephcsi/cephcsi:v3.4.0
quay.io/csiaddons/volumereplication-operator:v0.1.0
-rook/ceph:v1.7.5
+rook/ceph:v1.7.6
cluster/examples/kubernetes/ceph/operator-openshift.yaml (1 addition & 1 deletion)
@@ -446,7 +446,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
args: ["ceph", "operator"]
volumeMounts:
- mountPath: /var/lib/rook
cluster/examples/kubernetes/ceph/operator.yaml (1 addition & 1 deletion)
@@ -369,7 +369,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
args: ["ceph", "operator"]
volumeMounts:
- mountPath: /var/lib/rook
cluster/examples/kubernetes/ceph/osd-purge.yaml (1 addition & 1 deletion)
@@ -25,7 +25,7 @@ spec:
serviceAccountName: rook-ceph-purge-osd
containers:
- name: osd-removal
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
# TODO: Insert the OSD ID in the last parameter that is to be removed
# The OSD IDs are a comma-separated list. For example: "0" or "0,2".
# If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
cluster/examples/kubernetes/ceph/toolbox-job.yaml (2 additions & 2 deletions)
@@ -10,7 +10,7 @@ spec:
spec:
initContainers:
- name: config-init
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
command: ["/usr/local/bin/toolbox.sh"]
args: ["--skip-watch"]
imagePullPolicy: IfNotPresent
@@ -32,7 +32,7 @@ spec:
mountPath: /etc/rook
containers:
- name: script
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
cluster/examples/kubernetes/ceph/toolbox.yaml (1 addition & 1 deletion)
@@ -18,7 +18,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
-image: rook/ceph:v1.7.5
+image: rook/ceph:v1.7.6
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
tests/scripts/github-action-helper.sh (1 addition & 1 deletion)
@@ -176,7 +176,7 @@ function create_cluster_prerequisites() {
}

function deploy_manifest_with_local_build() {
-sed -i "s|image: rook/ceph:v1.7.5|image: rook/ceph:local-build|g" $1
+sed -i "s|image: rook/ceph:v1.7.6|image: rook/ceph:local-build|g" $1
kubectl create -f $1
}
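For context, a hypothetical invocation of this helper, with an illustrative manifest path:

```sh
# Swap the pinned release image for the locally built one, then deploy.
deploy_manifest_with_local_build cluster/examples/kubernetes/ceph/operator.yaml
```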

