Merge pull request #11850 from travisn/release-1.11.1
build: Update release version to v1.11.1
travisn committed Mar 7, 2023
2 parents f6b3d53 + 35b31f9 commit b270431
Showing 9 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion Documentation/Getting-Started/quickstart.md
@@ -35,7 +35,7 @@ In order to configure the Ceph storage cluster, at least one of these local stor
A simple Rook cluster can be created with the following kubectl commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).

```console
-$ git clone --single-branch --branch v1.11.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.11.1 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
@@ -44,7 +44,7 @@ There are two sources for metrics collection:
From the root of your locally cloned Rook repo, go the monitoring directory:

```console
-$ git clone --single-branch --branch v1.11.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.11.1 https://github.com/rook/rook.git
cd rook/deploy/examples/monitoring
```

24 changes: 12 additions & 12 deletions Documentation/Upgrade/rook-upgrade.md
@@ -112,7 +112,7 @@ In order to successfully upgrade a Rook cluster, the following prerequisites mus
## Rook Operator Upgrade

In the examples given in this guide, we will be upgrading a live Rook cluster running `v1.10.12` to
-the version `v1.11.0`. This upgrade should work from any official patch release of Rook v1.10 to any
+the version `v1.11.1`. This upgrade should work from any official patch release of Rook v1.10 to any
official patch release of v1.11.

Let's get started!
@@ -140,7 +140,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).
Get the latest common resources manifests that contain the latest changes.

```console
-git clone --single-branch --depth=1 --branch v1.11.0 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.11.1 https://github.com/rook/rook.git
cd rook/deploy/examples
```

@@ -179,7 +179,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
When the operator is updated, it will proceed to update all of the Ceph daemons.

```console
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.11.0
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.11.1
```

### **3. Update Ceph CSI**
@@ -209,16 +209,16 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
```

As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.11.0`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.11.1`, the Ceph cluster's core components are fully updated.

```console
Every 2.0s: kubectl -n rook-ceph get deployment -o j...

-rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.11.0
-rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.11.0
-rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.11.0
-rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.11.0
-rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.11.0
+rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.11.1
+rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.11.1
+rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.11.1
+rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.11.1
+rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.11.1
rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.10.12
rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.10.12
```
@@ -230,13 +230,13 @@ An easy check to see if the upgrade is totally finished is to check that there i
# kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
rook-version=v1.10.12
-rook-version=v1.11.0
+rook-version=v1.11.1
This cluster is finished:
-rook-version=v1.11.0
+rook-version=v1.11.1
```

### **5. Verify the updated cluster**

-At this point, your Rook operator should be running version `rook/ceph:v1.11.0`.
+At this point, your Rook operator should be running version `rook/ceph:v1.11.1`.

Verify the Ceph cluster's health using the [health verification doc](health-verification.md).
2 changes: 1 addition & 1 deletion deploy/examples/direct-mount.yaml
@@ -18,7 +18,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-direct-mount
-image: rook/ceph:v1.11.0
+image: rook/ceph:v1.11.1
command: ["/bin/bash"]
args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion deploy/examples/images.txt
@@ -6,4 +6,4 @@
registry.k8s.io/sig-storage/csi-provisioner:v3.4.0
registry.k8s.io/sig-storage/csi-resizer:v1.7.0
registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1
-rook/ceph:v1.11.0
+rook/ceph:v1.11.1
2 changes: 1 addition & 1 deletion deploy/examples/operator-openshift.yaml
@@ -614,7 +614,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.11.0
+image: rook/ceph:v1.11.1
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/operator.yaml
@@ -541,7 +541,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.11.0
+image: rook/ceph:v1.11.1
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/osd-purge.yaml
@@ -28,7 +28,7 @@ spec:
serviceAccountName: rook-ceph-purge-osd
containers:
- name: osd-removal
-image: rook/ceph:v1.11.0
+image: rook/ceph:v1.11.1
# TODO: Insert the OSD ID in the last parameter that is to be removed
# The OSD IDs are a comma-separated list. For example: "0" or "0,2".
# If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
4 changes: 2 additions & 2 deletions deploy/examples/toolbox-job.yaml
@@ -10,7 +10,7 @@ spec:
spec:
initContainers:
- name: config-init
-image: rook/ceph:v1.11.0
+image: rook/ceph:v1.11.1
command: ["/usr/local/bin/toolbox.sh"]
args: ["--skip-watch"]
imagePullPolicy: IfNotPresent
@@ -29,7 +29,7 @@ spec:
mountPath: /var/lib/rook-ceph-mon
containers:
- name: script
-image: rook/ceph:v1.11.0
+image: rook/ceph:v1.11.1
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
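A release bump like this commit changes the same image tag in nine files, so the easiest regression is a stale reference to the previous tag. A minimal sketch of a post-bump spot-check (the temp directory and file names below are hypothetical stand-ins for a repo checkout; in a real clone you would run `git grep -l 'v1\.11\.0'` from the repo root):

```shell
# Hypothetical stand-in for a checkout: one file updated to the new tag,
# one deliberately left stale so the check has something to find.
dir=$(mktemp -d)
printf 'image: rook/ceph:v1.11.1\n' > "$dir/operator.yaml"
printf 'image: rook/ceph:v1.11.0\n' > "$dir/stale.yaml"

# List every file that still mentions the old tag.
grep -rl 'v1\.11\.0' "$dir"
```

An empty result (grep exits with status 1) means no stale references remain and the bump is complete.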
