Merge pull request #14084 from travisn/release-1.14.1
build: Update the release version to v1.14.1
travisn committed Apr 17, 2024
2 parents 0bdd66b + b6d4eff commit b66af92
Showing 12 changed files with 28 additions and 28 deletions.
2 changes: 1 addition & 1 deletion Documentation/Getting-Started/quickstart.md
@@ -36,7 +36,7 @@ To configure the Ceph storage cluster, at least one of these local storage optio
A simple Rook cluster is created for Kubernetes with the following `kubectl` commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).

```console
-$ git clone --single-branch --branch v1.14.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.14.1 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
@@ -47,7 +47,7 @@ There are two sources for metrics collection:
From the root of your locally cloned Rook repo, go to the monitoring directory:

```console
-$ git clone --single-branch --branch v1.14.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.14.1 https://github.com/rook/rook.git
cd rook/deploy/examples/monitoring
```

32 changes: 16 additions & 16 deletions Documentation/Upgrade/rook-upgrade.md
@@ -133,8 +133,8 @@ In order to successfully upgrade a Rook cluster, the following prerequisites mus

## Rook Operator Upgrade

-The examples given in this guide upgrade a live Rook cluster running `v1.13.7` to
-the version `v1.14.0`. This upgrade should work from any official patch release of Rook v1.13 to any
+The examples given in this guide upgrade a live Rook cluster running `v1.13.8` to
+the version `v1.14.1`. This upgrade should work from any official patch release of Rook v1.13 to any
official patch release of v1.14.

Let's get started!
@@ -161,7 +161,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).
Get the latest common resources manifests that contain the latest changes.

```console
-git clone --single-branch --depth=1 --branch v1.14.0 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.14.1 https://github.com/rook/rook.git
cd rook/deploy/examples
```

@@ -200,7 +200,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
When the operator is updated, it will proceed to update all of the Ceph daemons.

```console
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.14.0
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.14.1
```

### **3. Update Ceph CSI**
@@ -230,18 +230,18 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
```

As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.14.0`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.14.1`, the Ceph cluster's core components are fully updated.

```console
Every 2.0s: kubectl -n rook-ceph get deployment -o j...

-rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.14.0
-rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.14.0
-rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.14.0
-rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.14.0
-rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.14.0
-rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.13.7
-rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.13.7
+rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.14.1
+rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.14.1
+rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.14.1
+rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.14.1
+rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.14.1
+rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.13.8
+rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.13.8
```
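The per-deployment lines above can also be checked mechanically. A minimal sketch, using a hard-coded sample in place of live `kubectl` output (the deployment names and versions below are illustrative, not from a real cluster):

```shell
# Count deployments that have not yet reached the target version.
# The sample variable stands in for the live `watch` output shown above;
# its "name req/upd/avl: x/y/z rook-version=vX.Y.Z" format matches the example.
target="v1.14.1"
sample='rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.14.1
rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.13.8
rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.13.8'
# grep -c -v counts the lines that do NOT carry the target version label.
pending=$(printf '%s\n' "$sample" | grep -cv "rook-version=$target")
echo "$pending deployments still pending"
```

With the sample input this prints `2 deployments still pending`; against a live cluster, the `sample` variable would be replaced by the real deployment listing.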

An easy check to see if the upgrade is totally finished is to check that there is only one `rook-version` reported across the cluster.
@@ -250,15 +250,15 @@
```console
# kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
-rook-version=v1.13.7
-rook-version=v1.14.0
+rook-version=v1.13.8
+rook-version=v1.14.1
This cluster is finished:
-rook-version=v1.14.0
+rook-version=v1.14.1
```
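The `sort | uniq` check lends itself to scripting. A minimal sketch, assuming sample label output in place of the live `kubectl` pipeline (the version strings are illustrative):

```shell
# Decide whether the upgrade is complete by counting distinct rook-version
# labels. The sample stands in for the output of the kubectl | sort | uniq
# pipeline shown above; exactly one distinct version means the upgrade is done.
sample='rook-version=v1.13.8
rook-version=v1.14.1'
distinct=$(printf '%s\n' "$sample" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "upgrade complete"
else
    echo "upgrade still in progress"
fi
```

With the two-version sample above this reports the upgrade as still in progress; against a live cluster, the `sample` variable would be replaced by the real pipeline output.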

### **5. Verify the updated cluster**

-At this point, the Rook operator should be running version `rook/ceph:v1.14.0`.
+At this point, the Rook operator should be running version `rook/ceph:v1.14.1`.

Verify the CephCluster health using the [health verification doc](health-verification.md).

2 changes: 1 addition & 1 deletion deploy/charts/rook-ceph/values.yaml
@@ -7,7 +7,7 @@ image:
repository: rook/ceph
# -- Image tag
# @default -- `master`
-tag: v1.14.0
+tag: v1.14.1
# -- Image pull policy
pullPolicy: IfNotPresent

2 changes: 1 addition & 1 deletion deploy/examples/direct-mount.yaml
@@ -19,7 +19,7 @@ spec:
serviceAccountName: rook-ceph-default
containers:
- name: rook-direct-mount
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
command: ["/bin/bash"]
args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion deploy/examples/images.txt
@@ -8,4 +8,4 @@
registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
registry.k8s.io/sig-storage/csi-resizer:v1.10.0
registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1
-rook/ceph:v1.14.0
+rook/ceph:v1.14.1
2 changes: 1 addition & 1 deletion deploy/examples/multus-validation.yaml
@@ -101,7 +101,7 @@ spec:
serviceAccountName: rook-ceph-multus-validation
containers:
- name: multus-validation
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
command: ["rook"]
args:
- "multus"
2 changes: 1 addition & 1 deletion deploy/examples/operator-openshift.yaml
@@ -667,7 +667,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/operator.yaml
@@ -591,7 +591,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/osd-purge.yaml
@@ -28,7 +28,7 @@ spec:
serviceAccountName: rook-ceph-purge-osd
containers:
- name: osd-removal
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
# TODO: Insert the OSD ID in the last parameter that is to be removed
# The OSD IDs are a comma-separated list. For example: "0" or "0,2".
# If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
4 changes: 2 additions & 2 deletions deploy/examples/toolbox-job.yaml
@@ -10,7 +10,7 @@ spec:
spec:
initContainers:
- name: config-init
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
command: ["/usr/local/bin/toolbox.sh"]
args: ["--skip-watch"]
imagePullPolicy: IfNotPresent
@@ -29,7 +29,7 @@ spec:
mountPath: /var/lib/rook-ceph-mon
containers:
- name: script
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
2 changes: 1 addition & 1 deletion deploy/examples/toolbox-operator-image.yaml
@@ -25,7 +25,7 @@ spec:
serviceAccountName: rook-ceph-default
containers:
- name: rook-ceph-tools-operator-image
-image: rook/ceph:v1.14.0
+image: rook/ceph:v1.14.1
command:
- /bin/bash
- -c
