Merge pull request #14024 from travisn/release-1.14.0
build: set the release version to v1.14.0
travisn committed Apr 3, 2024
2 parents ee84245 + 606cef3 commit bc37caa
Showing 13 changed files with 16 additions and 16 deletions.
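
For context, the changes below are a tree-wide version bump: references to the `v1.14.0-beta.0` pre-release tag in the documentation and example manifests are updated, most of them to the final `v1.14.0` tag. A bulk edit of this kind could be produced with a one-liner along these lines (a minimal sketch, not necessarily Rook's actual release tooling; it assumes GNU sed and that the affected files live under `Documentation/` and `deploy/`):

```console
# List every file that still references the pre-release tag, then
# rewrite the tag in place (GNU sed; BSD sed would need `sed -i ''`).
git grep -lF 'v1.14.0-beta.0' -- Documentation deploy \
  | xargs sed -i 's/v1\.14\.0-beta\.0/v1.14.0/g'
```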
4 changes: 2 additions & 2 deletions Documentation/Contributing/rook-test-framework.md
@@ -41,11 +41,11 @@ virtual machine.
make build
```

-Tag the newly built images to `rook/ceph:local-build` for running tests, or `rook/ceph:v1.14.0-beta.0` if creating example manifests::
+Tag the newly built images to `rook/ceph:local-build` for running tests, or `rook/ceph:master` if creating example manifests::

```console
docker tag $(docker images|awk '/build-/ {print $1}') rook/ceph:local-build
-docker tag rook/ceph:local-build rook/ceph:v1.14.0-beta.0
+docker tag rook/ceph:local-build rook/ceph:master
```

## Run integration tests
2 changes: 1 addition & 1 deletion Documentation/Getting-Started/quickstart.md
@@ -36,7 +36,7 @@ To configure the Ceph storage cluster, at least one of these local storage optio
A simple Rook cluster is created for Kubernetes with the following `kubectl` commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).

```console
-$ git clone --single-branch --branch v1.14.0-beta.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.14.0 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
@@ -47,7 +47,7 @@ There are two sources for metrics collection:
From the root of your locally cloned Rook repo, go the monitoring directory:

```console
-$ git clone --single-branch --branch v1.14.0-beta.0 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.14.0 https://github.com/rook/rook.git
cd rook/deploy/examples/monitoring
```

4 changes: 2 additions & 2 deletions Documentation/Upgrade/rook-upgrade.md
@@ -161,7 +161,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).
Get the latest common resources manifests that contain the latest changes.

```console
-git clone --single-branch --depth=1 --branch v1.14.0-beta.0 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.14.0 https://github.com/rook/rook.git
cd rook/deploy/examples
```

@@ -200,7 +200,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
When the operator is updated, it will proceed to update all of the Ceph daemons.

```console
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.14.0-beta.0
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.14.0
```

### **3. Update Ceph CSI**
2 changes: 1 addition & 1 deletion deploy/charts/rook-ceph/values.yaml
@@ -7,7 +7,7 @@ image:
repository: rook/ceph
# -- Image tag
# @default -- `master`
-tag: v1.14.0-beta.0
+tag: v1.14.0
# -- Image pull policy
pullPolicy: IfNotPresent

2 changes: 1 addition & 1 deletion deploy/examples/direct-mount.yaml
@@ -19,7 +19,7 @@ spec:
serviceAccountName: rook-ceph-default
containers:
- name: rook-direct-mount
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
command: ["/bin/bash"]
args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion deploy/examples/images.txt
@@ -8,4 +8,4 @@
registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
registry.k8s.io/sig-storage/csi-resizer:v1.10.0
registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1
-rook/ceph:v1.14.0-beta.0
+rook/ceph:v1.14.0
2 changes: 1 addition & 1 deletion deploy/examples/multus-validation.yaml
@@ -101,7 +101,7 @@ spec:
serviceAccountName: rook-ceph-multus-validation
containers:
- name: multus-validation
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
command: ["rook"]
args:
- "multus"
2 changes: 1 addition & 1 deletion deploy/examples/operator-openshift.yaml
@@ -667,7 +667,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/operator.yaml
@@ -591,7 +591,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
2 changes: 1 addition & 1 deletion deploy/examples/osd-purge.yaml
@@ -28,7 +28,7 @@ spec:
serviceAccountName: rook-ceph-purge-osd
containers:
- name: osd-removal
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
# TODO: Insert the OSD ID in the last parameter that is to be removed
# The OSD IDs are a comma-separated list. For example: "0" or "0,2".
# If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
4 changes: 2 additions & 2 deletions deploy/examples/toolbox-job.yaml
@@ -10,7 +10,7 @@ spec:
spec:
initContainers:
- name: config-init
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
command: ["/usr/local/bin/toolbox.sh"]
args: ["--skip-watch"]
imagePullPolicy: IfNotPresent
@@ -29,7 +29,7 @@ spec:
mountPath: /var/lib/rook-ceph-mon
containers:
- name: script
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
2 changes: 1 addition & 1 deletion deploy/examples/toolbox-operator-image.yaml
@@ -25,7 +25,7 @@ spec:
serviceAccountName: rook-ceph-default
containers:
- name: rook-ceph-tools-operator-image
-image: rook/ceph:v1.14.0-beta.0
+image: rook/ceph:v1.14.0
command:
- /bin/bash
- -c
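
After the updated manifests are applied, or after the operator image bump from the upgrade guide above, the running version can be verified. A minimal check, assuming the default `rook-ceph` namespace used by the example manifests:

```console
# Print the image of the deployed operator; it should report rook/ceph:v1.14.0
kubectl -n rook-ceph get deploy rook-ceph-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```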
