Merge pull request #2562 from travisn/release-0-9-2-version
Set Rook image to v0.9.2 and Ceph image to v13.2.4
travisn committed Jan 26, 2019
2 parents 6071f87 + 0bc2da4 commit 2da0f82
Showing 16 changed files with 34 additions and 34 deletions.
14 changes: 7 additions & 7 deletions Documentation/ceph-cluster-crd.md
@@ -22,7 +22,7 @@ metadata:
spec:
cephVersion:
# see the "Cluster Settings" section below for more details on which image of ceph to run
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
storage:
useAllNodes: true
@@ -42,7 +42,7 @@ Settings can be specified at the global level to apply to the cluster as a whole
### Cluster Settings

- `cephVersion`: The version information for launching the ceph daemons.
-- `image`: The image used for running the ceph daemons. For example, `ceph/ceph:v12.2.9-20181026` or `ceph/ceph:v13.2.2-20181023`.
+- `image`: The image used for running the ceph daemons. For example, `ceph/ceph:v12.2.9-20181026` or `ceph/ceph:v13.2.4-20190109`.
For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v13` will be updated each time a new mimic build is released.
@@ -177,7 +177,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
network:
hostNetwork: false
@@ -208,7 +208,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
network:
hostNetwork: false
@@ -250,7 +250,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
network:
hostNetwork: false
@@ -286,7 +286,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
network:
hostNetwork: false
@@ -330,7 +330,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
# cluster level resource requests/limits configuration
resources:
2 changes: 1 addition & 1 deletion Documentation/ceph-quickstart.md
@@ -220,7 +220,7 @@ metadata:
spec:
cephVersion:
# For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
dataDirHostPath: /var/lib/rook
dashboard:
enabled: true
2 changes: 1 addition & 1 deletion Documentation/ceph-toolbox.md
@@ -36,7 +36,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
-image: rook/ceph:v0.9.1
+image: rook/ceph:v0.9.2
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
24 changes: 12 additions & 12 deletions Documentation/ceph-upgrade.md
@@ -33,7 +33,7 @@ those releases.
### Patch Release Upgrades
One of the goals of the 0.9 release is that patch releases are able to be automated completely by
the Rook operator. It is intended that upgrades from one patch release to another are as simple as
-updating the image of the Rook operator. For example, when Rook v0.9.1 is released, the process
+updating the image of the Rook operator. For example, when Rook v0.9.2 is released, the process
should be as simple as running the following:
```
kubectl -n rook-ceph-system set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.x
@@ -243,11 +243,11 @@ kubectl -n $ROOK_NAMESPACE patch rolebinding rook-ceph-osd-psp -p "{\"subjects\"
```

### 3. Update the Rook operator image
-The largest portion of the upgrade is triggered when the operator's image is updated to v0.9.1, and
+The largest portion of the upgrade is triggered when the operator's image is updated to v0.9.2, and
with the greatly-expanded automatic update features in the new version, this is all done
automatically.
```sh
-kubectl -n $ROOK_SYSTEM_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.1
+kubectl -n $ROOK_SYSTEM_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.2
```

Watch now in amazement as the Ceph MONs, MGR, OSDs, RGWs, and MDSes are terminated and replaced with
@@ -285,11 +285,11 @@ being used in the cluster.
kubectl -n $ROOK_NAMESPACE describe pods | grep "Image:.*" | sort | uniq
# This cluster is not yet finished:
# Image: ceph/ceph:v12.2.9-20181026
-# Image: rook/ceph:v0.9.1
+# Image: rook/ceph:v0.9.2
# Image: rook/ceph:v0.8.3
# This cluster is finished:
# Image: ceph/ceph:v12.2.9-20181026
-# Image: rook/ceph:v0.9.1
+# Image: rook/ceph:v0.9.2
```
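The `grep`/`sort`/`uniq` pipeline above can be exercised locally against sample `kubectl describe pods` output to see how it condenses the running images into one line per unique tag (the pod names and image tags below are illustrative):

```shell
# Simulate 'kubectl -n $ROOK_NAMESPACE describe pods' output with a sample file;
# the real command queries the cluster, but the text processing is identical.
cat > /tmp/describe-pods.txt <<'EOF'
Name:  rook-ceph-mon-a
    Image:          ceph/ceph:v13.2.4-20190109
Name:  rook-ceph-osd-0
    Image:          ceph/ceph:v13.2.4-20190109
Name:  rook-ceph-operator
    Image:          rook/ceph:v0.9.2
EOF

# Same filter as the upgrade guide: one line per distinct image in use.
grep "Image:.*" /tmp/describe-pods.txt | sort | uniq
```

With only the new tags present, the output collapses to two lines, which is the "finished" state the guide describes.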

### 6. Remove unused resources
@@ -314,7 +314,7 @@ kubectl -n $ROOK_NAMESPACE patch rolebinding rook-ceph-osd-psp -p "{\"subjects\"
```

### 7. Verify the updated cluster
-At this point, your Rook operator should be running version `rook/ceph:v0.9.1`, and the Ceph daemons
+At this point, your Rook operator should be running version `rook/ceph:v0.9.2`, and the Ceph daemons
should be running image `ceph/ceph:v12.2.9-20181026`. The Rook operator version and the Ceph version
are no longer tied together, and we'll cover how to upgrade Ceph later in this document.

@@ -343,7 +343,7 @@ choose to update Ceph at any time.
### Ceph images
Official Ceph container images can be found on [Docker Hub](https://hub.docker.com/r/ceph/ceph/tags/).
These images are tagged in a few ways:
-* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v13.2.2-20181023`).
+* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v13.2.4-20190109`).
These tags are recommended for production clusters, as there is no possibility for the cluster to
be heterogeneous with respect to the version of Ceph running in containers.
* Ceph major version tags (e.g., `v13`) are useful for development and test clusters so that the
@@ -359,7 +359,7 @@ Ceph image field in the cluster CRD (`spec:cephVersion:image`).
```sh
# sed -i.bak "s%image: .*%image: $NEW_CEPH_IMAGE%" cluster.yaml
# kubectl -n $ROOK_SYSTEM_NAMESPACE replace -f cluster.yaml
-NEW_CEPH_IMAGE='ceph/ceph:v13.2.2-20181023'
+NEW_CEPH_IMAGE='ceph/ceph:v13.2.4-20190109'
CLUSTER_NAME="$ROOK_NAMESPACE" # change if your cluster name is not the Rook namespace
kubectl patch CephCluster $CLUSTER_NAME --type=merge \
-p "{\"spec\": {\"cephVersion\": {\"image\": \"$NEW_CEPH_IMAGE\"}}}"
@@ -378,11 +378,11 @@ To verify the Ceph upgrade is complete, check that all the images Rook is using
kubectl -n $ROOK_NAMESPACE describe pods | grep "Image:.*ceph/ceph" | sort | uniq
# This cluster is not yet finished:
# Image: ceph/ceph:v12.2.9-20181026
-# Image: ceph/ceph:v13.2.2-20181023
-# Image: rook/ceph:v0.9.1
+# Image: ceph/ceph:v13.2.4-20190109
+# Image: rook/ceph:v0.9.2
# This cluster is finished:
-# Image: ceph/ceph:v13.2.2-20181023
-# Image: rook/ceph:v0.9.1
+# Image: ceph/ceph:v13.2.4-20190109
+# Image: rook/ceph:v0.9.2
```
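The `sed` substitution shown earlier in this section for updating `cluster.yaml` can likewise be tried on a local copy before touching a live cluster (the file path and YAML fragment here are illustrative stand-ins):

```shell
# A minimal stand-in for the cephVersion section of cluster.yaml.
cat > /tmp/cluster.yaml <<'EOF'
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2-20181023
EOF

NEW_CEPH_IMAGE='ceph/ceph:v13.2.4-20190109'

# Same substitution as in the guide; the .bak suffix keeps the original
# file alongside the edited one for comparison.
sed -i.bak "s%image: .*%image: $NEW_CEPH_IMAGE%" /tmp/cluster.yaml

grep "image:" /tmp/cluster.yaml
```

Because the pattern matches from `image:` to the end of the line, the leading indentation is preserved and only the tag changes.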

#### 2. Update dashboard external service if applicable
2 changes: 1 addition & 1 deletion Documentation/helm-operator.md
@@ -106,7 +106,7 @@ The following tables lists the configurable parameters of the rook-operator char
| Parameter | Description | Default |
| ------------------------- | --------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Image | `rook/ceph` |
-| `image.tag`               | Image tag                                                       | `v0.9.1`                                               |
+| `image.tag`               | Image tag                                                       | `v0.9.2`                                               |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `rbacEnable` | If true, create & use RBAC resources | `true` |
| `pspEnable` | If true, create & use PSP resources | `true` |
2 changes: 1 addition & 1 deletion cluster/examples/coreos/after-reboot-daemonset.yaml
@@ -18,7 +18,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-after-reboot-check
-image: rook/ceph-toolbox:v0.9.1
+image: rook/ceph-toolbox:v0.9.2
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/coreos/before-reboot-daemonset.yaml
@@ -18,7 +18,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-before-reboot-check
-image: rook/ceph-toolbox:v0.9.1
+image: rook/ceph-toolbox:v0.9.2
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/cassandra/operator.yaml
@@ -186,7 +186,7 @@ subjects:
serviceAccountName: rook-cassandra-operator
containers:
- name: rook-cassandra-operator
-image: rook/cassandra:v0.9.1
+image: rook/cassandra:v0.9.2
imagePullPolicy: "Always"
args: ["cassandra", "operator"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/cluster.yaml
@@ -161,7 +161,7 @@ spec:
# v12 is luminous, v13 is mimic, and v14 is nautilus.
# RECOMMENDATION: In production, use a specific version tag instead of the general v13 flag, which pulls the latest release and could result in different
# versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
# Whether to allow unsupported versions of Ceph. Currently only luminous and mimic are supported.
# After nautilus is released, Rook will be updated to support nautilus.
# Do not set to true in production.
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/operator.yaml
@@ -389,7 +389,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
-image: rook/ceph:v0.9.1
+image: rook/ceph:v0.9.2
args: ["ceph", "operator"]
volumeMounts:
- mountPath: /var/lib/rook
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/toolbox.yaml
@@ -18,7 +18,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
-image: rook/ceph:v0.9.1
+image: rook/ceph:v0.9.2
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/cockroachdb/operator.yaml
@@ -98,7 +98,7 @@ spec:
serviceAccountName: rook-cockroachdb-operator
containers:
- name: rook-cockroachdb-operator
-image: rook/cockroachdb:v0.9.1
+image: rook/cockroachdb:v0.9.2
args: ["cockroachdb", "operator"]
env:
- name: POD_NAME
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/edgefs/operator.yaml
@@ -205,7 +205,7 @@ spec:
serviceAccountName: rook-edgefs-system
containers:
- name: rook-edgefs-operator
-image: rook/edgefs:v0.9.1
+image: rook/edgefs:v0.9.2
imagePullPolicy: "Always"
args: ["edgefs", "operator"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/minio/operator.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: rook-minio-operator
containers:
- name: rook-minio-operator
-image: rook/minio:v0.9.1
+image: rook/minio:v0.9.2
args: ["minio", "operator"]
env:
- name: POD_NAME
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/nfs/operator.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: rook-nfs-operator
containers:
- name: rook-nfs-operator
-image: rook/nfs:v0.9.1
+image: rook/nfs:v0.9.2
args: ["nfs", "operator"]
env:
- name: POD_NAME
4 changes: 2 additions & 2 deletions design/decouple-ceph-version.md
@@ -57,7 +57,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
```

### Operator Requirements
@@ -131,7 +131,7 @@ spec:
allowUnsupported: false
upgradePolicy:
cephVersion:
-image: ceph/ceph:v13.2.2-20181023
+image: ceph/ceph:v13.2.4-20190109
allowUnsupported: false
components:
- mon
