Merge pull request #2423 from travisn/release-0-9-1-version
Set the image tag to v0.9.1
travisn committed Jan 1, 2019
2 parents b84ddfe + 213424d commit 978b032
Showing 12 changed files with 20 additions and 23 deletions.
2 changes: 1 addition & 1 deletion Documentation/ceph-toolbox.md
@@ -36,7 +36,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
- image: rook/ceph:v0.9.0
+ image: rook/ceph:v0.9.1
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
14 changes: 7 additions & 7 deletions Documentation/ceph-upgrade.md
@@ -243,11 +243,11 @@ kubectl -n $ROOK_NAMESPACE patch rolebinding rook-ceph-osd-psp -p "{\"subjects\"
```

### 3. Update the Rook operator image
- The largest portion of the upgrade is triggered when the operator's image is updated to v0.9.0, and
+ The largest portion of the upgrade is triggered when the operator's image is updated to v0.9.1, and
with the greatly-expanded automatic update features in the new version, this is all done
automatically.
```sh
- kubectl -n $ROOK_SYSTEM_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.0
+ kubectl -n $ROOK_SYSTEM_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v0.9.1
```
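While the operator replaces each daemon (described next), you can follow the rollout from a second terminal. A minimal sketch, assuming `watch` is installed and `$ROOK_NAMESPACE` is set as in the earlier steps:
```sh
# Refresh the pod listing every 2 seconds to see daemons terminate
# and come back on the new version.
watch kubectl -n $ROOK_NAMESPACE get pods -o wide
```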

Watch now in amazement as the Ceph MONs, MGR, OSDs, RGWs, and MDSes are terminated and replaced with
@@ -285,11 +285,11 @@ being used in the cluster.
kubectl -n $ROOK_NAMESPACE describe pods | grep "Image:.*" | sort | uniq
# This cluster is not yet finished:
# Image: ceph/ceph:v12.2.9-20181026
- # Image: rook/ceph:v0.9.0
+ # Image: rook/ceph:v0.9.1
# Image: rook/ceph:v0.8.3
# This cluster is finished:
# Image: ceph/ceph:v12.2.9-20181026
- # Image: rook/ceph:v0.9.0
+ # Image: rook/ceph:v0.9.1
```

### 6. Remove unused resources
@@ -314,7 +314,7 @@ kubectl -n $ROOK_NAMESPACE patch rolebinding rook-ceph-osd-psp -p "{\"subjects\"
```

### 7. Verify the updated cluster
- At this point, your Rook operator should be running version `rook/ceph:v0.9.0`, and the Ceph daemons
+ At this point, your Rook operator should be running version `rook/ceph:v0.9.1`, and the Ceph daemons
should be running image `ceph/ceph:v12.2.9-20181026`. The Rook operator version and the Ceph version
are no longer tied together, and we'll cover how to upgrade Ceph later in this document.
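To spot-check the operator image specifically, here is a hedged sketch using `kubectl`'s JSONPath output (assuming the deployment and container are both named `rook-ceph-operator`, as in the command above):
```sh
# Print the image set on the operator deployment; expect rook/ceph:v0.9.1.
kubectl -n $ROOK_SYSTEM_NAMESPACE get deploy/rook-ceph-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```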

@@ -379,10 +379,10 @@ kubectl -n $ROOK_NAMESPACE describe pods | grep "Image:.*ceph/ceph" | sort | uniq
# This cluster is not yet finished:
# Image: ceph/ceph:v12.2.9-20181026
# Image: ceph/ceph:v13.2.2-20181023
- # Image: rook/ceph:v0.9.0
+ # Image: rook/ceph:v0.9.1
# This cluster is finished:
# Image: ceph/ceph:v13.2.2-20181023
- # Image: rook/ceph:v0.9.0
+ # Image: rook/ceph:v0.9.1
```

#### 2. Update dashboard external service if applicable
2 changes: 1 addition & 1 deletion Documentation/helm-operator.md
@@ -106,7 +106,7 @@ The following table lists the configurable parameters of the rook-operator chart
| Parameter | Description | Default |
| ------------------------- | --------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Image | `rook/ceph` |
- | `image.tag`               | Image tag                                                         | `v0.9.0` |
+ | `image.tag`               | Image tag                                                         | `v0.9.1` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `rbacEnable` | If true, create & use RBAC resources | `true` |
| `pspEnable` | If true, create & use PSP resources | `true` |
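For the `image.tag` row in the table above, a hypothetical Helm invocation that pins the new tag at install time; the `rook-release` repository name and URL are assumptions, not part of this diff:
```sh
# Add the (assumed) release chart repo, then install with the tag override.
helm repo add rook-release https://charts.rook.io/release
helm install --namespace rook-ceph-system --name rook-ceph \
  rook-release/rook-ceph --set image.tag=v0.9.1
```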
2 changes: 1 addition & 1 deletion cluster/examples/coreos/after-reboot-daemonset.yaml
@@ -18,7 +18,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-after-reboot-check
- image: rook/ceph-toolbox:v0.9.0
+ image: rook/ceph-toolbox:v0.9.1
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
2 changes: 1 addition & 1 deletion cluster/examples/coreos/before-reboot-daemonset.yaml
@@ -18,7 +18,7 @@ spec:
effect: NoSchedule
containers:
- name: ceph-before-reboot-check
- image: rook/ceph-toolbox:v0.9.0
+ image: rook/ceph-toolbox:v0.9.1
imagePullPolicy: IfNotPresent
command: ["/scripts/status-check.sh"]
env:
4 changes: 1 addition & 3 deletions cluster/examples/kubernetes/cassandra/operator.yaml
@@ -186,7 +186,7 @@ subjects:
serviceAccountName: rook-cassandra-operator
containers:
- name: rook-cassandra-operator
- image: rook/cassandra:v0.9.0
+ image: rook/cassandra:v0.9.1
imagePullPolicy: "Always"
args: ["cassandra", "operator"]
env:
@@ -198,5 +198,3 @@ subjects:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
-
-
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/operator.yaml
@@ -387,7 +387,7 @@ spec:
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
- image: rook/ceph:v0.9.0
+ image: rook/ceph:v0.9.1
args: ["ceph", "operator"]
volumeMounts:
- mountPath: /var/lib/rook
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/ceph/toolbox.yaml
@@ -18,7 +18,7 @@ spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
- image: rook/ceph:v0.9.0
+ image: rook/ceph:v0.9.1
command: ["/tini"]
args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
imagePullPolicy: IfNotPresent
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/cockroachdb/operator.yaml
@@ -98,7 +98,7 @@ spec:
serviceAccountName: rook-cockroachdb-operator
containers:
- name: rook-cockroachdb-operator
- image: rook/cockroachdb:v0.9.0
+ image: rook/cockroachdb:v0.9.1
args: ["cockroachdb", "operator"]
env:
- name: POD_NAME
7 changes: 3 additions & 4 deletions cluster/examples/kubernetes/edgefs/operator.yaml
@@ -132,7 +132,7 @@ rules:
verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
resources: ["jobs"]
- verbs: ["get", "list", "watch", "create", "update", "delete"]
+ verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["edgefs.rook.io"]
resources: ["*"]
verbs: ["*"]
@@ -166,7 +166,7 @@ roleRef:
subjects:
- kind: ServiceAccount
name: rook-edgefs-system
- namespace: rook-edgefs-system
+ namespace: rook-edgefs-system
---
# Grant the rook system daemons cluster-wide access to manage the Rook CRDs, PVCs, and storage classes
kind: ClusterRoleBinding
@@ -205,7 +205,7 @@ spec:
serviceAccountName: rook-edgefs-system
containers:
- name: rook-edgefs-operator
- image: rook/edgefs:v0.9.0
+ image: rook/edgefs:v0.9.1
imagePullPolicy: "Always"
args: ["edgefs", "operator"]
env:
@@ -219,4 +219,3 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
-
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/minio/operator.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: rook-minio-operator
containers:
- name: rook-minio-operator
- image: rook/minio:v0.9.0
+ image: rook/minio:v0.9.1
args: ["minio", "operator"]
env:
- name: POD_NAME
2 changes: 1 addition & 1 deletion cluster/examples/kubernetes/nfs/operator.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: rook-nfs-operator
containers:
- name: rook-nfs-operator
- image: rook/nfs:v0.9.0
+ image: rook/nfs:v0.9.1
args: ["nfs", "operator"]
env:
- name: POD_NAME
