rgw flags set to storeName even for multisite deployment #9248

Closed
olivierbouffet opened this issue Nov 25, 2021 · 4 comments

@olivierbouffet (Contributor):

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:
Hello,

When creating a multisite object store, the rgw_zone and rgw_zonegroup flags are set to the object store name instead of the zone and zonegroup names.

How to reproduce it (minimal and precise):
Create a realm:

apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: realm-1
  namespace: rook-ceph
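
The manifest can be applied and the CR checked with something like the following (04_realm-1.yaml is just an illustrative filename):

kubectl -n rook-ceph apply -f 04_realm-1.yaml
kubectl -n rook-ceph get cephobjectrealm realm-1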

Create a zonegroup:

apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: zonegroup-1
  namespace: rook-ceph
spec:
  realm: realm-1
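
Once the realm and zonegroup CRs have been reconciled, the multisite configuration can be inspected from the Rook toolbox, for example:

radosgw-admin realm list
radosgw-admin zonegroup get --rgw-zonegroup=zonegroup-1

(Depending on the Rook version, these may only show up in Ceph after the object store below is created and the period is committed.)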

Create a zone:

apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: zone-1
  namespace: rook-ceph
spec:
  zoneGroup: zonegroup-1
  metadataPool:
    replicated:
      size: 1
      requireSafeReplicaSize: false
    preservePoolsOnDelete: false
  dataPool:
    replicated:
      size: 1
      requireSafeReplicaSize: false
    preservePoolsOnDelete: false
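
The zone can likewise be checked from the toolbox once it exists in Ceph:

radosgw-admin zone get --rgw-zone=zone-1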

Create a multisite object store:

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: object-multi-1
  namespace: rook-ceph # namespace:cluster
spec:
  gateway:
    port: 80
    # securePort: 443
    instances: 1
  zone:
    name: zone-1
  # service endpoint healthcheck
  healthCheck:
    bucket:
      disabled: false
      interval: 60s
    # Configure the pod liveness probe for the rgw daemon
    livenessProbe:
      disabled: false
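
To confirm the store and its RGW daemon come up, something like the following can be used (app=rook-ceph-rgw is the label Rook normally applies to RGW pods):

kubectl -n rook-ceph get cephobjectstore object-multi-1
kubectl -n rook-ceph get pods -l app=rook-ceph-rgw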

Then get the Ceph config dump:

[root@rook-ceph-tools-5b54fb98c-k5qk9 /]# ceph config dump | grep rgw
    client.rgw.object.multi.1.a        advanced  rgw_enable_usage_log                   true
    client.rgw.object.multi.1.a        advanced  rgw_log_nonexistent_bucket             true
    client.rgw.object.multi.1.a        advanced  rgw_log_object_name_utc                true
    client.rgw.object.multi.1.a        advanced  rgw_zone                               object-multi-1                            *
    client.rgw.object.multi.1.a        advanced  rgw_zonegroup                          object-multi-1                            *
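
For reference, with the zone and zonegroup defined above, those two options would be expected to show the zone and zonegroup names rather than the store name, roughly:

    client.rgw.object.multi.1.a        advanced  rgw_zone                               zone-1
    client.rgw.object.multi.1.a        advanced  rgw_zonegroup                          zonegroup-1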

Environment:

  • OS (e.g. from /etc/os-release): Debian 10 (buster)
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod): v1.7.8
  • Storage backend version (e.g. for ceph do ceph -v): ceph version 16.2.6
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
@travisn (Member) commented Nov 29, 2021:

@olivierbouffet Before this issue is closed, please open a PR against master. As commented here, it looks like this was only fixed in the 1.7 branch.

@olivierbouffet (Contributor, Author) commented:

Hi @travisn, apparently my PR has also been merged to master.
Sorry for all the confusion about my PR.

@travisn (Member) commented Nov 29, 2021:

> Hi @travisn, apparently my PR has also been merged to master. Sorry for all the confusion about my PR.

@olivierbouffet In which PR was it merged to master? I'm not seeing it, thanks.

@olivierbouffet
Copy link
Contributor Author

Hi @travisn, I think in this one: #9261

@thotz closed this as completed Dec 9, 2021