Support for multiple Ceph clusters with the same StorageClass based on node topology labels #4611

Open
diogenxs opened this issue May 10, 2024 · 0 comments


Describe the feature you'd like to have

I would like to be able to connect to different Ceph clusters based on node topology labels, rather than being restricted to a single clusterID per StorageClass. This feature would allow clusterID to be defined per pool entry in topologyConstrainedPools, instead of once at the StorageClass level.

What is the value to the end user? (why is it a priority?)

This feature would let end users who manage multiple Ceph clusters across different failure domains use a single StorageClass, instead of maintaining a separate StorageClass per cluster.

How will we know we have a good solution? (acceptance criteria)

- Users can specify multiple Ceph clusters within a single StorageClass, each associated with different topology labels.
- ceph-csi dynamically determines the correct Ceph cluster to interact with, based on the node's topology labels during volume provisioning.

apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
  ...
  # "dataPool" is optional: an erasure-coded pool holding the data
  topologyConstrainedPools: |
    [
      {
        "clusterID": "east",
        "poolName": "pool0",
        "dataPool": "ec-pool0",
        "domainSegments": [
          {"domainLabel": "region", "value": "east"},
          {"domainLabel": "zone", "value": "zone1"}
        ]
      },
      {
        "clusterID": "east",
        "poolName": "pool1",
        "dataPool": "ec-pool1",
        "domainSegments": [
          {"domainLabel": "region", "value": "east"},
          {"domainLabel": "zone", "value": "zone2"}
        ]
      },
      {
        "clusterID": "west",
        "poolName": "pool2",
        "dataPool": "ec-pool2",
        "domainSegments": [
          {"domainLabel": "region", "value": "west"},
          {"domainLabel": "zone", "value": "zone1"}
        ]
      }
    ]
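The selection logic the driver would need can be sketched as follows. This is an illustrative Python sketch, not ceph-csi code, and the function name `select_pool` is hypothetical: given the node's topology labels, return the first topologyConstrainedPools entry whose domainSegments all match, and use its clusterID when provisioning.

```python
# Illustrative sketch of the proposed selection logic (hypothetical, not ceph-csi code).
# Given a node's topology labels, pick the first entry in topologyConstrainedPools
# whose domainSegments all match the node, and return that entry.

def select_pool(topology_constrained_pools, node_labels):
    """Return the first pool entry whose every domain segment matches the node."""
    for pool in topology_constrained_pools:
        segments = pool.get("domainSegments", [])
        if all(node_labels.get(s["domainLabel"]) == s["value"] for s in segments):
            return pool
    return None  # no matching pool: provisioning for this node should fail


pools = [
    {"clusterID": "east", "poolName": "pool0",
     "domainSegments": [{"domainLabel": "region", "value": "east"},
                        {"domainLabel": "zone", "value": "zone1"}]},
    {"clusterID": "west", "poolName": "pool2",
     "domainSegments": [{"domainLabel": "region", "value": "west"},
                        {"domainLabel": "zone", "value": "zone1"}]},
]

# A node labeled region=west, zone=zone1 resolves to the "west" cluster.
node_labels = {"region": "west", "zone": "zone1"}
selected = select_pool(pools, node_labels)
print(selected["clusterID"], selected["poolName"])  # west pool2
```

The key difference from today's behavior is only that clusterID comes from the matched pool entry rather than from the StorageClass-level parameter.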

Additional context

https://ceph-storage.slack.com/archives/C05522L7P60/p1715118579305879
