Describe the feature you'd like to have

I would like to be able to connect to different Ceph clusters based on node topology labels, rather than being restricted to a single clusterID per StorageClass. This feature should allow a clusterID to be defined within each pool entry of topologyConstrainedPools under a common StorageClass.
What is the value to the end user? (why is it a priority?)
This feature would enable end users who manage multiple Ceph clusters across various topologies to utilize a single StorageClass configuration.
How will we know we have a good solution? (acceptance criteria)
Users can specify multiple Ceph clusters within a single StorageClass, associated with different topology labels. ceph-csi can dynamically determine the correct Ceph cluster to interact with based on the node's topology label during volume provisioning.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
parameters:
  ...
  topologyConstrainedPools: |
    [
      {"clusterID": "east",
       "poolName": "pool0",
       "dataPool": "ec-pool0",   # optional, erasure-coded pool for data
       "domainSegments": [
         {"domainLabel": "region", "value": "east"},
         {"domainLabel": "zone", "value": "zone1"}]},
      {"clusterID": "east",
       "poolName": "pool1",
       "dataPool": "ec-pool1",   # optional, erasure-coded pool for data
       "domainSegments": [
         {"domainLabel": "region", "value": "east"},
         {"domainLabel": "zone", "value": "zone2"}]},
      {"clusterID": "west",
       "poolName": "pool2",
       "dataPool": "ec-pool2",   # optional, erasure-coded pool for data
       "domainSegments": [
         {"domainLabel": "region", "value": "west"},
         {"domainLabel": "zone", "value": "zone1"}]}
    ]
```
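To illustrate the requested behavior, here is a minimal Go sketch of how a provisioner could select a clusterID by matching a node's topology labels against the domain segments of each pool entry. This is not ceph-csi's actual implementation; the type and function names (`topologyPool`, `matchPool`) are hypothetical, and only the JSON field names come from the example above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// domainSegment and topologyPool mirror the topologyConstrainedPools
// JSON shown above (hypothetical names; ceph-csi's internals may differ).
type domainSegment struct {
	DomainLabel string `json:"domainLabel"`
	Value       string `json:"value"`
}

type topologyPool struct {
	ClusterID      string          `json:"clusterID"`
	PoolName       string          `json:"poolName"`
	DataPool       string          `json:"dataPool"`
	DomainSegments []domainSegment `json:"domainSegments"`
}

// matchPool returns the first pool whose domain segments all match the
// node's topology labels, along with whether a match was found.
func matchPool(pools []topologyPool, nodeLabels map[string]string) (topologyPool, bool) {
	for _, p := range pools {
		matched := true
		for _, seg := range p.DomainSegments {
			if nodeLabels[seg.DomainLabel] != seg.Value {
				matched = false
				break
			}
		}
		if matched {
			return p, true
		}
	}
	return topologyPool{}, false
}

func main() {
	// A trimmed version of the StorageClass parameter above.
	raw := `[{"clusterID":"east","poolName":"pool0",
	          "domainSegments":[{"domainLabel":"region","value":"east"},
	                            {"domainLabel":"zone","value":"zone1"}]},
	         {"clusterID":"west","poolName":"pool2",
	          "domainSegments":[{"domainLabel":"region","value":"west"},
	                            {"domainLabel":"zone","value":"zone1"}]}]`
	var pools []topologyPool
	if err := json.Unmarshal([]byte(raw), &pools); err != nil {
		panic(err)
	}

	// Labels as they would appear on the provisioning node.
	labels := map[string]string{"region": "west", "zone": "zone1"}
	if p, ok := matchPool(pools, labels); ok {
		fmt.Println(p.ClusterID, p.PoolName) // prints: west pool2
	}
}
```

The selected clusterID would then be used to look up connection details in the ceph-csi config, instead of taking a single clusterID from the StorageClass.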
Additional context
https://ceph-storage.slack.com/archives/C05522L7P60/p1715118579305879