Try to make import and export easier #14077
Comments
Update: look for the mon store metadata values and update them using the Python script.
So the new design will look like:
Installation:
Update:
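A minimal sketch of this proposal: instead of the user copy-pasting the export, the Python script pushes its JSON output into a mon config-key that Rook can later read. The key name `rook-external/cluster-data` is a hypothetical choice here, not an agreed-upon interface.

```python
# Sketch: store the export script's JSON output in the mon store via
# `ceph config-key set`, so the consumer side can fetch it instead of
# relying on manual copy-paste. Key name is a placeholder.
import json
import subprocess

MON_STORE_KEY = "rook-external/cluster-data"  # hypothetical key name


def store_cmd(exported, key=MON_STORE_KEY):
    """Build the `ceph config-key set` invocation for the exported data."""
    return ["ceph", "config-key", "set", key, json.dumps(exported)]


def store_export(exported, key=MON_STORE_KEY):
    """Run the command on a node that has the ceph CLI and an admin keyring."""
    subprocess.run(store_cmd(exported, key), check=True)
```

Reading it back on the consumer side would be the symmetric `ceph config-key get` call.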
I like this idea; it just requires a Ceph change so that access to certain mon store data can be granted to a specific keyring.
We don't want the key in the CR. Instead, the key should be stored directly in the mon store.
After Rook connects, it would need to load the JSON from Ceph and apply it to the consumer cluster, right?
So Rook would need to check for changes to the mon store setting and apply any updates when the JSON changes?
Yes, the cluster will watch for mon-config updates and will create the resources, somewhat like #14076
Exactly, it would watch for the changes in it.
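The watch described above could be prototyped as a simple poll-and-compare loop: fetch the config-key payload, hash it, and reconcile only when the hash changes. The `fetch` and `reconcile` callables are placeholders for whatever the operator wires in (e.g. a `ceph config-key get` call and the controller's reconcile entry point).

```python
# Sketch of "watch the mon store and reconcile on change": poll the
# payload and trigger reconcile only when its hash differs.
import hashlib
import time


def check_once(fetch, reconcile, last_digest):
    """Fetch the payload once; reconcile if it changed. Returns new digest."""
    payload = fetch()
    digest = hashlib.sha256(payload.encode()).hexdigest()
    if digest != last_digest:
        reconcile(payload)
        return digest
    return last_digest


def watch_config_key(fetch, reconcile, interval=30):
    """Poll forever; a real operator would fold this into its reconcile loop."""
    digest = None
    while True:
        digest = check_once(fetch, reconcile, digest)
        time.sleep(interval)
```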
For the Ceph-side changes, I'm thinking of a design where the Ceph config has user authentication based on the user keys.
I believe it should be updated so that the current key, which is optional, is marked as a non-optional field.
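A hedged sketch of what the requested Ceph-side restriction might look like: a dedicated keyring whose mon caps only permit reading config-keys under a specific prefix. The client name and key prefix are hypothetical, and the exact cap grammar the Ceph team settles on in the tracker may differ.

```shell
# Hypothetical: a restricted keyring that can only read config-keys
# under the "rook-external/" prefix (cap syntax may differ in the
# final Ceph design).
ceph auth get-or-create client.rook-external \
  mon 'allow command "config-key get" with key prefix rook-external/'
```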
Let's open a Ceph tracker with the feature request, and just summarize the requirement there. We won't gain much by trying to design it here, since the Ceph team would own the design. The feature request is just to have some mon store settings that are only available depending on the keyring.
Added a Ceph tracker: https://tracker.ceph.com/issues/65583
Offline discussion: we had two new proposals for this feature.
Is this a bug report or feature request?
Currently, the user runs a Python script on the RHCS cluster to export the data, manually copies it from the RHCS terminal, and then pastes it into the Kubernetes terminal.
What if it were possible to expose the data directly at a server endpoint, so the k8s cluster could auto-fetch the details from there?
The possible design would be:
1) Run the Python script on the RHCS cluster
1.1) The Python script will store/output the data
1.2) The Python script should expose the data at a server endpoint
2) The k8s cluster will watch for the event that the server has an update to the data, and will reconcile the Ceph controller
But the important point to think about is where the server will run, so that both the RHCS and k8s clusters can access the endpoint. Would it be possible to get the data like any other ceph or radosgw-admin command?
P.S.: maybe we can just read the file with the JSON output from a Ceph cluster node?
Environment: