
subvolumegroup: add new crd #9384

Merged: 1 commit merged into rook:master on Dec 21, 2021

Conversation

@leseb (Member) commented Dec 10, 2021

Description of your changes:

This introduces a new CRD that adds the ability to create subvolume groups for a given Ceph filesystem volume. Typically, the name of the volume is the name of the filesystem created by Rook.

Closes: #7036
Signed-off-by: Sébastien Han <seb@redhat.com>

Which issue is resolved by this Pull Request:
Resolves #7036
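
For illustration, here is a minimal sketch of what a CephFilesystemSubVolumeGroup manifest could look like. The names `group-a`, `rook-ceph`, and `myfs` are made up, and the spec field's name was still being debated in review (see the naming thread below):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
metadata:
  # name of the subvolume group to create in Ceph (illustrative)
  name: group-a
  # namespace of the Rook cluster (illustrative)
  namespace: rook-ceph
spec:
  # the CephFilesystem to create the group in; in Ceph terms this is
  # the "volume" name (the exact field name is discussed in review below)
  filesystemName: myfs
```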

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Skip Tests for Docs: Add the flag for skipping the build if this is only a documentation change. See here for the flag.
  • Skip Unrelated Tests: Add a flag to run tests for a specific storage provider. See test options.
  • Reviewed the developer guide on Submitting a Pull Request
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.
  • Pending release notes updated with breaking and/or notable changes, if necessary.
  • Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
  • Code generation (make codegen) has been run to update object specifications, if necessary.

Resolved review threads (outdated): Documentation/ceph-fs-subvolumegroup.md, PendingReleaseNotes.md, pkg/operator/ceph/file/subvolumegroup/controller.go (×2)
@BlaineEXE (Member) commented:
I like the breakdown here: #7036 (comment)

I still have concerns about the increased complexity that multiple interacting CRDs/resources bring. But if we set dependencies between CephFilesystem and any referencing CephFilesystemSubVolumeGroup, then it at least still protects during deletion.

TBH, I don't think most of the reasons from the link are definitive for me. At the end of the day, subvolumes are tied to a specific CephFS, and having separate resources is its own kind of pain when it comes to deletion dependencies. Also, we now have a new pattern idea where we can name items in a list to identify them better.

However, this one is particularly compelling:

> for external mode, one could query the creation of a subvolume without touching the main cephfilesystem, which is shared by all consumers

Do we think that having this as a separate CRD might improve our multi-tenancy story? Do you think a storage admin might want someone like a sub-admin to be able to create subvolume groups?

@BlaineEXE (Member) commented:
I don't see this type added as a top-level dependent of CephCluster here: pkg/operator/ceph/cluster/dependents.go.

As far as making sure that a CephFilesystem with referencing CephFilesystemSubVolumeGroups is protected as a dependency, do we want to handle dependents now or in a follow-up PR?

@leseb (Member, Author) commented Dec 13, 2021

> Do we think that having this as a separate CRD might improve our multi-tenancy story? Do you think a storage admin might want someone like a sub-admin to be able to create subvolume groups?

Yes, such a scenario exists where a sub-admin consumes an external cluster. I don't know how the RBACs are set, but not touching the CephFilesystem is highly desirable.
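
To make that multi-tenancy angle concrete, a hypothetical namespaced Role (every name below is made up) could let a sub-admin manage subvolume groups without granting any access to the shared CephFilesystem:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: subvolumegroup-editor  # hypothetical
  namespace: rook-ceph         # illustrative
rules:
  # Grant access to CephFilesystemSubVolumeGroup resources only.
  # No rule covers cephfilesystems, so the shared filesystem CR
  # remains untouchable for this sub-admin.
  - apiGroups: ["ceph.rook.io"]
    resources: ["cephfilesystemsubvolumegroups"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```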

@leseb (Member, Author) commented Dec 13, 2021

> I don't see this type added as a top-level dependent of CephCluster here: pkg/operator/ceph/cluster/dependents.go.
>
> As far as making sure that a CephFilesystem with referencing CephFilesystemSubVolumeGroups is protected as a dependency, do we want to handle dependents now or in a follow-up PR?

It's still a draft, so I can certainly handle this here.


Review thread on Documentation/ceph-fs-subvolumegroup.md, quoting:

### CephFilesystemSubVolumeGroup spec

- `volumeName`: The name of the Ceph Filesystem volume.
Member:
I'm not sure there is a best choice for what to name this. If it is filesystem or filesystemName, it doesn't match the Ceph docs as closely. If it's volumeName, it doesn't match the Rook docs as closely.

I think it may be slightly easier for users to keep the Ceph terminology so that they can glean information more easily from the Ceph docs. However, if we do that, I think we should be explicit here in the docs that in Rook this is the metadata name of a CephFilesystem resource.

Member:

If this is expected to be the filesystem CR name, it really feels like it should be named filesystemName. Then in this doc we can say that a Rook filesystem corresponds to a Ceph volume.

Member Author:

I was hesitant to rename this, but now that I have a better understanding (hopefully), I feel we should rename it to filesystemName so it matches better with what we have in Rook and, like Travis said, explain what this means in Ceph terminology.
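
A sketch of how the renamed field and its doc note could read, assuming the rename lands as discussed (`myfs` is illustrative):

```yaml
spec:
  # filesystemName is the metadata name of the CephFilesystem CR.
  # In Ceph terminology, a Rook filesystem corresponds to a Ceph
  # "volume", so this names the volume the subvolume group belongs to.
  filesystemName: myfs
```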

Resolved review threads: deploy/examples/subvolumegroup.yaml (outdated), pkg/apis/ceph.rook.io/v1/types.go (outdated, ×3), pkg/operator/ceph/cluster/dependents.go, Documentation/ceph-fs-subvolumegroup.md (outdated), pkg/operator/ceph/file/subvolumegroup/controller.go (outdated)


Resolved review thread: Documentation/ceph-fs-subvolumegroup.md
@leseb (Member, Author) commented Dec 20, 2021

Quota support is coming in ceph/ceph#44347.
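
Purely speculative, but once that Ceph change lands, the CR could eventually grow a quota knob along these lines (the field name and type are assumptions, not part of this PR):

```yaml
spec:
  filesystemName: myfs  # illustrative
  # hypothetical: size quota for the subvolume group, applied once
  # quota support from ceph/ceph#44347 is available
  quota: 10Gi
```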

Resolved review threads (outdated): Documentation/ceph-fs-subvolumegroup.md (×4), pkg/operator/ceph/file/subvolumegroup/controller.go (×2)
@leseb force-pushed the fix-7036 branch 2 times, most recently from e82da10 to 3e44966, on December 21, 2021 at 09:01.
@leseb dismissed travisn's stale review on December 21, 2021 at 14:25: comments have been addressed.

Final commit message:

This introduces a new CRD that adds the ability to create subvolume groups for a given Ceph filesystem volume. Typically, the name of the volume is the name of the filesystem created by Rook.

Closes: rook#7036
Signed-off-by: Sébastien Han <seb@redhat.com>
@BlaineEXE merged commit bb0d0d3 into rook:master on Dec 21, 2021.
@leseb deleted the fix-7036 branch on December 21, 2021 at 17:24.
mergify bot added a commit that referenced this pull request on Dec 21, 2021.
Linked issue: Add support to create multiple subvolume groups in a filesystem (#7036)

4 participants