mds: create EC pool as secondary pool #8452

Merged: 1 commit, Dec 7, 2021
4 changes: 4 additions & 0 deletions Documentation/ceph-filesystem-crd.md
@@ -86,6 +86,8 @@ spec:
replicated:
size: 3
dataPools:
- replicated:
size: 3
- erasureCoded:
dataChunks: 2
codingChunks: 1
@@ -94,6 +96,8 @@
activeStandby: true
```

**IMPORTANT**: For erasure coded pools, we have to create a replicated pool as the default data pool and an erasure-coded pool as a secondary pool.
Member:

Could we get confirmation from the CephFS team about the setattr that was suggested before? Is it not needed?

Contributor:

AFAIU, if you want a CephFS directory tree's data to be stored in a specific data pool (an EC pool in this scenario), then you'd need to set the dir.layout.pool attribute, as described here: https://docs.ceph.com/en/latest/cephfs/file-layouts/#adding-a-data-pool-to-the-file-system

@batrick @kotreshhr ^^
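
For readers following the thread, here is a minimal sketch of the file-layout step that comment refers to. It assumes the filesystem and pool names used in the examples below (`myfs-ec`, `myfs-ec-data1`) and a CephFS mount at `/mnt/cephfs`; the names and path are illustrative, not part of this PR.

```bash
# The pool must already be attached to the filesystem as a data pool.
# Rook should already do this for every pool listed under dataPools; the
# manual equivalent from the Ceph docs is:
ceph fs add_data_pool myfs-ec myfs-ec-data1

# Pin a directory tree to that pool via the file-layout xattr; new files
# created under /mnt/cephfs/ecdir are then stored in myfs-ec-data1.
setfattr -n ceph.dir.layout.pool -v myfs-ec-data1 /mnt/cephfs/ecdir
```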

Contributor (author):

This is what we wanted to confirm: while testing, I didn't run any file-layout command, and we can see here that the data is in the erasure-coded pool.
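
One illustrative way to check this from the Rook toolbox, assuming the pool name used in the examples below (not output from this PR's testing):

```bash
# Per-pool usage: after writing data through the CephFS volume, the
# STORED/OBJECTS counters of the erasure-coded data pool should grow.
ceph df detail

# Optionally list a few objects directly in the EC data pool.
rados -p myfs-ec-data1 ls | head
```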


(These definitions can also be found in the [`filesystem-ec.yaml`](https://github.com/rook/rook/blob/{{ branchName }}/deploy/examples/filesystem-ec.yaml) file.
Also see an example in the [`storageclass-ec.yaml`](https://github.com/rook/rook/blob/{{ branchName }}/deploy/examples/csi/cephfs/storageclass-ec.yaml) for how to configure the volume.)
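
A quick way to confirm the resulting pool layout, sketched here under the assumption that the filesystem was created from `filesystem-ec.yaml` (filesystem name `myfs-ec`):

```bash
# The first data pool listed is the filesystem's default data pool; with the
# manifest above it should be the replicated pool, followed by the EC pool.
ceph fs ls

# Show the pool types (replicated vs. erasure) for the filesystem's pools.
ceph osd pool ls detail | grep myfs-ec
```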

5 changes: 4 additions & 1 deletion deploy/examples/csi/cephfs/storageclass-ec.yaml
@@ -14,7 +14,10 @@ parameters:

# Ceph pool into which the volume shall be created
# Required for provisionVolume: "true"
pool: myfs-ec-data0

# For erasure coded pools, we have to create a replicated pool as the default data pool and an erasure-coded
# pool as a secondary pool.
pool: myfs-ec-data1

# The secrets contain Ceph admin credentials. These are generated automatically by the operator
# in the same namespace as the cluster.
4 changes: 4 additions & 0 deletions deploy/examples/filesystem-ec.yaml
@@ -17,6 +17,10 @@ spec:
size: 3
# The list of data pool specs
dataPools:
# For erasure coded pools, we have to create a replicated pool as the default
# data pool and an erasure-coded pool as a secondary pool.
- replicated:
size: 3
# You need at least three `bluestore` OSDs on different nodes for this config to work
- erasureCoded:
dataChunks: 2
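
For completeness, an assumed minimal sequence for trying these example manifests against a running Rook operator and cluster (paths as referenced above; the commands are illustrative and not part of the diff):

```bash
# Create the erasure-coded filesystem, then the CephFS storage class that
# points provisioned volumes at the secondary (erasure-coded) data pool.
kubectl create -f deploy/examples/filesystem-ec.yaml
kubectl create -f deploy/examples/csi/cephfs/storageclass-ec.yaml
```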