pg_autoscaler configuration is needed #14075
Comments
The pool-specific settings can be specified today in the spec:
parameters:
  pg_autoscale_mode: "on"
  bulk: "true"
I believe this covers all cases you are suggesting, except the global setting. |
Re: 256 PGs is not excessive unless there are fewer than 3 OSDs, fwiw, modulo the number of co-resident pools. |
So do you want to turn off the autoscaler on the auto-created pools? I think you can probably do it from the Rook toolbox. Or if there is a specific auto-created pool that is concerning, we can have an env variable setting for it |
If one has to do common things from a shell by hand instead of having IaC, why have Rook at all? |
Having such a configuration is by design, to keep Rook productive to work with; I don't see a reason to change the internal design. Btw, for the .mgr pool you can specify it here: https://github.com/rook/rook/blob/master/deploy/examples/cluster-test.yaml#L57 and similarly for the .rgw pools |
What about |
Okay, I agree we need to have a setting. Do you prefer an individual spec field for each pre-defined pool, or just a common spec to change the PGs of all predefined pools? |
Ideally, to mirror the behavior available in non-Rook Ceph, the ability to set both the global config default value and per-pool pg_num for any pool Rook/Ceph will deploy. |
All of the rgw metadata pools are created with the settings from the CephObjectStore CR under the metadataPool settings, so I expect all of the pools being created can be controlled with CRs today. |
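As an illustrative sketch (store name, sizes, and the EC profile are placeholders), the same `parameters` map is available under the object store's pool sections:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store            # placeholder name
  namespace: rook-ceph
spec:
  # Settings here are applied to the store's metadata pools
  # (control, meta, log, index, non-ec, ...).
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
    parameters:
      pg_autoscale_mode: "on"
      bulk: "true"
  # Settings here are applied to the buckets.data pool.
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 4
      codingChunks: 2
  gateway:
    port: 80
    instances: 1
```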
We should be sure that's documented. |
Are you suggesting to add the names of all the metadata pools in this comment, or perhaps in this doc? I was hoping the "metadata pools" could be self-explanatory to most users and that level of detail wouldn't be needed in the docs, but it certainly could be added if helpful. |
The pool names can vary by release -- we've seen that with RGW -- and with Rook a user might not predict them in advance, so for RGW at least, just documenting something like "all of the rgw metadata pools are created with the settings from the CephObjectStore CR under the metadataPool settings, so all of the pools being created can be controlled with CRs today" would help. In this context the index pool is separate, I hope? Since it typically warrants individual planning. |
You're referring to the .rgw.root pool? This one does get some special treatment, but if you have multiple object stores, they should have the same metadataPool settings or else the .rgw.root pool would attempt to apply the different settings. If there are multiple object stores, at least with v1.14 this problem can now be remedied with the shared pools for object stores (https://rook.io/docs/rook/v1.14/Storage-Configuration/Object-Storage-RGW/object-storage/#create-local-object-stores-with-shared-pools), where .rgw.root can be explicitly configured. |
I believe the features you're requesting can be configured in Rook today.
This is possible via Rook config options here (https://rook.io/docs/rook/latest-release/CRDs/Cluster/ceph-cluster-crd/#ceph-config), or by using the
Travis did a much better job of explaining these points here: #14075 (comment)
As added notes, Ceph docs for these pool-focused params are here: https://docs.ceph.com/en/latest/rados/operations/pools/
And Rook docs about using
These configs should allow modifying pre-existing pools without the need to use the toolbox CLI, which as you mentioned is a non-ideal workflow in the context that Rook is supposed to be a desired-state system. And it is a good point that something that might help other Rook users is to add some documentation sections that show users how to configure pools with advanced features like these using the |
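For the global default specifically, here is a minimal sketch of the CephCluster CRD option linked above, assuming the spec.cephConfig map described there (the value is only an example):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ... mon, mgr, storage, and other settings elided ...
  cephConfig:
    global:
      # Roughly equivalent to `ceph config set global osd_pool_default_pg_autoscale_mode off`
      osd_pool_default_pg_autoscale_mode: "off"   # or "warn" / "on"
```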
I think the best "answer" for these is the object store shared pools feature that was added recently, mentioned here: #14075 (comment)
For other pools like |
No, e.g. ceph-objectstore.rgw.buckets.index
Insufficient PGs in this pool significantly bottleneck RGW operations -- this is sadly quite common.
|
This is an area I'm curious to hear more about in the context of shared pools. Currently, I think we assume shared pools are broken into 2 categories:
* metadata (must be replicated, cannot be erasure coded)
* data (can be replica/ec)
But I wondered when we were implementing if there would be any users who need additional breakdowns. Perhaps this could also make sense:
* metadata (replica with count X)
* index (replica with count Y)
* data (whatever you want)
Is this part of what you're expressing, @anthonyeleven ? |
And hopefully one day we can configure multiple (probably not more than a handful of) data (bucket) pools within a single objectstore, which would probably want to share a single index pool.
I guess a shared pool makes sense if someone has like dozens of bucket pools, especially given that Rook creates a CRUSH rule for each and every pool. I can't see that I personally would ever need to do that.
The rgw.index pool stores RGW S3 / Swift bucket indexes. With smaller objects and/or buckets with a lot of objects in them, this is often an RGW service's bottleneck. To work well, the index pool needs:
* to be on SSDs
* preferably NVMe of course
* with a decent number of PGs, since both the OSD and the PG code have serializations that limit performance. On SATA SSDs I'd aim for a PG ratio of 200-250, on NVMe SSDs easily 300. Unless forced, the pg_autoscaler will only provision a fraction of these numbers.
* to be across a decent number of OSDs. 3 isn't a decent number. 12 is maybe a start. As a cluster grows so should the index pool, so OSD nodes that have 1-2 SSDs in them for the index pool scale well, and we use deviceclasses to segregate the OSDs if they aren't all TLC
* The SSDs don't have to be big, this is all omap data
|
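To make the above concrete, a hedged sketch of pinning the index/metadata pool to an SSD device class with an explicit PG count; the names and numbers are placeholders, and pg_num should be derived from your own OSD count and target PG ratio:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store            # placeholder name
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
    deviceClass: ssd        # keep the omap-heavy index/metadata pools off HDDs
    parameters:
      pg_autoscale_mode: "off"
      pg_num: "256"         # placeholder; size for the OSD count per the ratios above
      pgp_num: "256"
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 4
      codingChunks: 2
  gateway:
    port: 80
    instances: 1
```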
What is the scenario to have a single object store with multiple data pools? Perhaps creating separate object stores that have their own data pools meets the same requirements?
Thanks for the context on the index pool. Sounds like we need a separate option for its configuration from other metadata pools. |
It doesn't -- we discussed this in the other issue thread.
Multiple Rook objectstores require separate RGW instances and thus endpoints to use different types of storage.
By having multiple bucket pools within a single Rook objectstore, and thus available to a common set of RGWs, clients or Lua scripts or RGW per-user config can direct uploads to different S3 storage classes and thus pools/media according to needs. For example, one might have:
* default bucket pool on TLC SSDs with 3x replication
* bucket pool for large, read-mostly objects on QLC SSDs with replication or EC
* bucket pool for large, performance-tolerant objects on HDDs with EC
If a given RGW endpoint can't access all of the storageclasses, that limits our ability to steer uploads via Lua, and forces clients who may have mixed use cases to juggle multiple endpoints.
Re: "Sounds like we need a separate option for its configuration from other metadata pools" -- absolutely. And maybe RGW logs too. The other minor RGW pools don't get much data.
|
Thanks for the background on separate data pools for the same store. Looks like we need to consider the storageclasses capability of rgw (https://docs.ceph.com/en/latest/radosgw/placement/#storage-classes) to allow placement targets to point to different data pools. |
This issue has delved into several different topics. @anthonyeleven would you mind opening new issues for the separate topics? We can keep this issue focused on the small doc clarification for PG management of the rgw pools. |
#12824 (Multiple RGW pools and storageclasses) already covers the storageclasses.
|
Thanks, I missed the connection there! |
Based on my interpretation of these requirements, I don't think they explicitly suggest that the index pool can't also provide storage for other metadata (in a shared pools case). In the interest of simplicity, I think it would make sense for users to configure the "metadata" pool with solid state and many PGs to give the index the best performance, and then the other metadata can also reap those benefits.
The only reason I can imagine right now that someone might want to separate the index from "other metadata" is to save money by buying as few metadata NVMe drives as possible. But I also can't imagine that the other metadata accounts for a large enough percentage of data for it to make much difference.
If that is all correct, then I think what we have today with shared pools can meet these needs. If not, then we can consider splitting index and non-index metadata pools (similar to OSD md and db).
And this is obviously separate from needing to develop and implement support for multiple RGW "s3 storage classes". |
Pools can share OSDs, so it's not like anyone would save anything.
Trying to lump the minor RGW pools into the index pool would be a bad idea. I dunno what RADOS object names are used, but Ceph for sure will not be expecting that.
```
***@***.*** ~]# kubectl exec $(kubectl get pods -n rook-ceph | grep rook-ceph-tools | cut -f1 -d\ ) -i -t -n rook-ceph -- /bin/bash
***@***.*** /]$ ceph osd dump | grep pool
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 1296 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 2 'rbd-nvme-ssd' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 2048 pgp_num 2048 autoscale_mode off last_change 384295 lfor 0/156660/158816 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm lz4 compression_mode aggressive application rbd
pool 12 'ceph-objectstore.rgw.control' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 64674 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 13 'ceph-objectstore.rgw.meta' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 64679 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 14 'ceph-objectstore.rgw.log' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 64684 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 15 'ceph-objectstore.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode off last_change 153680 lfor 0/0/153678 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 16 'ceph-objectstore.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 64695 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 17 'ceph-objectstore.rgw.otp' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 64709 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 18 '.rgw.root' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 64705 flags hashpspool stripe_width 0 application rook-ceph-rgw
pool 19 'ceph-objectstore.rgw.buckets.data' erasure profile ceph-objectstore.rgw.buckets.data_ecprofile size 6 min_size 5 crush_rule 10 object_hash rjenkins pg_num 8192 pgp_num 8192 autoscale_mode off last_change 384299 lfor 0/156300/165341 flags hashpspool,ec_overwrites stripe_width 16384 application rook-ceph-rgw
pool 21 'ceph-objectstore.rgw.buckets.data.hdd' erasure profile ceph-objectstore.rgw.buckets.data_ecprofile_hdd size 6 min_size 5 crush_rule 11 object_hash rjenkins pg_num 8192 pgp_num 8192 autoscale_mode off last_change 167193 lfor 0/0/164453 flags hashpspool,ec_overwrites stripe_width 16384 application rook-ceph-rgw
pool 22 'rbd-sata-hdd' replicated size 3 min_size 2 crush_rule 12 object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode off last_change 384298 lfor 0/0/344532 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm lz4 compression_mode aggressive application rbd
```
|
The shared pool feature does not "lump pools together" as it seems you are thinking. With shared pools, objects are separated by namespaces (instead of pools) to avoid name collisions. My assertion is that with shared pools, there is no need to separate index and "minor" pools because I can't find evidence of substantial benefit to doing so when both index and minor metadata can share the "index-optimized" pool easily. |
I don't know how one would direct Ceph to do so.
|
That is the feature that is provided by shared pools: https://rook.io/docs/rook/v1.14/Storage-Configuration/Object-Storage-RGW/object-storage/#create-local-object-stores-with-shared-pools |
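Roughly, per those docs, the shared-pools wiring looks like the sketch below; the pool and store names are placeholders, and the field names (application, sharedPools, metadataPoolName, dataPoolName) are recalled from the linked v1.14 docs and should be verified there:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rgw-meta-pool       # placeholder shared metadata pool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
  deviceClass: ssd
  parameters:
    pg_autoscale_mode: "off"
    pg_num: "256"           # placeholder
  application: rgw          # mark the pool for rgw use rather than rbd
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store            # placeholder name
  namespace: rook-ceph
spec:
  sharedPools:
    metadataPoolName: rgw-meta-pool
    dataPoolName: rgw-data-pool   # placeholder; a data pool created the same way
  gateway:
    port: 80
    instances: 1
```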
https://github.com/rook/rook/pull/13766/files
While the module itself is always on, upstream does not force us to set it to be active. If they did that would likely be the last straw for those who clamor for a StableCeph fork. We very much need the means to configure it. Notably, getting it to do what it's supposed to requires prognostication, and it is very, very prone to undersizing pools at significant performance cost - especially since it is not media-aware.
* `bulk` mode to minimize impactful PG splitting later on
* `osd_pool_default_pg_autoscale_mode` `on` or `off`
* `off`, `on`, `warn` for each pool
@travisn