Replies: 3 comments
-
Is this due to concurrent I/O? There is a way to leverage Storage Classes to create new chunks on SSD-backed chunkservers. If your read performance is so bad that it times out (even without concurrent writes), then you need to optimise your configuration with regard to disks, file systems, etc. Maybe use Storage Classes to place small files on faster hardware. A compressed file system might also help.
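A minimal sketch of that Storage Class approach, assuming SSD-backed chunkservers carry the label S (set via LABELS in mfschunkserver.cfg), a mount at /mnt/mfs, and a hypothetical small-file directory; check mfsscadmin(1) for the exact label-expression syntax:

```shell
# Define a class that keeps two copies, each on a server labelled S (SSD).
# Mount point, label, class name, and path are assumptions for this sketch.
mfsscadmin /mnt/mfs create -K S,S ssd_only
# Recursively pin the directory holding the small files to that class.
mfssetsclass -r ssd_only /mnt/mfs/backups/small
```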
-
Also, take a look at your network. With small files, latency is CRUCIAL.
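A quick way to sanity-check that (host names hypothetical): each small file costs several round trips between client, master, and chunkservers, so the average RTT multiplies across thousands of files.

```shell
# Check average RTT to the master and each chunkserver; the final summary
# line of ping -q shows rtt min/avg/max/mdev.
for host in mfsmaster chunkserver1 chunkserver2; do
    echo "$host:"; ping -c 20 -q "$host" | tail -n 1
done
```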
-
IIRC SeaweedFS bundles small files into larger chunks to deal with this; perhaps on MooseFS a sparsebundle (or similar) with a 64M chunk size would help? Edit: discussed here
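A rough Linux sketch of that idea (paths and sizes hypothetical): keep the small files inside one large image stored on the MooseFS mount, so the cluster only ever sees a handful of 64M chunks.

```shell
# /mnt/mfs is assumed to be the MooseFS mount point.
truncate -s 50G /mnt/mfs/backups/smallfiles.img    # sparse image, grows on demand
mkfs.ext4 -F /mnt/mfs/backups/smallfiles.img       # local filesystem inside the image
mkdir -p /mnt/smallfiles
mount -o loop /mnt/mfs/backups/smallfiles.img /mnt/smallfiles
# Small files written under /mnt/smallfiles now turn into large sequential I/O
# against a few MooseFS chunks instead of one tiny chunk per file.
```

The trade-off is that only one client can safely mount the image at a time.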
-
Hi,
I know that MooseFS is not made for small-file performance, and there are a few ongoing discussions on that topic. I only wanted to ask if there are specific workarounds that could help with the situation.
My MooseFS is rock-solid, and performance for bigger files is fantastic even on not-very-powerful home machines. But it dies a slow death with small files. Backup archives (stored by a third party through MinIO) have started to become unreadable due to timeouts.
What are you guys using to circumvent this?