Replies: 2 comments 4 replies
-
@aaafei123 Do those chunkservers have classes assigned? Do you use storage classes, or just goals, when defining replication rules? Where are the clients located: AWS, Ali, or both?
3 replies
-
You wrote that your MooseFS instance is version 3.0.94. Between this version and the current one there are a LOT of improvements and bug fixes. Why do you keep such an old version? Is it possible to upgrade your instance to 3.0.117? I'm 99% sure your problem with mark for removal will solve itself, and even if not, we will be much better equipped to help you :)
1 reply
-
Hi all,
We have the following setup: 5 chunk servers, as below:
- mfschunk server01:
/mnt/disk01 1TB 85% used
/mnt/disk02 1TB 85% used
- mfschunk server02:
/mnt/disk01 1TB 85% used
/mnt/disk02 1TB 85% used
- mfschunk server03:
/mnt/disk01 4TB 15% used
/mnt/disk02 4TB 15% used
- mfschunk server04:
/mnt/disk01 4TB 15% used
/mnt/disk02 4TB 15% used
- mfschunk server05:
/mnt/disk01 4TB 15% used
/mnt/disk02 4TB 15% used
Servers Distribution:
1) AWS Cloud: mfschunk server01-02
2) Ali Cloud: mfschunk server03-05
Now we want to move all data from mfschunk server01-02 to mfschunk server03-05, but we ran into an issue:
If we mark one disk for removal on a chunk server, e.g. server01's */mnt/disk02, then disk usage on mfschunk server02 rises above 98% or even 100%. Read/write operations become very slow, and all MFS clients can barely read and write files properly.
BTW: the MooseFS version is v3.0.94.
What should we do?
Thanks a lot.
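For reference, this is roughly how a disk is marked for removal in MooseFS: a `*` prefix on the path in the chunkserver's `mfshdd.cfg`, followed by a reload. A minimal sketch, assuming the default config paths (`/etc/mfs/...`) and the disk layout above; your paths may differ:

```shell
# On mfschunkserver01, edit /etc/mfs/mfshdd.cfg and prefix the disk
# path with '*' to mark it for removal (chunks are replicated away,
# not deleted):
#
#   /mnt/disk01
#   */mnt/disk02
#
# Then tell the chunkserver to re-read its config:
mfschunkserver reload
```

Marking one disk at a time (rather than both disks on a server at once) limits how much data the remaining disks must absorb while replication catches up, which matters when the other AWS server is already at 85% capacity.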