What would you like to be added:
https://github.com/juicedata/juicefs/blob/main/pkg/sync/cluster.go#L137-L166
Make the batch limit in this loop (currently hardcoded to 100) user-configurable:

```go
objs = append(objs, obj)
if len(objs) > 100 {
    break LOOP
}
```
Why is this needed:
We are using `juicefs sync --workers` to copy 16 very large files from Tencent COS to a local KS3-backed JuiceFS. We have 8 workers, but since 16 is less than 100, all 16 files end up in a single batch and are handled by one worker instead of being spread across all of them.

Our current temporary workaround is to pad each big file with 99 empty files, which does work, at least. However, it would be nicer if `juicefs sync --workers` could distribute 16 files directly.
We should have a limit on the total size of fetched keys.