
Possibility to increase thread count & change DOKAN_OPTION_ALLOW_IPC_BATCHING? (FUSE) #1202

Open · 4 of 5 tasks
algj opened this issue Feb 11, 2024 · 10 comments

Comments

algj commented Feb 11, 2024

Environment

  • Windows version: Win10
  • Processor architecture: x64, 2 threads
  • Dokany version: 2.1.0
  • Library type (Dokany/FUSE): FUSE

Check List

  • I checked my issue doesn't exist yet
  • My issue is valid with the mirror default sample and not specific to my user-mode driver implementation
  • I can always reproduce the issue with the provided description below
  • I have updated Dokany to the latest version and rebooted my computer after
  • I tested one of the latest snapshots from the AppVeyor CI

Description

I have been experimenting with the FUSE wrapper, but I'm stuck with 2 threads. Is there a way to increase the number of threads, or to spawn more threads, without recompiling Dokany? On Linux, FUSE is a lot faster when dealing with multiple high-latency reads because it doesn't block IO, which is why I think the minimum thread count should be increased, or at least made configurable.

CHANGELOG: DOKAN_OPTIONS.ThreadCount was replaced by DOKAN_OPTIONS.SingleThread since the library now uses a thread pool that allocates workers depending on workload and the available resources.

I assume this was possible before, but now it depends on the CPU thread count:

dokany/dokan/dokan.c, lines 791 to 798 at 415ac36:
DWORD mainPullThreadCount = 0;
if (GetProcessAffinityMask(GetCurrentProcess(), &processAffinityMask,
                           &systemAffinityMask)) {
  while (processAffinityMask) {
    mainPullThreadCount += 1;
    processAffinityMask >>= 1;
  }
} else {
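To illustrate why I end up with exactly 2, here is a small standalone sketch (not part of Dokany) that repeats the same affinity-mask counting; assuming a contiguous affinity mask, on my 2-logical-processor machine it prints 2:

```c
// Standalone illustration of the mainPullThreadCount derivation above.
// With a contiguous process affinity mask (e.g. 0b11 on a 2-thread CPU),
// the loop runs once per bit position up to the highest set bit, so the
// result equals the number of logical processors available to the process.
#include <windows.h>
#include <stdio.h>

int main(void) {
  DWORD_PTR processAffinityMask = 0, systemAffinityMask = 0;
  DWORD mainPullThreadCount = 0;

  if (GetProcessAffinityMask(GetCurrentProcess(), &processAffinityMask,
                             &systemAffinityMask)) {
    while (processAffinityMask) {
      mainPullThreadCount += 1;
      processAffinityMask >>= 1;
    }
  }
  printf("main pull thread count: %lu\n", mainPullThreadCount);
  return 0;
}
```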

I'd suggest adding a way to change the thread count in Dokany & FUSE.

Liryna (Member) commented Feb 11, 2024

Hi @algj ,
Have you tried DOKAN_OPTION_ALLOW_IPC_BATCHING? There is no FUSE option for it (yet), but you could force it and see if it helps your case.
Increasing the main pull thread count is a limited solution that does not scale well as IO activity increases.

algj (Author) commented Feb 11, 2024

Oh! Thank you @Liryna! This is exactly what I was looking for! Unfortunately, from my understanding, it cannot be changed without recompiling Dokany. There should be a way to change this option somehow (and possibly the thread count too, while we're at it).

I hope someone opens a PR to make changing this option easy... 🙂

algj changed the title from "Possibility to increase thread count? (FUSE)" to "Possibility to increase thread count & change DOKAN_OPTION_ALLOW_IPC_BATCHING? (FUSE)" on Feb 12, 2024
LTRData (Contributor) commented Feb 12, 2024

No, the DOKAN_OPTION_* flags can be set in the Options field of the DOKAN_OPTIONS structure when mounting a new file system. It does not require recompiling the driver or library.
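For example, something along these lines when mounting through the native Dokany API (a minimal sketch; the mount point and the DOKAN_OPERATIONS callbacks are placeholders you would fill in yourself):

```c
// Minimal sketch: enabling DOKAN_OPTION_ALLOW_IPC_BATCHING via
// DOKAN_OPTIONS.Options when mounting with the native Dokany API.
#include <dokan/dokan.h>

int mount_with_ipc_batching(PDOKAN_OPERATIONS operations) {
  DOKAN_OPTIONS options;
  ZeroMemory(&options, sizeof(DOKAN_OPTIONS));
  options.Version = DOKAN_VERSION;
  options.MountPoint = L"M:\\";   // placeholder mount point
  options.SingleThread = FALSE;   // replaces the old ThreadCount (see CHANGELOG above)
  options.Options = DOKAN_OPTION_ALLOW_IPC_BATCHING;  // flags can be OR'ed together

  // Dokany 2.x requires DokanInit()/DokanShutdown() around library usage.
  DokanInit();
  int status = DokanMain(&options, operations);
  DokanShutdown();
  return status;
}
```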

Liryna (Member) commented Feb 12, 2024

The problem is that those DOKAN_OPTION_* flags are not directly available for FUSE; they need to be wrapped for the FUSE interface. See here for how DOKAN_OPTION_MOUNT_MANAGER is exposed.
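As a rough idea of what that wrapping could look like (hypothetical sketch using the standard libfuse option parser; the option name allow_ipc_batching and the helper names below are illustrative, not the actual dokan_fuse code):

```c
// Hypothetical sketch: exposing a Dokany flag through the FUSE option parser.
#include <stddef.h>
#include <fuse_opt.h>
#include <dokan/dokan.h>

struct wrapper_config {
  int allow_ipc_batching;
};

// Matched by fuse_opt_parse(); "-o allow_ipc_batching" sets the field to 1.
static const struct fuse_opt wrapper_opts[] = {
    {"allow_ipc_batching", offsetof(struct wrapper_config, allow_ipc_batching), 1},
    FUSE_OPT_END};

// Called by the wrapper when it fills in DOKAN_OPTIONS before mounting.
static void apply_wrapper_config(const struct wrapper_config *cfg,
                                 PDOKAN_OPTIONS dokan_options) {
  if (cfg->allow_ipc_batching)
    dokan_options->Options |= DOKAN_OPTION_ALLOW_IPC_BATCHING;
}
```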

algj (Author) commented Feb 12, 2024

I'm a little confused... I don't mind the source code having both Dokany & FUSE code, but how does one do that?

LTRData (Contributor) commented Feb 12, 2024

> I'm a little confused... I don't mind the source code having both Dokany & FUSE code, but how does one do that?

The problem here is apparently that if you create a native Dokany implementation, this is very easy to set: you just set a flag in the options structure when mounting a new file system.

If, on the other hand, you implement a file system using the FUSE emulation layer on top of Dokany, this option is hidden inside the emulation layer and not exposed to your implementation.

Liryna closed this as completed in 5c25310 on Feb 13, 2024
algj (Author) commented Feb 13, 2024

Oh my!!! Thank you very much @Liryna for the PR!! ❤️ I thought I'd have to do the PR myself 😅

algj (Author) commented Feb 18, 2024

I have tried out IPC_BATCHING. There is a clear performance difference in my tests (it is consistently slower by a few ms), but for some reason it still sticks to 2 threads...? I ran the same tests on Linux, and it definitely handled concurrent reads/writes much better there. I'm not sure if I'm misunderstanding something; I thought it was supposed to spawn new threads when there are a few concurrent reads instead of blocking them all.

algj (Author) commented Feb 18, 2024

Also, I think ThreadCount should be added back. I guess the SingleThread option could then be removed, but that would break some things.

  • ThreadCount = 0: the library decides how many threads to use
  • ThreadCount = 1: single-threaded
  • ThreadCount >= 2: multi-threaded; the user defines how many threads the application should use

Liryna reopened this on Feb 18, 2024
Liryna (Member) commented Feb 18, 2024

Looks like the IPC_BATCHING option was not correctly set in the library in aef92bc :(
(The CI tests seem to fail; I will need to look at that: https://ci.appveyor.com/project/Maxhy/dokany/builds/49214019)

Regarding ThreadCount, I don't think that's a good idea because, as I said before, it does not scale correctly. Right now the best thread count performance-wise is allocated, and for slow IO we need to use the thread pool.
