
Change number of dedicated threads #44

Open · wants to merge 1 commit into base: main
Conversation

rob-p (Contributor) commented Feb 13, 2024

Hi @Guilucand — When processing some data with GGCAT, @jamshed and I noticed that the number of threads in use can sometimes substantially exceed the thread count requested by the user. This increased CPU usage typically does not last the entire duration of the run, but it does persist for an extended period during the initial phases of the algorithm. Specifically, when requesting 16 threads, we were seeing peak CPU usage of ~2800%, or about (7/4)× the requested thread count.

There are several places in the code where the thread count is passed around between components, but this was the first one that stood out. Here, the numbers of compute and read threads are set by dividing up the total number of threads, and later those threads are spawned. However, the current allocation allots substantially more threads in total than the initial threads_count. This change addresses that discrepancy by ensuring that compute_threads_count + read_threads_count doesn't exceed threads_count.

Of course, I'm not sure if this is the partition of the work you want, or if you want to use a fixed partition to begin with, but hopefully this will be useful in figuring out how to address the discrepancy between the requested thread count and the actual peak usage.
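For illustration, the invariant the patch enforces can be sketched as below. The function name `split_threads` and the 1/4-read / 3/4-compute split are hypothetical, not GGCAT's actual partition; the point is only that the two pool sizes are derived by subtraction from the same budget, so their sum cannot exceed the requested total (except at `threads_count = 1`, where each pool still needs one thread).

```rust
// Hypothetical sketch of a capped thread partition; not GGCAT's real code.
fn split_threads(threads_count: usize) -> (usize, usize) {
    // Reserve roughly a quarter of the budget for reading/decompression...
    let read_threads_count = (threads_count / 4).max(1);
    // ...and give the remainder to compute, so the sum stays within budget.
    let compute_threads_count = threads_count.saturating_sub(read_threads_count).max(1);
    (compute_threads_count, read_threads_count)
}

fn main() {
    for threads_count in 1..=64 {
        let (compute, read) = split_threads(threads_count);
        // Invariant: never over-allocate (modulo the 1-thread edge case).
        assert!(compute + read <= threads_count.max(2));
        println!("requested={threads_count}: compute={compute}, read={read}");
    }
}
```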

The total number of threads can substantially exceed what is requested by the user.  This addresses that discrepancy by ensuring that `compute_threads_count` + `read_threads_count` doesn't exceed `threads_count`.
rob-p (Contributor, Author) commented Feb 21, 2024

Any thoughts on this @Guilucand? It's possible we missed something and the thread utilization issue we observed is coming from elsewhere in the code. This was just the most obvious candidate among what we found.

Guilucand (Collaborator) commented

Hi @rob-p,
the fact that GGCAT in some cases uses more threads is partially intended behavior.
This is because in some phases it splits the work between computing and reading (&decompressing), delegating each category to a different thread pool. Here's the catch: on mid-range systems with slow HDDs, the reading threads will spend most of their time waiting for the disk, so it is useful to have more computing threads available to achieve higher thread utilization. On the other hand, on high-end systems the disk is much faster, so the reading threads spend a higher percentage of their time on decompression, resulting in higher overall thread usage.

At the moment I find it quite difficult to simply lower the number of computing threads, as it could slow down execution on some kinds of machines, but I could add an additional waiting mechanism to ensure that the number of threads doing CPU-intensive work does not exceed the threshold.
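One way to realize such a waiting mechanism is a counting semaphore gating only the CPU-intensive sections, so extra threads may exist but at most N of them burn CPU at once. This is a minimal std-only sketch (a hand-rolled `Semaphore` via `Mutex` + `Condvar`); it is not GGCAT's implementation, and in practice a ready-made primitive (e.g. `tokio::sync::Semaphore`) would likely be used instead.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Minimal counting semaphore built from std primitives.
struct Semaphore {
    permits: Mutex<usize>,
    cvar: Condvar,
}

impl Semaphore {
    fn new(permits: usize) -> Self {
        Semaphore { permits: Mutex::new(permits), cvar: Condvar::new() }
    }
    /// Block until a permit is available, then take it.
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cvar.wait(p).unwrap();
        }
        *p -= 1;
    }
    /// Return a permit and wake one waiter.
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}

fn main() {
    let max_cpu = 4; // threshold of concurrently CPU-intensive threads
    let sem = Arc::new(Semaphore::new(max_cpu));
    let active = Arc::new(Mutex::new(0usize));
    let peak = Arc::new(Mutex::new(0usize));

    // Spawn more threads than the threshold; the semaphore caps how many
    // are inside the "CPU-intensive" section at the same time.
    let handles: Vec<_> = (0..16)
        .map(|_| {
            let (sem, active, peak) = (sem.clone(), active.clone(), peak.clone());
            thread::spawn(move || {
                sem.acquire(); // gate the CPU-intensive section
                {
                    let mut a = active.lock().unwrap();
                    *a += 1;
                    let mut pk = peak.lock().unwrap();
                    *pk = (*pk).max(*a);
                }
                thread::sleep(Duration::from_millis(10)); // simulated work
                *active.lock().unwrap() -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Despite 16 threads existing, at most `max_cpu` worked concurrently.
    assert!(*peak.lock().unwrap() <= max_cpu);
    println!("peak concurrent workers = {}", *peak.lock().unwrap());
}
```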

For now, I used two methods to hard-limit the number of threads for the benchmarks in the article: either using all available cores, or employing some external means of limiting the cores (e.g., Slurm).
