
Introduce edm::async(), and use it in CUDA and Alpaka modules #44901

Open

makortel wants to merge 3 commits into master
Conversation

@makortel (Contributor) commented May 3, 2024

PR description:

This PR adds the edm::async() facility described in #29188. It also replaces the use of cudaStreamAddCallback() with edm::async() accompanied by cudaEventSynchronize(), and makes the CUDA/Alpaka events be created with the cudaEventBlockingSync flag.
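
To illustrate the pattern, here is a minimal sketch (not the PR's actual code; the exact edm::async() interface added here may differ, and the free function shown takes only a functor):

  #include <exception>
  #include <stdexcept>
  #include <cuda_runtime.h>
  #include "FWCore/Concurrency/interface/async.h"
  #include "FWCore/Concurrency/interface/WaitingTaskWithArenaHolder.h"

  // Hypothetical sketch: instead of registering a host callback on the CUDA
  // stream with cudaStreamAddCallback(), record an event and hand the
  // blocking wait to a pooled waiting thread via edm::async().
  void enqueueCallback(edm::WaitingTaskWithArenaHolder holder,
                       cudaEvent_t event, cudaStream_t stream) {
    // The event completes once all work queued on the stream so far has finished.
    cudaEventRecord(event, stream);
    edm::async([holder = std::move(holder), event]() mutable {
      // Because the event was created with cudaEventBlockingSync, this wait
      // blocks the pool thread (sleeping rather than spinning) until completion.
      cudaError_t status = cudaEventSynchronize(event);
      holder.doneWaiting(status == cudaSuccess
                             ? std::exception_ptr{}
                             : std::make_exception_ptr(
                                   std::runtime_error(cudaGetErrorString(status))));
    });
  }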

Measurements that I showed at CHEP 2023 (https://indico.jlab.org/event/459/contributions/11810/) suggested a possible 1 % throughput improvement over cudaStreamAddCallback() at the HLT (of that time; many things have changed since). Earlier studies with a prototype in cms-patatrack/pixeltrack-standalone#321 showed that, for reasons not fully understood, the thread pool with cudaEventSynchronize() used less CPU than cudaStreamAddCallback().

During the CHEP study I also tested polling with cudaEventQuery(), but the "waiting thread pool" approach was more performant.
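
For contrast, a sketch of what the polling variant looks like (illustrative only; the study's actual test code is not shown here):

  // Instead of blocking in cudaEventSynchronize(), repeatedly query the
  // event; the thread keeps waking up until the GPU work behind `event`
  // completes. Requires <thread> for std::this_thread.
  while (cudaEventQuery(event) == cudaErrorNotReady) {
    std::this_thread::yield();
  }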

Another benefit over cudaStreamAddCallback() is that the function "is slated for eventual deprecation and removal", and its "replacement", cudaLaunchHostFunc(), does not call the callback function in case of an error in the associated CUDA stream.

Resolves #29188
Resolves cms-sw/framework-team#916

PR validation:

Unit tests in FWCore/Concurrency, HeterogeneousCore/Alpaka{Core,Test}, HeterogeneousCore/CUDA{Utilities,Core,Test} succeed.

The deployment in the CUDA and Alpaka modules still needs performance testing.

If this PR is a backport please specify the original PR and why you need to backport that PR. If this PR will be backported please specify to which release cycle the backport is meant for:

Possibly to be backported to 14_0_X.

@cmsbuild (Contributor) commented May 3, 2024

cms-bot internal usage

@cmsbuild (Contributor) commented May 3, 2024

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-44901/40168

  • This PR adds an extra 28KB to repository

  • There are other open Pull requests which might conflict with changes you have proposed:

@cmsbuild (Contributor) commented May 3, 2024

A new Pull Request was created by @makortel for master.

It involves the following packages:

  • FWCore/Concurrency (core)
  • HeterogeneousCore/AlpakaCore (heterogeneous)
  • HeterogeneousCore/CUDACore (heterogeneous)
  • HeterogeneousCore/CUDAUtilities (heterogeneous)

@cmsbuild, @makortel, @Dr15Jones, @smuzaffar, @fwyzard can you please review it and eventually sign? Thanks.
@missirol, @wddgit, @rovere this is something you requested to watch as well.
@antoniovilela, @rappoccio, @sextonkennedy you are the release managers for this.

cms-bot commands are listed here

@makortel (Contributor, Author) commented May 3, 2024

enable gpu

@makortel (Contributor, Author) commented May 3, 2024

@cmsbuild, please test

@cmsbuild (Contributor) commented May 3, 2024

+1

Summary: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-d3cb6f/39230/summary.html
COMMIT: b758c10
CMSSW: CMSSW_14_1_X_2024-05-03-1100/el8_amd64_gcc12
Additional Tests: GPU
User test area: For local testing, you can use /cvmfs/cms-ci.cern.ch/week1/cms-sw/cmssw/44901/39230/install.sh to create a dev area with all the needed externals and cmssw changes.

Comparison Summary

Summary:

  • You potentially removed 4 lines from the logs
  • Reco comparison results: 6 differences found in the comparisons
  • DQMHistoTests: Total files compared: 48
  • DQMHistoTests: Total histograms compared: 3331548
  • DQMHistoTests: Total failures: 6
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 3331522
  • DQMHistoTests: Total skipped: 20
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.0 KiB (47 files compared)
  • Checked 202 log files, 165 edm output root files, 48 DQM output files
  • TriggerResults: no differences found

GPU Comparison Summary

Summary:

  • No significant changes to the logs found
  • Reco comparison results: 0 differences found in the comparisons
  • DQMHistoTests: Total files compared: 3
  • DQMHistoTests: Total histograms compared: 39740
  • DQMHistoTests: Total failures: 18
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 39722
  • DQMHistoTests: Total skipped: 0
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.0 KiB (2 files compared)
  • Checked 8 log files, 10 edm output root files, 3 DQM output files
  • TriggerResults: no differences found

#include "FWCore/Concurrency/interface/async.h"

namespace edm::impl {
WaitingThread::WaitingThread() { thread_ = std::thread(&WaitingThread::threadLoop, this); }
A Contributor commented on the code above:
Could you set a meaningful thread name, so it's easier to identify this thread pool in a GDB trace?

For example:

Suggested change:

  - WaitingThread::WaitingThread() { thread_ = std::thread(&WaitingThread::threadLoop, this); }
  + WaitingThread::WaitingThread() {
  +   thread_ = std::thread(&WaitingThread::threadLoop, this);
  +   pthread_setname_np(thread_.native_handle(), "edm async pool");
  + }

Or even something more elaborate with a static constexpr name, and a check that its length is 15 or less.
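
For instance, a minimal sketch of that idea (illustrative only; the constant name is made up, and this is not code from the PR):

  #include <pthread.h>
  #include <thread>

  namespace edm::impl {
    // pthread thread names are limited to 15 characters plus the
    // terminating NUL, so sizeof() must not exceed 16 bytes.
    inline constexpr char kWaitingThreadName[] = "edm async pool";
    static_assert(sizeof(kWaitingThreadName) <= 16,
                  "pthread thread names must be at most 15 characters");

    WaitingThread::WaitingThread() {
      thread_ = std::thread(&WaitingThread::threadLoop, this);
      pthread_setname_np(thread_.native_handle(), kWaitingThreadName);
    }
  }  // namespace edm::impl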

@makortel (Contributor, Author) replied May 6, 2024:

Thanks for the suggestion, I added the name. While verifying the behavior by running the unit test in gdb, I discovered many threads (~8) with the edm async pool name. Further investigation showed that the test ended up creating two WaitingThreads (which I can believe), but the use of global_control to set the allowed parallelism to 1 led the call to tbb::task_arena::enqueue() in edm::WaitingTaskWithArenaHolder::doneWaiting() to create a new TBB-controlled thread that inherited the edm async pool name (and all subsequent TBB threads created by that thread inherited the name too).

When I set the allowed parallelism to 2 (which is now done in the test), I saw 2 threads with the edm async pool name.

While nearly all production jobs are configured to use multiple threads, maybe it is time to look more into giving names to the TBB threads as well. I'll open a separate issue on that: #44912
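
For reference, the kind of test-side control being described (a sketch; the unit test's actual code may differ):

  #include <oneapi/tbb/global_control.h>

  // Limit TBB to 2 worker threads for the test. With a limit of 1, the
  // task_arena::enqueue() call described above had to create a fresh
  // TBB-controlled thread, which inherited the "edm async pool" name.
  oneapi::tbb::global_control limit(
      oneapi::tbb::global_control::max_allowed_parallelism, 2);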

@cmsbuild (Contributor) commented May 6, 2024

+code-checks

Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-44901/40191

  • This PR adds an extra 20KB to repository

  • There are other open Pull requests which might conflict with changes you have proposed:

@cmsbuild (Contributor) commented May 6, 2024

Pull request #44901 was updated. @smuzaffar, @Dr15Jones, @fwyzard, @cmsbuild, @makortel can you please check and sign again.

@makortel (Contributor, Author) commented May 6, 2024

@cmsbuild, please test

@cmsbuild (Contributor) commented May 7, 2024

+1

Summary: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-d3cb6f/39266/summary.html
COMMIT: f8322a0
CMSSW: CMSSW_14_1_X_2024-05-06-1100/el8_amd64_gcc12
Additional Tests: GPU
User test area: For local testing, you can use /cvmfs/cms-ci.cern.ch/week0/cms-sw/cmssw/44901/39266/install.sh to create a dev area with all the needed externals and cmssw changes.

Comparison Summary

Summary:

  • You potentially added 3 lines to the logs
  • Reco comparison results: 4 differences found in the comparisons
  • DQMHistoTests: Total files compared: 48
  • DQMHistoTests: Total histograms compared: 3332476
  • DQMHistoTests: Total failures: 3
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 3332453
  • DQMHistoTests: Total skipped: 20
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.0 KiB (47 files compared)
  • Checked 202 log files, 165 edm output root files, 48 DQM output files
  • TriggerResults: no differences found

GPU Comparison Summary

Summary:

  • No significant changes to the logs found
  • Reco comparison results: 0 differences found in the comparisons
  • DQMHistoTests: Total files compared: 3
  • DQMHistoTests: Total histograms compared: 39740
  • DQMHistoTests: Total failures: 19
  • DQMHistoTests: Total nulls: 0
  • DQMHistoTests: Total successes: 39721
  • DQMHistoTests: Total skipped: 0
  • DQMHistoTests: Total Missing objects: 0
  • DQMHistoSizes: Histogram memory added: 0.0 KiB (2 files compared)
  • Checked 8 log files, 10 edm output root files, 3 DQM output files
  • TriggerResults: no differences found

@fwyzard (Contributor) commented May 7, 2024

I've compared some different approaches using the current HLT menu and recent data.

alpaka::HostOnlyTask

Using CMSSW_14_0_6_MULTIARCHS as the baseline, without any changes, I get:

Running 4 times over 10300 events with 8 jobs, each with 32 threads, 24 streams and 1 GPUs
   692.8 ±   0.2 ev/s (10000 events, 98.2% overlap)
   691.2 ±   0.2 ev/s (10000 events, 98.3% overlap)
   692.4 ±   0.2 ev/s (10000 events, 98.2% overlap)
   690.7 ±   0.2 ev/s (10000 events, 98.9% overlap)
 --------------------
   691.8 ±   1.0 ev/s

alpaka native host tasks

Using CMSSW_14_0_6_MULTIARCHS with a modified HeterogeneousCore/AlpakaCore/src/alpaka/EDMetadata.cc:

  void EDMetadata::enqueueCallback(edm::WaitingTaskWithArenaHolder holder) {
    alpaka::enqueue(*queue_, [holder = std::move(holder)]() {
      // The functor is required to be const, but the original waitingTaskHolder_
      // needs to be notified...
      std::exception_ptr eptr;
      const_cast<edm::WaitingTaskWithArenaHolder&>(holder).doneWaiting(eptr);
    });
  }

I get:

Running 4 times over 10300 events with 8 jobs, each with 32 threads, 24 streams and 1 GPUs
   671.0 ±   0.2 ev/s (10000 events, 98.1% overlap)
   668.1 ±   0.2 ev/s (10000 events, 98.3% overlap)
   673.5 ±   0.2 ev/s (10000 events, 99.2% overlap)
   671.9 ±   0.2 ev/s (10000 events, 97.3% overlap)
 --------------------
   671.1 ±   2.3 ev/s

edm::async

Using CMSSW_14_0_6_MULTIARCHS with the changes from this PR (as of f8322a0):

Running 4 times over 10300 events with 8 jobs, each with 32 threads, 24 streams and 1 GPUs
   703.8 ±   0.2 ev/s (10000 events, 97.2% overlap)
   703.5 ±   0.2 ev/s (10000 events, 98.8% overlap)
   700.3 ±   0.2 ev/s (10000 events, 98.4% overlap)
   701.1 ±   0.2 ev/s (10000 events, 99.0% overlap)
 --------------------
   702.2 ±   1.8 ev/s

Bottom line: this PR speeds up the HLT by about 1.5% ± 0.3%.
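
As a quick cross-check of that figure against the averages quoted above:

   702.2 / 691.8 = 1.0150                        -> +1.50% throughput
   sqrt((1.8/702.2)^2 + (1.0/691.8)^2) ≈ 0.0029  -> ±0.3% relative uncertainty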

@fwyzard (Contributor) commented May 7, 2024

+heterogeneous

@makortel (Contributor, Author) commented May 9, 2024

Notes from review discussion with @Dr15Jones

  • Have edm::async() take the WaitingThreadPool as an argument
  • Move the ownership of the WaitingThreadPool object from a function-static (i.e. global) to a Service (that would be part of the default cmsRun services)
  • Create a base class for the Service in order to decouple things. The base class would be placed in FWCore/Concurrency, and the Service implementation in FWCore/Services (see the sketch after this list)
  • The Service base class would have a member function with a similar interface to the edm::async() in this PR
  • Additional benefits/opportunities of the Service approach
    • We can destroy the WaitingThreadPool at endJob, or in the Service destructor, instead of "after main()"
    • We can add ActivityRegistry signals around the caller-supplied functor (beyond this PR)
    • We can add caller-supplied context information to the exception message from the functor
    • When the Service receives an early-termination signal, it stops issuing async calls to the threads and throws an exception to the caller
    • We can limit the async launching to specific parts of the data processing (e.g. only in modules' acquire() functions)
  • Add a test that, when an exception is thrown during data processing while some WaitingThreads are running functions, the framework keeps the data processing (i.e. events) open until the WaitingThreads' functions return. This is necessary to guarantee safe destruction of the WaitingThreadPool around endJob
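
A minimal sketch of the base-class idea discussed above (the name AsyncService and the exact signature are placeholders, not what was eventually adopted):

  // Would live in FWCore/Concurrency, so framework code can depend on the
  // abstract interface without pulling in FWCore/Services.
  #include <functional>
  #include <string>

  namespace edm {
    class WaitingTaskWithArenaHolder;

    class AsyncService {
    public:
      virtual ~AsyncService() = default;

      // Same shape as the free-function edm::async() in this PR: run func on
      // a waiting thread, and notify holder (with any exception) when done.
      // The context string could be included in exception messages.
      virtual void runAsync(WaitingTaskWithArenaHolder holder,
                            std::function<void()> func,
                            std::string context) = 0;
    };
  }  // namespace edm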

Comments beyond this PR

  • We should add a test where an exception (from the external worker) is passed to WaitingTaskWithArenaHolder::doneWaiting(), to see what context ends up in the message
