Consider the following bug:

1. trio.to_thread.run_sync() is called, creating a worker thread. The worker thread is left in the global THREAD_CACHE.
2. The Python process forks for some reason (perhaps via the multiprocessing module).
3. The child process now calls trio.to_thread.run_sync(). The global THREAD_CACHE still contains a reference to the worker thread, so the child process thinks it has an idle worker thread and tries to dispatch a task to it. However, the worker thread doesn't actually exist in the child process, so trio.to_thread.run_sync() hangs forever.
Because THREAD_CACHE is interpreter-global, this can happen even if the two Trio run loops are completely separate. For example, in a test suite, one test might call trio.to_thread.run_sync(), and then later a completely separate test might use multiprocessing to spawn a process that calls trio.to_thread.run_sync().
I think it should be fairly simple to fix this by using os.register_at_fork() to ensure THREAD_CACHE is cleared in the child whenever the interpreter forks.
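A sketch of that fix, using a hypothetical module-level cache as a stand-in for Trio's real THREAD_CACHE (the actual structure and clearing logic in trio._threads may differ):

```python
import os

# Hypothetical stand-in for Trio's cache of idle worker threads.
THREAD_CACHE = set()

def _clear_thread_cache():
    # In the child after fork(), none of the parent's worker threads
    # exist, so every cached entry is a dangling reference. Clearing
    # the cache makes the child spawn fresh workers on demand.
    THREAD_CACHE.clear()

# after_in_child callbacks run only in the child process, immediately
# after a fork(); the parent's cache is left untouched.
os.register_at_fork(after_in_child=_clear_thread_cache)
```

With this hook in place, the first trio.to_thread.run_sync() call in a forked child would see an empty cache and create a new worker thread instead of hanging.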
Note this can happen even if the fork isn't inside trio.run(). For example:
    async def foo():
        # ... do some async stuff ...
        await trio.to_thread.run_sync(...)
        # ... do some async stuff ...

    trio.run(foo)
    # we are now outside trio.run(), but the worker thread is still in THREAD_CACHE
    os.fork()
    # this is fine in the parent, but fails in the child, due to the bug
    trio.run(foo)
My understanding is that it's not really practical to support fork() inside trio.run(), but it generally should be fine to have fork() and trio.run() in the same program at different times -- except for this bug :)