Cannot use torch.compile with dynamic=True when using multiple threads #126024
Labels
module: dynamic shapes
oncall: pt2
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
When a model compiled with torch.compile(dynamic=True) is run in a multi-threaded environment, the following error is raised:
RuntimeError: Detected that you are using FX to symbolically trace a dynamo-optimized function. This is not supported at the moment.
The failing case runs the same compiled model concurrently from multiple threads.
Is this error expected, and is it possible to support this case?
Error logs
RuntimeError: Detected that you are using FX to symbolically trace a dynamo-optimized function. This is not supported at the moment.
Minified repro
No response
Versions
torch 2.4.0.dev20240417+cpu
cc @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang