
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method #3662

Open
serdarildercaglar opened this issue May 2, 2024 · 12 comments


serdarildercaglar commented May 2, 2024

/kind bug

What steps did you take and what happened:

  • When I set the workers parameter to 2 or greater while CUDA is active, I get this error:

Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

  • If CUDA is not active and the number of workers is 2 or more, the model does not return a prediction; the request times out.

I am attaching 3 files as a zip.
The issue-open.py file can be used to reproduce the bug.
Please run python issue-open.py
Please use the demo.ipynb file to send a request to the model.
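For reference, here is a hypothetical minimal sketch of this kind of setup (this is not the attached issue-open.py; the model class, tensor shapes, and payload format are illustrative assumptions):

```python
from typing import Dict

import kserve
import torch

class MyModel(kserve.Model):  # placeholder class name
    def __init__(self, name: str):
        super().__init__(name)
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.model = None

    def load(self):
        # Loading the model here initializes CUDA in the parent process.
        self.model = torch.nn.Linear(4, 2).to(self.device)
        self.ready = True

    def predict(self, payload: Dict, headers: Dict[str, str] = None) -> Dict:
        x = torch.tensor(payload["instances"], dtype=torch.float32,
                         device=self.device)
        return {"predictions": self.model(x).tolist()}

if __name__ == "__main__":
    model = MyModel("custom-model")
    model.load()
    # With workers=2 the server forks child processes, which then fail with
    # "Cannot re-initialize CUDA in forked subprocess".
    kserve.ModelServer(workers=2).start([model])
```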

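The error message itself points at Python's multiprocessing start method. As a generic illustration (plain PyTorch, outside KServe), the 'spawn' pattern it asks for looks like this:

```python
import torch
import torch.multiprocessing as mp

def worker(rank: int):
    # Each spawned process initializes its own CUDA context safely.
    x = torch.ones(2, 2, device="cuda")
    print(f"worker {rank}: sum={x.sum().item()}")

if __name__ == "__main__":
    # 'fork' copies the parent's already-initialized CUDA state, which CUDA
    # forbids; 'spawn' starts each worker as a fresh interpreter instead.
    mp.set_start_method("spawn", force=True)
    procs = [mp.Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

Whether KServe's worker launcher can be switched to 'spawn' is what this issue is asking about.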

Environment:

@oss-prow-bot oss-prow-bot bot added the kind/bug label May 2, 2024

murat-gunay commented May 2, 2024

Hello,

I am hitting the same issue that @serdarildercaglar described above. It would be very helpful for us if this could be fixed, please.

@bunyaminkeles

Hi guys,
I'm having the same problem as @serdarildercaglar. I urgently need a solution. Thanks.

@serdarildercaglar (Author)

@sivanantha321 Could you please help me with this problem? Please let me know if you can address it.

@sivanantha321 (Member)

> @sivanantha321 Could you please help me with this problem? Please let me know if you can address it.

Will look into it

@serdarildercaglar (Author)

Hi @bunyaminkeles. Were you able to fix the issue? Is there any progress on your side?
I am about to launch my project to production, but I haven't been able to fix this issue. If I cannot employ workers, I cannot use multiprocessing.


bunyaminkeles commented May 14, 2024 via email

@sivanantha321 (Member)

For now, you can try Ray Serve to take advantage of multiple workers: https://kserve.github.io/website/latest/modelserving/v1beta1/custom/custom_model/#parallel-model-inference. I recommend using KServe 0.11, as the 0.12 release seems broken with Ray Serve.
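A sketch of the pattern from that docs page (adapted; the model body is elided and MyModel is a placeholder name):

```python
from typing import Dict

import kserve
from ray import serve

# num_replicas controls how many parallel inference replicas Ray Serve runs.
@serve.deployment(name="custom-model", num_replicas=2)
class MyModel(kserve.Model):
    def __init__(self):
        self.name = "custom-model"
        super().__init__(self.name)
        self.load()

    def load(self):
        # Load weights here; each replica loads its own copy of the model.
        self.ready = True

    def predict(self, payload: Dict, headers: Dict[str, str] = None) -> Dict:
        # Run inference here.
        return {"predictions": []}

if __name__ == "__main__":
    # Passing a dict of deployment classes (rather than a list of model
    # instances) enables the Ray Serve integration described in the docs.
    kserve.ModelServer().start({"custom-model": MyModel})
```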

@serdarildercaglar (Author)

I used Ray Serve and it handled multiprocessing successfully. However, as the number of replicas increases, resource consumption increases, and the replicas run constantly whether the server is busy or not, which drives up cost. I will manage with Ray Serve until the workers problem is solved.
Is there any work planned to fix the problem with worker counts greater than 1?

Thank you very much. Best wishes.


yuzisun commented May 27, 2024

@serdarildercaglar are you using multiprocessing mainly to increase GPU utilization? Just curious about the motivation.

@serdarildercaglar (Author)

Thanks for the response @yuzisun.
Yes, I need multiple workers so the model can process requests concurrently. When I increase the number of workers using FastAPI and send requests at (or close to) the same time, the GPU processes them concurrently. With a single worker, requests are processed one by one, so response times become very long.
Since we are using Kubernetes and KServe in our project, it is vital that I can set the number of workers to 2 or more.
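(A hypothetical sketch of the FastAPI setup described above, not the project's actual code:)

```python
# Launch a FastAPI app with multiple uvicorn worker processes so two or more
# requests can be handled in parallel. "main:app" is a placeholder import path.
import uvicorn

if __name__ == "__main__":
    # With workers > 1 the app must be passed as an import string, because
    # uvicorn starts one separate OS process per worker.
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=2)
```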


yuzisun commented May 27, 2024

> Yes, I need multiple workers so the model can process requests concurrently. When I increase the number of workers using FastAPI and send requests at (or close to) the same time, the GPU processes them concurrently. With a single worker, requests are processed one by one, so response times become very long. Since we are using Kubernetes and KServe in our project, it is vital that I can set the number of workers to 2 or more.

Why not set the replicas to 2 or more? That is how Kubernetes scales. The worker count is mainly for saving expensive compute resources like GPUs by scaling up within the container, but at some point it is bounded by the container's resource limits, and you can't scale as far as you can with Kubernetes replicas.

@serdarildercaglar (Author)

  • Increasing the number of workers lets the service process incoming requests concurrently, up to the worker count, without provisioning additional compute. Suppose the model predicts on CPU and the number of workers is 3. When one request arrives, the model runs normally; when 3 requests arrive at the same time, the same pod can serve all 3 with its existing resources. If I instead create 3 replicas, all 3 replicas consume resources all the time, whether one request arrives or three.
    With my limited resources, the best solution for me is to use workers, on CPU or GPU.

  • Another issue: on CPU, when I set the number of workers to 2 or more, the response never returns and the request waits until it times out. But when I deploy a model in ONNX format, workers on CPU work fine; for a regular transformers model, workers do not work.

I may not have been able to explain it fully because of the language barrier. Thank you for trying to help.
