[Bug]: RuntimeError: Couldn't install torch. [Python 3.10.12] #15749
Comments
Try activating a venv and then try running it.
I have already done the usual: activated the venv by running the command, deleted the venv directory and activated it again. Didn't help. (Side note, not directly related to this build: I have tried the lshqqytiger build as well, and it failed to install the onnx directml library. So I am blocked either way.)
python3.10 -m venv venv does not activate a virtual environment, it only creates it. So are you sure you used the right command to activate it?
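For reference, the create and activate steps are separate on Linux/WSL. A minimal sketch (using python3 as a stand-in for whichever interpreter you have):

```shell
# Create the venv -- this only writes the venv/ directory, nothing is activated yet
python3 -m venv venv
# Activate it for the current shell session; the prompt should then show "(venv)"
source venv/bin/activate
# Sanity check: the activation script should exist inside the venv
test -f venv/bin/activate && echo "venv is in place"
```

Activation only affects the current shell, so it has to be repeated in every new terminal before running webui.sh.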
Umm, the guide didn't mention anything about "activating" it.
Well, there you go. I used that command and still got the same problem.
Same error, it doesn't work for me either. It can't find version 2.1.2, and the other command doesn't work either.
What happens if you activate the venv environment and then run
What is there to restart? It doesn't even download. The SD web UI can't even be installed.
Try installing torch yourself first, then run the webui script. Use either of these commands to install torch:
I use Arch Linux. In which folder should I install torch with this command?
There's no folder to pick. Just activate the venv that the webui script created and then run the command. Also, my comments were actually targeted at @arkni8, who is running on WSL with an AMD GPU, so I'm not sure how much this applies to your case.
@hsm207 Oh okay, let me spin up Ubuntu again, because it was frustrating me all the way and nothing was working xD. I will report back in a bit.
And after you activate the venv, but before you run the pip install command, run
So I was able to run these commands within the venv.
So now Torch is unable to use the GPU. Using --skip-torch-cuda-test skipped the check, but while downloading safetensors it shows this error. So, did it ignore ROCm, and is it just going to use the CPU?
Bonus question: what do I do to scrap my current attempt and redo the whole build without having to re-download all the bigger resources? Or do I just run
Are you sure your GPU supports ROCm? After you install torch, you should try running some sample torch programs directly. You can see examples here: https://pytorch.org/docs/stable/notes/hip.html
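A minimal smoke test along those lines (a sketch, not from the linked page): on ROCm builds of PyTorch, the AMD GPU is still exposed through the torch.cuda API, so the same availability check works for both CUDA and ROCm.

```python
# Minimal PyTorch GPU smoke test. On ROCm wheels, torch.cuda.is_available()
# reports the AMD GPU, and torch.version.hip is set instead of torch.version.cuda.
try:
    import torch
    gpu_ok = torch.cuda.is_available()
    print("torch", torch.__version__, "| GPU available:", gpu_ok)
    if gpu_ok:
        # Name of the first visible device (the AMD card on a working ROCm setup)
        print("device:", torch.cuda.get_device_name(0))
except ImportError:
    # torch is not installed in this environment at all
    gpu_ok = False
    print("torch is not installed in this environment")
```

If this prints "GPU available: False" inside the venv, the webui's --skip-torch-cuda-test fallback to CPU is expected.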
I don't know.
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cpu
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/rocm5.6
Was I supposed to run only one of the two commands, btw? Because on running the second command, nothing happened, just a bunch of "Requirement already satisfied", and it didn't download anything new. But okay, I will see if I can run some sample torch programs.
Yes, choose only one: either you want pytorch with CPU-only support or pytorch with ROCm support. (The "Requirement already satisfied" output likely means pip saw torch==2.1.2 already met by the CPU build, so uninstall torch and torchvision first to force the ROCm wheel to download.)
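One quick way to confirm which build actually ended up in the venv is the local version tag on the installed wheel: PyTorch wheels carry a suffix like "+cpu" or "+rocm5.6". A sketch:

```shell
# Print the installed torch version; the suffix tells you which wheel you have,
# e.g. "2.1.2+cpu" vs "2.1.2+rocm5.6". Falls back to a message if torch is absent.
python3 -c "import torch; print(torch.__version__)" 2>/dev/null || echo "torch not installed"
```

Run this inside the activated venv; if it still shows "+cpu" after the second pip command, the ROCm wheel was never actually installed.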
Okay, I uninstalled torch+cpu and reinstalled torch+rocm. That did not work either. After a lot of fiddling, it seems like WSL is just not the way to go if someone is trying ROCm: ROCm apparently still doesn't support WSL. Well, it's unclear at the moment, and googling gives a lot of unclear results and a lot of people just chattering. I will keep looking, but I will close the issue soon if nothing comes up. :) SIDE EDIT: Running SD on CPU is fun. A whopping 8-9 minute wait for a 512x512 image with 20 iterations. xD
But they already support running on Windows directly: https://rocm.docs.amd.com/projects/install-on-windows/en/latest/how-to/install.html
Yeah, I know. I figured I would try WSL first... I don't know why. Probably because managing Python would be easier on Linux than on Windows. Anyway, I have it running now with lshqqytiger's DirectML build (his build supposedly has some gotchas, though I have not found one?). I will make a post on the Discussion pages of both builds, since my GPU is literally the lowest-tier 4 GB Polaris GPU on the market. Maybe that will give them hope :) lol
Either way, for anyone reading for a conclusion: at the moment I was unsuccessful in making SD webUI work on WSL2 with ROCm. I could not give it more time to try other routes like DirectML in WSL2, which is supported, so there is still potential there for anyone wondering.
Does not work. SD is not even installed. Looking in indexes: https://download.pytorch.org/whl/rocm5.6
Then it could be that pytorch is not supported on your platform. I'd check their Start Locally page to make sure your system meets their requirements.
My system is fine and has been working. It says there that it can't download version 2.1.2 because it simply doesn't exist. I have Arch Linux, I have been using SD for a long time, and everything worked.
@LibertyGM Mate, maybe start another issue? Why are you talking on a closed issue page?
What happened?
On running the following:
./webui.sh --precision full --no-half --upcast-sampling -lowvram
I get this error. I am on Python 3.10.12. (I realize it's not 3.10.6, but is that an issue? I can't seem to figure out how to downgrade to that subversion.)
Anyway, any help is appreciated. I am not sure what I am doing wrong.
Steps to reproduce the problem
./webui.sh --precision full --no-half --upcast-sampling -lowvram
What should have happened?
It should have installed successfully, but it did not.
What browsers do you use to access the UI ?
Mozilla Firefox
Sysinfo
Couldn't run --dump-sysinfo either.
Running on WSL2, AMD CPU, AMD GPU, following the AMD guide.