I followed the installation guide and installed nvidia-container-toolkit successfully. But no matter what Docker image I use, the container always seems to use the host machine's CUDA. How can I use the Docker image's own CUDA? Thank you!
The CUDA version displayed by nvidia-smi is the CUDA DRIVER version, not the runtime version. The driver libraries are injected from the host so that they can communicate with the kernel-mode driver installed there.
Running a device query sample should show the RUNTIME CUDA version in addition to the driver CUDA version.
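For example, the following minimal Python sketch (a rough stand-in for the official deviceQuery sample, not a replacement for it) prints both versions via ctypes. It assumes `libcuda.so.1` (injected from the host by the NVIDIA container toolkit) and `libcudart.so` (shipped in the image's CUDA toolkit) are on the loader path:

```python
import ctypes

def decode(v: int) -> str:
    # CUDA encodes versions as 1000 * major + 10 * minor, e.g. 10010 -> 10.1.
    return f"{v // 1000}.{(v % 1000) // 10}"

driver_ver = ctypes.c_int()
runtime_ver = ctypes.c_int()

# Driver API: provided by the host's kernel-mode driver stack.
libcuda = ctypes.CDLL("libcuda.so.1")
libcuda.cuDriverGetVersion(ctypes.byref(driver_ver))

# Runtime API: provided by the CUDA toolkit installed inside the image.
libcudart = ctypes.CDLL("libcudart.so")
libcudart.cudaRuntimeGetVersion(ctypes.byref(runtime_ver))

print("driver CUDA version (what nvidia-smi reports):", decode(driver_ver.value))
print("runtime CUDA version (what the image ships):", decode(runtime_ver.value))
```

If the runtime version matches the image's CUDA (e.g. 10.1) while nvidia-smi shows the host driver's CUDA version, the setup is working as intended.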
I want to run CUDA 10.1 inside a Docker container, which requires different drivers, but nvidia-smi still displays the host machine's drivers.
Plus, I really think there is a CUDA version issue, because I need CUDA 10.1 to run TensorFlow 2.3.0 (https://www.tensorflow.org/install/source#tested_build_configurations). When I run `python3 -c "import tensorflow as tf; print('GPU(s) available:' if len(tf.config.list_physical_devices('GPU')) > 0 else 'No GPU available.')"` inside the Docker container, I get:

`Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.1/lib64:`

And I saw on Stack Overflow that this error is linked to using mismatched CUDA versions.
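For reference, that libcublas error usually means the image itself is missing the CUDA 10.1 libraries TensorFlow 2.3.0 links against (the toolkit comes from the image, not the host). A minimal sketch to check this from inside the container, assuming TensorFlow 2.3+ where `tf.sysconfig.get_build_info()` is available:

```python
import ctypes

import tensorflow as tf

# CUDA/cuDNN versions this TensorFlow wheel was compiled against.
print("TF build info:", tf.sysconfig.get_build_info())

# Try to load the cuBLAS library TensorFlow 2.3 needs; a failure here
# reproduces the "Could not load dynamic library 'libcublas.so.10'" error.
try:
    ctypes.CDLL("libcublas.so.10")
    print("libcublas.so.10 found")
except OSError as e:
    print("libcublas.so.10 missing:", e)
```

If the library is missing, the usual fix is to start from an image that already bundles CUDA 10.1, such as `tensorflow/tensorflow:2.3.0-gpu` or an `nvidia/cuda:10.1-cudnn7-runtime` base image, rather than relying on anything from the host.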