2017-Oct-12 ⬩ Ashwin Nanjappa ⬩ error, nvidia docker ⬩ Archive
NVIDIA Docker makes it easy to use Docker containers across machines with differing NVIDIA graphics drivers. After installing it, I ran a sample NVIDIA Docker command and got this error:
$ nvidia-docker run --rm nvidia/cuda nvidia-smi
docker: Error response from daemon: Post http://%2Frun%2Fdocker%2Fplugins%2Fnvidia-docker.sock/VolumeDriver.Mount: dial unix /run/docker/plugins/nvidia-docker.sock: connect: no such file or directory
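The error indicates that Docker could not reach the plugin socket of nvidia-docker-plugin. As a first sanity check (using the socket path from the error message and the plugin name that appears in the log below), I would look for the socket file and the plugin process:

# Does the plugin socket exist where Docker expects it?
$ ls -l /run/docker/plugins/nvidia-docker.sock

# Is the nvidia-docker-plugin process running at all?
$ pgrep -a nvidia-docker-plugin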
Investigating the log files showed this:
$ cat /tmp/nvidia-docker.log
nvidia-docker-plugin | 2017/10/11 10:10:07 Loading NVIDIA unified memory
nvidia-docker-plugin | 2017/10/11 10:10:07 Loading NVIDIA management library
nvidia-docker-plugin | 2017/10/11 10:10:07 Discovering GPU devices
nvidia-docker-plugin | 2017/10/11 10:10:13 Provisioning volumes at /var/lib/nvidia-docker/volumes
nvidia-docker-plugin | 2017/10/11 10:10:13 Serving plugin API at /run/docker/plugins
nvidia-docker-plugin | 2017/10/11 10:10:13 Serving remote API at localhost:3476
nvidia-docker-plugin | 2017/10/11 10:10:13 Error: listen tcp 127.0.0.1:3476: bind: address already in use
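Since the log complained that the address was already in use, the natural next step is to see which process, if any, is listening on that port. One way to check, assuming lsof and ss are available:

# Show the process holding TCP port 3476, if any
$ sudo lsof -i :3476

# Alternative: list listening TCP sockets with owning processes and filter for the port
$ sudo ss -tlnp | grep 3476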
Port 3476 turned out to be owned by no process at all. So what was the problem?
I gave up and restarted Docker, and everything worked fine after that (haha!):
$ sudo service docker restart
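To verify the fix, the sample command from the beginning can simply be run again; after the restart it worked fine and printed the expected nvidia-smi output instead of the plugin error:

$ nvidia-docker run --rm nvidia/cuda nvidia-smi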
Tried with: NVIDIA Docker 1.x and Docker 1.11