- Install Docker on your local host machine.
- For GPU support on Linux, install nvidia-docker.
Docker is the easiest way to enable TensorFlow GPU support on Linux, since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).
docker run [-it] [--rm] [-p hostPort:containerPort] tensorflow/tensorflow[:tag] [command]
$ docker run -it --rm tensorflow/tensorflow python -c "import tensorflow as tf; print(tf.__version__)"
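Once nvidia-docker2 is set up (installation steps below), a GPU-enabled image can be checked the same way; this sketch assumes the latest-gpu tag and uses the TensorFlow 1.x tf.test.is_gpu_available() check:
$ docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"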
Note: nvidia-docker v1 uses the nvidia-docker alias, whereas v2 uses docker --runtime=nvidia.
Note: use CUDA 9.0 images for TensorFlow < 1.13.0.
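For example, the same GPU container launch looks roughly like this under each version (the nvidia/cuda:9.0-base image is just an illustration):
# nvidia-docker v1
nvidia-docker run --rm nvidia/cuda:9.0-base nvidia-smi
# nvidia-docker v2
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi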
nvidia-docker2 install
Prerequisites: NVIDIA driver and Docker.
If you have a custom /etc/docker/daemon.json, the nvidia-docker2 package might override it.
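For reference, the daemon.json shipped by nvidia-docker2 registers the nvidia runtime roughly as follows (back up any custom settings before installing; exact contents may differ between package versions):
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}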
On Ubuntu 14.04/16.04/18.04, Ubuntu installs docker.io by default, which is not the latest version of Docker Engine. This means you will need to pin the version of nvidia-docker2 to match it (a pinned-install sketch follows the commands below).
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker
# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
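If you need the version pinning mentioned above, one approach is to list the available builds and install the ones matching your Docker version; the version strings are left as placeholders:
# List the builds available for your distribution, then pin (versions shown are placeholders)
apt-cache madison nvidia-docker2 nvidia-container-runtime
sudo apt-get install -y nvidia-docker2=<matching-version> nvidia-container-runtime=<matching-version>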
ref: https://github.com/NVIDIA/nvidia-docker
CUDA toolkit version | Driver version | GPU architecture |
---|---|---|
6.5 | >= 340.29 | >= 2.0 (Fermi) |
7.0 | >= 346.46 | >= 2.0 (Fermi) |
7.5 | >= 352.39 | >= 2.0 (Fermi) |
8.0 | == 361.93 or >= 375.51 | == 6.0 (P100) |
8.0 | >= 367.48 | >= 2.0 (Fermi) |
9.0 | >= 384.81 | >= 3.0 (Kepler) |
9.1 | >= 387.26 | >= 3.0 (Kepler) |
9.2 | >= 396.26 | >= 3.0 (Kepler) |
10.0 | >= 384.130, < 385.00 | Tesla GPUs |
10.0 | >= 410.48 | >= 3.0 (Kepler) |
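To pick a compatible image from the table above, check the host driver version first; nvidia-smi can report it directly:
$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
# prints the installed driver version (for example, a 410.xx driver maps to CUDA 10.0 images in the table)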
CUDA images come in three flavors and are available through the NVIDIA public hub repository.
- base: starting from CUDA 9.0, contains the bare minimum (libcudart) to deploy a pre-built CUDA application. Use this image if you want to manually select which CUDA packages you want to install.
- runtime: extends the base image by adding all the shared libraries from the CUDA toolkit. Use this image if you have a pre-built application using multiple CUDA libraries.
- devel: extends the runtime image by adding the compiler toolchain, the debugging tools, the headers and the static libraries. Use this image to compile a CUDA application from sources.
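As an illustration of how the flavors fit together, a hypothetical multi-stage Dockerfile could compile with the devel image and ship the result on the smaller base image (the source file and paths here are made up):
# Hypothetical multi-stage build: compile on devel, run on base
FROM nvidia/cuda:9.0-devel AS build
WORKDIR /src
COPY vector_add.cu .
RUN nvcc -O2 -o vector_add vector_add.cu

FROM nvidia/cuda:9.0-base
COPY --from=build /src/vector_add /usr/local/bin/vector_add
CMD ["vector_add"]
Build it with docker build and run it with docker run --runtime=nvidia as in the commands above.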