
TensorFlow Docker requirements
  1. Install Docker on your local host machine.
  2. For GPU support on Linux, install nvidia-docker.



Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine 
(the NVIDIA® CUDA® Toolkit does not need to be installed).
docker run [-it] [--rm] [-p hostPort:containerPort] tensorflow/tensorflow[:tag] [command]


$ docker run -it --rm tensorflow/tensorflow python -c "import tensorflow as tf; print(tf.__version__)"


Note: nvidia-docker v1 uses the nvidia-docker alias, whereas v2 uses docker --runtime=nvidia.
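The v1/v2 difference can be sketched as a dry-run helper that only prints the equivalent command line for each version (gpu_run_cmd is a hypothetical name, not part of nvidia-docker; nothing is executed):

```shell
# Build the equivalent GPU container command for nvidia-docker v1 vs v2.
# Dry run: the command is printed, not executed.
gpu_run_cmd() {
  ver="$1"; shift
  if [ "$ver" = "1" ]; then
    echo "nvidia-docker run --rm $*"          # v1: nvidia-docker alias
  else
    echo "docker run --runtime=nvidia --rm $*" # v2: --runtime=nvidia flag
  fi
}
gpu_run_cmd 2 nvidia/cuda:9.0-base nvidia-smi
```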


CUDA 9.0 for TensorFlow < 1.13.0

nvidia-docker2 install

Prerequisites: the NVIDIA driver and Docker.
If you have a custom /etc/docker/daemon.json, the nvidia-docker2 package might override it.

Ubuntu 14.04/16.04/18.04

Ubuntu installs docker.io by default, which is not the latest version of Docker Engine. This means you will need to pin the version of nvidia-docker.
# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
Ref: https://github.com/NVIDIA/nvidia-docker



CUDA toolkit version    Driver version            GPU architecture
6.5                     >= 340.29                 >= 2.0 (Fermi)
7.0                     >= 346.46                 >= 2.0 (Fermi)
7.5                     >= 352.39                 >= 2.0 (Fermi)
8.0                     == 361.93 or >= 375.51    == 6.0 (P100)
8.0                     >= 367.48                 >= 2.0 (Fermi)
9.0                     >= 384.81                 >= 3.0 (Kepler)
9.1                     >= 387.26                 >= 3.0 (Kepler)
9.2                     >= 396.26                 >= 3.0 (Kepler)
10.0                    >= 384.130, < 385.00      Tesla GPUs
10.0                    >= 410.48                 >= 3.0 (Kepler)
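The table above is a minimum-driver check, which can be sketched with a small shell function (driver_ok is a hypothetical helper; it relies on GNU sort's -V version ordering):

```shell
# Does the installed driver version satisfy the minimum required by a
# CUDA toolkit release? True when required <= installed (version order).
driver_ok() {
  required="$1"; installed="$2"
  [ "$(printf '%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]
}
# Example from the table: CUDA 9.0 needs driver >= 384.81.
driver_ok 384.81 410.48 && echo "CUDA 9.0 supported"
```

On a live system the installed version could come from nvidia-smi's query output; the rows with an upper bound or an exact match (e.g. CUDA 10.0 on Tesla, or 8.0 with == 361.93) need extra handling this sketch omits.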


CUDA images come in three flavors and are available through the NVIDIA public hub repository.

  • base: starting from CUDA 9.0, contains the bare minimum (libcudart) to deploy a pre-built CUDA application.
    Use this image if you want to manually select which CUDA packages you want to install.
  • runtime: extends the base image by adding all the shared libraries from the CUDA toolkit.
    Use this image if you have a pre-built application using multiple CUDA libraries.
  • devel: extends the runtime image by adding the compiler toolchain, the debugging tools, the headers and the static libraries.
    Use this image to compile a CUDA application from sources.
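The three flavors map onto use cases, which can be summarized as a small selector (pick_flavor and the 9.0 tags are illustrative, not an official tool; check the NVIDIA hub repository for current tags):

```shell
# Map a use case onto one of the three CUDA image flavors described above.
pick_flavor() {
  case "$1" in
    deploy)   echo "nvidia/cuda:9.0-base" ;;    # bare minimum (libcudart)
    prebuilt) echo "nvidia/cuda:9.0-runtime" ;; # all CUDA shared libraries
    compile)  echo "nvidia/cuda:9.0-devel" ;;   # compiler, headers, static libs
    *)        echo "unknown use case: $1" >&2; return 1 ;;
  esac
}
pick_flavor compile
```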

