
DDPG in Torcs within Docker Container

Dockerfile:
FROM tensorflow/tensorflow:0.10.0-gpu

# Copy the project into the image and work from that directory
# (this matches the --workdir used in the docker run commands below).
WORKDIR /home/old_gym_torcs
ADD . /home/old_gym_torcs

RUN apt-get update
RUN apt-get install -y vim xautomation torcs
RUN apt-get install -y libjpeg-dev cmake swig python-pyglet python3-opengl libboost-all-dev \
        libsdl2-2.0.0 libsdl2-dev libglu1-mesa libglu1-mesa-dev libgles2-mesa-dev \
        freeglut3 xvfb libav-tools

RUN pip install gym
RUN pip install keras==1.1.0
ENV PATH="/usr/games:${PATH}"
CMD ["/bin/bash"]
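
After building the image (tagged ddpgfrk:tf0.10, the tag used in the run command below), a quick sanity check inside the container confirms that the pinned stack imports cleanly; a minimal sketch (check.py is a hypothetical file name):

# check.py - hypothetical sanity check for the stack pinned above
import tensorflow as tf
import keras
import gym

print("tensorflow " + tf.__version__)  # expected 0.10.0 (from the base image)
print("keras " + keras.__version__)    # expected 1.1.0 (pinned with pip)
print("gym " + gym.__version__)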



viper1 $ docker run --runtime=nvidia -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/frank/old_gym_torcs:/home/old_gym_torcs -v /var/run/docker.sock:/var/run/docker.sock -v /home/frank/gym_torcs:/home/gym_torcs -v /home/frank/gym:/home/gym -p 3101:3101 --workdir /home/old_gym_torcs -p 8888:8888 ddpgfrk:tf0.10 /bin/bash

(Not all of the mounts and port mappings above are strictly necessary.)

After setting everything up inside the container, commit it as a new image (replace id with the container's ID):
$ docker commit id ddpg:tf0.10.0-gpu

viper1 $ docker run --runtime=nvidia -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/frank/old_gym_torcs:/home/old_gym_torcs -v /var/run/docker.sock:/var/run/docker.sock -v /home/frank/gym_torcs:/home/gym_torcs -v /home/frank/gym:/home/gym -p 3101:3101 --workdir /home/old_gym_torcs -p 8888:8888 ddpg:tf0.10.0-gpu /bin/bash
..
Q:
autostart.sh: 12: autostart.sh: xte: not found
A:
xte is provided by the xautomation package:
sudo apt install xautomation

Q:
NameError: global name 'emsg' is not defined
A:
This is a Python 2 vs. Python 3 except-clause conflict: Python interprets (socket.error, emsg) as a tuple of exception classes, and the name emsg is never defined. Fix it in snakeoil3_gym.py:
-        except (socket.error, emsg):
+        except socket.error as emsg:
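
A minimal standalone illustration of the corrected pattern (a sketch, not the actual snakeoil3_gym.py code): the "as" form binds the caught exception to emsg and is valid in both Python 2.6+ and Python 3, whereas the tuple form treats the undefined name emsg as a second exception class.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
try:
    data, addr = sock.recvfrom(1024)   # nothing is sending, so this times out
except socket.error as emsg:           # works in Python 2 and Python 3
    print("recvfrom failed: " + str(emsg))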


Reward Function:
Rt = Vx cos(θ) − Vx sin(θ) − Vx |trackPos|
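
The first term rewards speed along the track axis, the second penalizes transverse speed, and the third penalizes drifting away from the track centre (trackPos is the car's normalized distance from the centre line). A minimal sketch of this reward, assuming a gym_torcs-style observation dict with speedX, angle and trackPos fields:

import numpy as np

def torcs_reward(obs):
    # Rt = Vx*cos(theta) - Vx*sin(theta) - Vx*|trackPos|
    # (some implementations also take the absolute value of the Vx*sin(theta) term)
    vx = obs['speedX']      # longitudinal speed
    theta = obs['angle']    # angle between car heading and track axis (rad)
    return vx * np.cos(theta) - vx * np.sin(theta) - vx * np.abs(obs['trackPos'])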


Ref:
https://github.com/ugo-nama-kun/gym_torcs
https://github.com/yanpanlau/DDPG-Keras-Torcs
https://yanpanlau.github.io/2016/10/11/Torcs-Keras.html
