
StarGAN

Ref:
https://github.com/yunjey/stargan
https://arxiv.org/pdf/1711.09020.pdf

Compared with other GAN models, StarGAN's distinguishing feature, shown in figure (b), is that translations between any of the domains can be performed by a single model, instead of training a separate model for every one-to-one domain pair as in figure (a).
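How one model covers all domains: StarGAN's generator takes the target-domain label as an extra input, spatially tiled and concatenated to the image channels. A minimal sketch of that conditioning step (shapes and the 5-attribute label are illustrative, not taken from the repo's code):

```python
import numpy as np

def condition_on_domain(image, label):
    """Tile a one-hot domain label over the spatial dimensions and
    concatenate it to the image channels, forming the generator input.
    image: (C, H, W) array, label: (c_dim,) one-hot vector."""
    c_dim = label.shape[0]
    _, h, w = image.shape
    # Broadcast each label entry into a constant (H, W) feature map.
    label_map = np.tile(label.reshape(c_dim, 1, 1), (1, h, w))
    return np.concatenate([image, label_map], axis=0)

img = np.zeros((3, 128, 128), dtype=np.float32)      # RGB input image
target = np.array([0, 1, 0, 0, 0], dtype=np.float32)  # e.g. Golden_Hair
x = condition_on_domain(img, target)
print(x.shape)  # (8, 128, 128): 3 image channels + 5 label channels
```

Changing only the `target` vector steers the same generator toward a different domain, which is why one set of weights suffices for all translations.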


Figure source: https://arxiv.org/pdf/1711.09020.pdf








The images below were generated with StarGAN. Each set contains six images:
Input, + Black_Hair, + Golden_Hair, + Brown_Hair, + Gender_Change, + Aged (5 domains)

Training this model on an FX705GE (CPU: Intel i7-8750H, 32 GB RAM) took about 39 hours for 200,000 steps in total, at roughly 7 sec / 10 steps,
versus 90 sec / 10 steps on a GL753VE (CPU: Intel i7-7700HQ, 24 GB RAM) <== worker node only, NVIDIA CUDA not enabled
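The ~39-hour figure is consistent with the reported step rate; a quick back-of-envelope check:

```python
# 200,000 steps at ~7 seconds per 10 steps.
steps = 200_000
sec_per_10_steps = 7
total_hours = steps / 10 * sec_per_10_steps / 3600
print(f"~{total_hours:.1f} hours")  # ~38.9 hours
```

At the GL753VE's 90 sec / 10 steps, the same run would have taken roughly 13× longer, which is why CUDA acceleration matters here.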

Judging from the images below, the results vary: quality depends on the head-to-frame ratio, the training dataset, photo quality, background, and similar conditions.
Thanks to my (ex-)colleagues and Chih-Chung for contributing their portraits!! The CelebA dataset was used for training and part of the testing.



Input + Black_Hair + Golden_Hair + Brown_Hair + Gender_Change + Aged






Input + Black_Hair + Golden_Hair + Brown_Hair + Gender_Change + Aged





