
tf.squeeze()

Ref: tf.squeeze()

Function

tf.squeeze(input, squeeze_dims=None, name=None)
Removes dimensions of size 1 from the shape of a tensor.
Given a tensor input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don't want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying squeeze_dims.
For example:
  # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
  shape(squeeze(t)) ==> [2, 3]

Or, to remove specific size 1 dimensions:

  # 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
  shape(squeeze(t, [2, 4])) ==> [1, 2, 3, 1]
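
As a concrete check, here is a minimal runnable sketch of the same example. It assumes TensorFlow is installed; note that in TF 2.x the second argument is named axis, while older releases called it squeeze_dims.

  import tensorflow as tf

  # Build a tensor of shape [1, 2, 1, 3, 1, 1], matching the example above.
  t = tf.zeros([1, 2, 1, 3, 1, 1])

  # Remove every size-1 dimension: the result has shape [2, 3].
  print(tf.squeeze(t).shape)          # (2, 3)

  # Remove only dimensions 2 and 4: the result has shape [1, 2, 3, 1].
  print(tf.squeeze(t, [2, 4]).shape)  # (1, 2, 3, 1)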
