
Friday, December 11, 2020

Examine Output Size in TensorFlow

When we are uncertain about the output size of a tensor after it passes through some layer, we can check it directly as follows:
import math
import tensorflow as tf
from tensorflow.keras.layers import MaxPool2D

# A 5x5 input reshaped to NHWC format: (batch, height, width, channels)
x = tf.constant([[1., 1., 1., 2., 3.],
                 [1., 1., 4., 5., 6.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.]])

x = tf.reshape(x, [1, 5, 5, 1])

# With padding="same" and stride 2, the spatial size should be ceil(5 / 2) = 3
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
print(math.ceil(5 / 2))
which yields
print(MaxPool2D((5, 5), strides=(2, 2),  padding="same")(x))
tf.Tensor(
[[[[7.]
   [9.]
   [9.]]
 
  [[7.]
   [9.]
   [9.]]
 
  [[7.]
   [9.]
   [9.]]]], shape=(1, 3, 3, 1), dtype=float32)
3
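
If only the shape is needed, an alternative is to ask the layer for it symbolically instead of feeding real data. A minimal sketch, assuming the standard Keras compute_output_shape method:

from tensorflow.keras.layers import MaxPool2D

# Ask the layer for its output shape without running any data through it.
pool = MaxPool2D((5, 5), strides=(2, 2), padding="same")
print(pool.compute_output_shape((1, 5, 5, 1)))  # expect (1, 3, 3, 1), as above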
For a layer that has trainable weights, we can fix the kernel initializer to a constant for testing:
from tensorflow.keras.layers import Conv2D

# Set every kernel weight to 1 so the output values are easy to verify by hand
model = Conv2D(3, (3, 3), strides=(2, 2), padding="same",
               kernel_initializer=tf.constant_initializer(1.))

x = tf.constant([[1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.]])

x = tf.reshape(x, (1, 5, 5, 1))
print(model(x))
which yields
x = tf.constant([[1., 2., 3., 4., 5.],...
tf.Tensor(
[[[[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]
 
  [[ 9.  9.  9.]
   [27. 27. 27.]
   [27. 27. 27.]]
 
  [[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]]], shape=(1, 3, 3, 3), dtype=float32)
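
Because every kernel weight is 1 and the default bias is 0, each output entry is just the sum of the input values under its 3x3 window, so the result above can be verified by hand. A small numpy sketch, assuming the symmetric one-pixel zero padding that "same" produces for this input size:

import numpy as np

img = np.array([[1., 2., 3., 4., 5.]] * 5)
padded = np.pad(img, 1)  # "same" padding adds one ring of zeros in this case

# Sum each 3x3 window, sliding with stride 2; this should reproduce
# every channel of the Conv2D output above.
expected = np.array([[padded[i:i + 3, j:j + 3].sum()
                      for j in range(0, 5, 2)]
                     for i in range(0, 5, 2)])
print(expected)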
In fact, it can be proved for both MaxPooling2D and Conv2D that if the stride is $s$ and padding="same", then
$$\text{output\_width} = \left\lfloor \frac{\text{input\_width} - 1}{s} \right\rfloor + 1 = \left\lceil \frac{\text{input\_width}}{s} \right\rceil.$$
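
As a quick empirical check of this formula (a sketch using the same Keras layers as above, with a fixed 3x3 window), we can compare the actual output width with ceil(input_width / s) for a few sizes and strides:

import math
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPool2D

for w in range(3, 9):        # input widths to try
    for s in (1, 2, 3):      # strides to try
        x = tf.random.uniform((1, w, w, 1))
        for layer in (MaxPool2D((3, 3), strides=(s, s), padding="same"),
                      Conv2D(1, (3, 3), strides=(s, s), padding="same")):
            assert layer(x).shape[2] == math.ceil(w / s)
print("output_width == ceil(input_width / s) in every case")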
The last equality deserves a proof, as it is not entirely trivial:

Fact. For any positive integers $w, s$, we have
$$\left\lfloor \frac{w - 1}{s} \right\rfloor + 1 = \left\lceil \frac{w}{s} \right\rceil.$$
Proof. We argue case by case. If $w = ks$ for some positive $k \in \mathbb{N}$, then
$$\text{LHS} = \left\lfloor \frac{ks - 1}{s} \right\rfloor + 1 = (k - 1) + 1 = k = \lceil k \rceil = \text{RHS}.$$
When $w = ks + j$ for some $k \in \mathbb{N}$ and $j \in \mathbb{N} \cap (0, s)$, then
$$\text{LHS} = \left\lfloor k + \frac{j - 1}{s} \right\rfloor + 1 = k + 1 = k + \left\lceil \frac{j}{s} \right\rceil = \left\lceil \frac{ks + j}{s} \right\rceil = \left\lceil \frac{w}{s} \right\rceil = \text{RHS}. \qquad \blacksquare$$
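
The identity itself can also be brute-force checked over a range of values, as a sanity test of the proof:

import math

# Check floor((w - 1) / s) + 1 == ceil(w / s) for positive integers w, s.
for w in range(1, 200):
    for s in range(1, 200):
        assert (w - 1) // s + 1 == math.ceil(w / s)
print("identity verified for all tested (w, s)")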
