import math
import tensorflow as tf
from tensorflow.keras.layers import MaxPool2D

x = tf.constant([[1., 1., 1., 2., 3.],
                 [1., 1., 4., 5., 6.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.]])
x = tf.reshape(x, [1, 5, 5, 1])
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
print(math.ceil(5 / 2))
tf.Tensor(
[[[[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]]], shape=(1, 3, 3, 1), dtype=float32)
3
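Under the hood, "same" padding adds just enough padding that the output has ⌈input/stride⌉ positions along each dimension, split as evenly as possible with any extra unit going to the bottom/right. Here is a minimal sketch of that arithmetic (the helper name same_pad_1d is made up for illustration; the formulas follow TF's documented SAME behaviour):

import math

def same_pad_1d(in_size, pool, stride):
    # Output length under SAME padding.
    out_size = math.ceil(in_size / stride)
    # Total padding needed so every output position gets a window anchor.
    pad_total = max((out_size - 1) * stride + pool - in_size, 0)
    pad_before = pad_total // 2        # the extra unit, if any, goes after
    pad_after = pad_total - pad_before
    return out_size, pad_before, pad_after

print(same_pad_1d(5, pool=5, stride=2))   # (3, 2, 2) -- matches the 3x3 output above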
from tensorflow.keras.layers import Conv2D

model = Conv2D(3, (3, 3), strides=(2, 2), padding="same",
               kernel_initializer=tf.constant_initializer(1.))
x = tf.constant([[1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.]])
x = tf.reshape(x, (1, 5, 5, 1))
print(model(x))
tf.Tensor(
[[[[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]

  [[ 9.  9.  9.]
   [27. 27. 27.]
   [27. 27. 27.]]

  [[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]]], shape=(1, 3, 3, 3), dtype=float32)
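Since the kernel is all ones, every output value is just the sum of the corresponding (zero-padded) 3×3 window, and the three identical filters are why all channels agree. A quick NumPy reimplementation of this case, for verification only (this is my sketch, not TF's code path):

import numpy as np

# SAME padding here: pad_total = (3 - 1) * 2 + 3 - 5 = 2, split 1 before / 1 after.
img = np.tile(np.arange(1., 6.), (5, 1))     # the 5x5 input above
padded = np.pad(img, 1)                      # zero-pad by one on every side
out = np.array([[padded[i:i + 3, j:j + 3].sum()   # all-ones kernel => window sum
                 for j in range(0, 5, 2)]
                for i in range(0, 5, 2)])
print(out)
# [[ 6. 18. 18.]
#  [ 9. 27. 27.]
#  [ 6. 18. 18.]]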
$$\text{output\_width} = \left\lfloor \frac{\text{input\_width} - 1}{s} \right\rfloor + 1 = \left\lceil \frac{\text{input\_width}}{s} \right\rceil,$$
where $s$ is the stride.
The last equality deserves a proof, as it is not entirely trivial:
Fact. For any positive integers $w, s$, we have $\left\lfloor \frac{w-1}{s} \right\rfloor + 1 = \left\lceil \frac{w}{s} \right\rceil$.
Proof. We argue case by case. If $w = ks$ for some positive integer $k$, then
$$\text{LHS} = \left\lfloor \frac{ks - 1}{s} \right\rfloor + 1 = \left\lfloor k - \frac{1}{s} \right\rfloor + 1 = (k - 1) + 1 = k = \lceil k \rceil = \text{RHS}.$$
When $w = ks + j$ for some $k \in \mathbb{N}$ and $j \in \mathbb{N} \cap (0, s)$, then, since $0 \le \frac{j-1}{s} < 1$ and $0 < \frac{j}{s} < 1$,
$$\text{LHS} = \left\lfloor k + \frac{j - 1}{s} \right\rfloor + 1 = k + 1 = \left\lceil k + \frac{j}{s} \right\rceil = \left\lceil \frac{ks + j}{s} \right\rceil = \left\lceil \frac{w}{s} \right\rceil = \text{RHS}. \qquad \blacksquare$$
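As a sanity check, the identity is also easy to verify numerically over a small range (a throwaway script, independent of the proof):

import math

# Brute-force check of the Fact for small positive integers w and s.
for s in range(1, 20):
    for w in range(1, 200):
        assert (w - 1) // s + 1 == math.ceil(w / s), (w, s)
print("identity holds for all tested (w, s)")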