格仔 Blog
Monday, February 15, 2021
YOLOv3 material
https://github.com/YunYang1994/tensorflow-yolov3
and his blog
https://yunyang1994.gitee.io/2018/12/28/YOLOv3/
code analysis:
Saturday, December 12, 2020
Remove Sensitive Environment Variable / File That is too Big from Remote Repo
In case you mistakenly pushed a large file or sensitive environment data and git rm --cached does not help (the file is untracked locally but still present in the remote repo's history):
(cd to the top level of the repo first), then run
git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch <path to your file>" \
  --prune-empty --tag-name-filter cat -- --all
Add the -r flag to git rm if you want to remove a whole directory. The same cleanup can also be done with a tree filter, git filter-branch --tree-filter 'rm -rf file_path' HEAD, which is slower since it checks out every commit.
REF: https://docs.github.com/.../removing-sensitive-data-from...
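After the rewrite, the cleaned history still has to be force-pushed so the remote no longer carries the file; these follow-up commands are from the same GitHub doc:

git push origin --force --all
git push origin --force --tags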
This error may subsequently follow:
fatal: refusing to merge unrelated histories
Then run

git pull origin the-remote-branch --allow-unrelated-histories

and resolve conflicts.
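The usual resolution flow after that pull, sketched here with a placeholder file name:

git status            # lists the conflicted files
# edit each conflicted file, keeping the wanted side
git add <conflicted-file>
git commit            # concludes the merge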
Friday, December 11, 2020
Examine Output Size in TensorFlow
When we are uncertain about the output size of a tensor processed by some layer, we can run a small experiment like the following:
import math
import tensorflow as tf
from tensorflow.keras.layers import MaxPool2D

x = tf.constant([[1., 1., 1., 2., 3.],
                 [1., 1., 4., 5., 6.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.]])
x = tf.reshape(x, [1, 5, 5, 1])
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
print(math.ceil(5 / 2))

which yields

tf.Tensor(
[[[[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]]], shape=(1, 3, 3, 1), dtype=float32)
3

For a layer that has trainable weights, we may try the following for testing:

from tensorflow.keras.layers import Conv2D

model = Conv2D(3, (3, 3), strides=(2, 2), padding="same",
               kernel_initializer=tf.constant_initializer(1.))
x = tf.constant([[1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.]])
x = tf.reshape(x, (1, 5, 5, 1))
print(model(x))

which yields

tf.Tensor(
[[[[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]

  [[ 9.  9.  9.]
   [27. 27. 27.]
   [27. 27. 27.]]

  [[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]]], shape=(1, 3, 3, 3), dtype=float32)
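If only the shape matters, Keras layers also expose compute_output_shape, which infers the output shape without building an input tensor (a quick check, not part of the original post):

from tensorflow.keras.layers import MaxPool2D

# Shape-only inference: no input data needed.
shape = MaxPool2D((5, 5), strides=(2, 2), padding="same").compute_output_shape((1, 5, 5, 1))
print(shape)  # expect (1, 3, 3, 1)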
In fact, it can be proved for both MaxPooling2D and Conv2D that if stride = s and padding = "same", then

\text{output\_width} = \left\lfloor\frac{\text{input\_width}-1}{s}\right\rfloor + 1 = \left\lceil\frac{\text{input\_width}}{s}\right\rceil
The last equality deserves a proof, as it is not entirely trivial:
Fact. For any positive integers w, s, we have
\left\lfloor \frac{w-1}{s}\right\rfloor + 1 = \left\lceil \frac{w}{s}\right\rceil.
Proof. We proceed case by case. If w=ks for some positive k\in\N, then
\text{LHS} = \left\lfloor k - \frac{1}{s}\right\rfloor +1 = (k-1)+1=k = \lceil k\rceil = \text{RHS}.
When w=ks+j for some k\in\N and j\in\N\cap(0,s), then
\text{LHS} = \left\lfloor k+\frac{j-1}{s}\right\rfloor + 1 = k+1 = \left\lceil k+\frac{j}{s}\right\rceil = \left\lceil \frac{ks+j}{s}\right\rceil = \left\lceil\frac{w}{s}\right\rceil=\text{RHS}.\qed
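The identity is also easy to spot-check numerically; a minimal sketch in Python:

import math

# Verify floor((w - 1) / s) + 1 == ceil(w / s) on a small grid.
for w in range(1, 101):
    for s in range(1, 21):
        assert (w - 1) // s + 1 == math.ceil(w / s), (w, s)
print("identity holds on the tested range")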
Sunday, December 6, 2020
conda virtual environment commands
conda create --name tensorflow python=3.7        # create an env named "tensorflow"
conda env remove --name tensorflow               # delete that env
conda env export --name ENVNAME > envname.yml    # snapshot an env to a yml file
conda env create --file envname.yml              # recreate the env from the yml
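A few companion commands for the same workflow (assuming conda >= 4.4, where conda activate replaced source activate):

conda activate tensorflow    # switch into the env
conda deactivate             # leave it
conda env list               # show all envs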