Saturday, December 12, 2020
Remove Sensitive Environment Variable / File That is too Big from Remote Repo
```bash
git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch <path to your file>" \
  --prune-empty --tag-name-filter cat -- --all
```
Add the -r flag to the inner git rm command (i.e., git rm -r --cached --ignore-unmatch) if you want to remove a whole directory.
REF:

```bash
git pull origin the-remote-branch --allow-unrelated-histories
```
Friday, December 11, 2020
Examine Output Size in Tensorflow
```python
import math
import tensorflow as tf
from tensorflow.keras.layers import MaxPool2D

x = tf.constant([[1., 1., 1., 2., 3.],
                 [1., 1., 4., 5., 6.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.]])
x = tf.reshape(x, [1, 5, 5, 1])
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
print(math.ceil(5 / 2))
```
```
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
tf.Tensor(
[[[[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]]], shape=(1, 3, 3, 1), dtype=float32)
```
while print(math.ceil(5 / 2)) gives:

```
3
```
```python
from tensorflow.keras.layers import Conv2D

model = Conv2D(3, (3, 3), strides=(2, 2), padding="same",
               kernel_initializer=tf.constant_initializer(1.))
x = tf.constant([[1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.]])
x = tf.reshape(x, (1, 5, 5, 1))
print(model(x))
```
```
x = tf.constant([[1., 2., 3., 4., 5.],...
tf.Tensor(
[[[[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]

  [[ 9.  9.  9.]
   [27. 27. 27.]
   [27. 27. 27.]]

  [[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]]], shape=(1, 3, 3, 3), dtype=float32)
```
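As a side note (my own addition), both output shapes agree with the standard output-size formulas

n_{\text{out}} = \left\lceil \frac{n_{\text{in}}}{s} \right\rceil \quad (\text{padding="same"}), \qquad n_{\text{out}} = \left\lceil \frac{n_{\text{in}} - f + 1}{s} \right\rceil \quad (\text{padding="valid"}),

so with n_{\text{in}} = 5 and stride s = 2 both layers output size 3, which is what the math.ceil(5 / 2) printout above verifies.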
Sunday, December 6, 2020
conda virtual environment commands
```bash
conda create --name tensorflow python=3.7
conda env remove --name tensorflow

conda env export --name ENVNAME > envname.yml
conda env create --file envname.yml
```
Wednesday, October 28, 2020
Record model.compile options
```python
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(128, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
```
```python
model.compile(optimizer=tf.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```
```python
import tensorflow as tf
print(tf.__version__)

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('loss') < 0.4:
            # original message said "Reached 60% accuracy", but the check is on loss
            print("\nLoss is below 0.4 so cancelling training!")
            self.model.stop_training = True

callbacks = myCallback()

mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images / 255.0
test_images = test_images / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
```
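A side note of mine (not from the course): the sparse_ prefix only changes the expected label format; sparse_categorical_crossentropy takes integer class indices, while categorical_crossentropy takes one-hot vectors. A minimal sketch:

```python
import tensorflow as tf

y_pred = tf.constant([[0.1, 0.2, 0.7]])          # softmax output of a model
y_true_int = tf.constant([2])                    # integer label, for sparse_*
y_true_onehot = tf.one_hot(y_true_int, depth=3)  # [[0., 0., 1.]], for categorical_*

# Both print the same value, -log(0.7) ~ 0.357
print(tf.keras.losses.sparse_categorical_crossentropy(y_true_int, y_pred).numpy())
print(tf.keras.losses.categorical_crossentropy(y_true_onehot, y_pred).numpy())
```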
Saturday, October 10, 2020
ResNet
```python
def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component of main path
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
               name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X  # the return was cut off in the original snippet
```
```python
def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides=(s, s), name=conv_name_base + '2a',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component of main path
    X = Conv2D(F2, (f, f), strides=(1, 1), padding="same", name=conv_name_base + '2b',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path
    X = Conv2D(F3, (1, 1), strides=(1, 1), padding="valid", name=conv_name_base + '2c',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #####
    X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), padding="valid", name=conv_name_base + '1',
                        kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)

    return X
```
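(Side remark, not from the assignment text: the identity block applies when X_shortcut already has the same shape as the main-path output, so the two can be added directly; the convolutional block applies when the spatial size or channel count changes, which is why its shortcut path carries its own Conv2D and BatchNormalization to resize X_shortcut before the addition.)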
```python
def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    # Stage 3
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block="a", s=2)
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block="b")
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block="c")
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block="d")

    # Stage 4
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block="a", s=2)
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="b")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="c")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="d")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="e")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="f")

    # Stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="c")

    # AVGPOOL
    X = AveragePooling2D(pool_size=(2, 2), name="avg_pool")(X)

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model
```
```python
model = ResNet50(input_shape=(64, 64, 3), classes=6)  # added: the original snippet assumes the model is already built
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

model.fit(X_train, Y_train, epochs=2, batch_size=32)

preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
```
Friday, October 9, 2020
Code Assignment
```python
# imports added for completeness (assumed from the Coursera notebook context)
import numpy as np
from matplotlib.pyplot import imshow
from keras.models import Model
from keras.layers import Input, ZeroPadding2D, Conv2D, BatchNormalization, \
    Activation, MaxPooling2D, Flatten, Dense
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input

def HappyModel(input_shape):
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)
    X = Conv2D(18, (7, 7), strides=(1, 1), name="conv0")(X)
    X = BatchNormalization(axis=3, name="bn0")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D((2, 2), name="max_pool")(X)
    X = Flatten()(X)
    X = Dense(1, activation="sigmoid", name="fc")(X)
    # note: the keyword is "inputs", not "input" as in the original
    model = Model(inputs=X_input, outputs=X, name="happy_model")
    return model

happyModel = HappyModel(X_train.shape[1:])
happyModel.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
happyModel.fit(x=X_train, y=Y_train, epochs=10, batch_size=20)
preds = happyModel.evaluate(x=X_test, y=Y_test)

img_path = 'images/smile.jpg'
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
```
Tuesday, September 29, 2020
Derive the Formula of \displaystyle \frac{\partial \mathcal L}{\partial W^{[\ell]}}
I accidentally found that by the formulas in the previous post, we can already derive the following (the statement below is reconstructed from the last line of the proof, as it was lost in the original):

dW^{[\ell]} = \frac{1}{m}\left( \Phi^{[\ell]}{}'(U^{[\ell]}) * \left[\prod_{i=\ell+1}^{L-1} \Phi^{[i]}{}'(U^{[i]}) * W^{[i]T}\right]\cdot dY^{[L-1]}\right) \cdot Y^{[\ell-1]T}.

Proof. By repeated use of the formula dY^{[\ell]} = [W^{[\ell+1]T}dY^{[\ell+1]}] * \Phi^{[\ell+1]}{}'(U^{[\ell+1]}) we have

\begin{align*} dW^{[\ell]}& = \frac{1}{m} dU^{[\ell]} Y^{[\ell-1]T}\\ &=\frac{1}{m}\left(\left[dY^{[\ell]}\right] * \Phi^{[\ell]}{}'(U^{[\ell]})\right) Y^{[\ell-1]T}\\ &=\frac{1}{m}\left( \Phi^{[\ell]}{}'(U^{[\ell]})* \left[\prod_{i=\ell+1}^{L-1} \Phi^{[i]}{}'(U^{[i]}) * W^{[i]T}\right]\cdot dY^{[L-1]}\right) \cdot Y^{[\ell-1]T} \end{align*}

And recall that dY^{[L]} =\displaystyle \frac{\partial \mathcal L}{\partial Y^{[L]}}. \qed
Sunday, September 27, 2020
Formulas Revisited
Saturday, September 26, 2020
Intuitive Derivation of Cross Entropy as a "Loss" Function
In defining a "loss" function for classification problems, given p_i=\mathbb P\{\text{$i$ occurs}\}, i=1,2,\dots,n, from empirical data, we measure the accuracy of estimated data (from the output layer of our neural network) [q_1,q_2,\dots,q_n] by the cross-entropy L=\sum_{i=1}^n p_i\ln q_i (the usual loss is its negation, which is to be minimized). Recently I revisited this topic, and understood that this comes very naturally from solving a maximum-likelihood estimation problem!
Let's take an example: consider flipping a coin that lands heads with probability p and tails with probability 1-p. Then the probability of getting 2 heads out of 6 flips is
L(p) = \binom{6}{2} p^2 (1-p)^4 = 15 p^2(1-p)^4.
Maximum-likelihood estimation asks the following question:
The phenomenon of getting 2 heads is most likely to happen under what value of p?
In other words, the above question is the same as: at what value of p does the probability L(p) attain its maximum? By simply solving L'(p)=0 we get the answer p=\frac{1}{3}.
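Explicitly (a computation the post leaves out):

L'(p) = 15\left(2p(1-p)^4 - 4p^2(1-p)^3\right) = 30\,p(1-p)^3(1-3p),

which vanishes on (0,1) only at p=\frac{1}{3}, exactly the observed frequency 2/6 of heads.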
But in more complex problems we cannot express the probability of a phenomenon by an explicit formula in another probability. Instead of computing the probability p directly, we try to estimate it so that our observation (the phenomenon in the empirical data) is most likely to occur, and such an estimated value p is considered a good estimation.
Now the derivation of cross-entropy will be very intuitive: Assume that
\text{mutually disjoint } E_i=\{\text{$i$ occurs}\},\quad \mathbb P(E_i) = p_i, \quad i=1,2,3,\dots,n.
And assume further that the trials are independent and identically distributed. Consider events A_1,\dots,A_N with A_j = \bigcup_{i=1}^n E_i for each j (for example, flipping a coin N times); then p_i = N_i/N, where N_i is the number of times i occurs among A_1,\dots,A_N.
Now we get another estimate q_i of the same event E_i from whatever experiment we can imagine. How good is [q_1,\dots,q_n] as an estimation of the past empirical data [p_1,\dots,p_n]? The standard distance in \mathbb R^n is certainly not a good choice, since a discrepancy of \epsilon between q_i and p_i can mean a huge difference between q_{i'} and p_{i'}. Rather, [q_1,\dots,q_n] is considered a good estimation if the observed phenomenon
\{\text{1 appears $N_1$ times}\}, \quad \{\text{2 appears $N_2$ times}\},\quad \dots ,\quad \{\text{n appears $N_n$ times} \}
is very likely to happen under the estimates [q_1,\dots,q_n], i.e., when
L = \prod_{i=1}^n q_i^{N_i}\iff \frac{\ln L}{N}= \sum_{i=1}^n \frac{N_i}{N}\ln q_i = \sum_{i=1}^n p_i\ln q_i.
is large, and we have derived the cross-entropy at this point.
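A quick numerical sanity check (my own addition): among all distributions [q_1, q_2], the quantity \sum_i p_i \ln q_i is maximized precisely at q = p, which is what makes cross-entropy a sensible measure of how well q estimates p.

```python
import numpy as np

p = np.array([1 / 3, 2 / 3])       # "empirical" distribution
q1 = np.linspace(0.01, 0.99, 99)   # candidate values of q_1; q_2 = 1 - q_1
scores = p[0] * np.log(q1) + p[1] * np.log(1 - q1)

print(q1[np.argmax(scores)])       # ~0.33, i.e. the maximizer is q = p
```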
Sunday, September 20, 2020
Useful commands from a git review course:
```bash
git rm --cached -r build/
```
The --cached flag only removes the files from the index; they remain on disk.
Sunday, August 23, 2020
babel-node template
```bash
npm install babel-cli babel-preset-env --save-dev
```
In .babelrc:

```json
{
  "presets": [
    "env"
  ]
}
```
```bash
nodemon src/index.js --exec babel-node
```
Saturday, August 22, 2020
Completely uninstall apache2 to get fresh config
```bash
apt-get remove --purge apache2 apache2-data apache2-utils
```
Thursday, August 20, 2020
PowerShell command to debug iOS in Chrome
```bash
remotedebug_ios_webkit_adapter --port=9000
```
Copy all recently updated files into a single directory when version control is (horribly) not done with git:
```bash
#!/bin/bash

git add .
git status

# list the paths of files that git sees as changed
updatedFiles=$(git status | awk '{print $2}' | grep -P "\..*$")

touch updates/update.txt
git status > updates/update.txt

# copy each changed file into ./updates, keeping its directory structure
for file in $updatedFiles
do
  cp --parents "$file" ./updates
  echo "copied $file to ./updates/$file"
done

read -p "Press enter to exit"
```
Monday, August 3, 2020
```java
import org.springframework.boot.SpringApplication;

public class Main {
    public static void main(String[] args) {
        // disable Spring DevTools auto-restart
        System.setProperty("spring.devtools.restart.enabled", "false");
        SpringApplication.run(Main.class, args);
    }
}
```
Sunday, August 2, 2020
Hibernate Database Configuration without XML
```java
package com.springboot.mvc;

import java.util.Properties;

import com.springboot.mvc.models.Customer;

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.cfg.Environment;
import org.hibernate.service.ServiceRegistry;

public class HibernateUtil {
    private static SessionFactory sessionFactory;

    public static SessionFactory getSessionFactory() {
        if (sessionFactory == null) {
            try {
                Configuration configuration = new Configuration();

                // Hibernate settings equivalent to hibernate.cfg.xml's properties
                Properties settings = new Properties();
                settings.put(Environment.DRIVER, "com.mysql.cj.jdbc.Driver");
                settings.put(Environment.URL, "jdbc:mysql://..."); // the JDBC URL value was lost in the original snippet
                settings.put(Environment.USER, "cclee");
                settings.put(Environment.PASS, "ccleedb12345");
                settings.put(Environment.DIALECT, "org.hibernate.dialect.MySQL55Dialect");
                settings.put(Environment.SHOW_SQL, "true");
                settings.put(Environment.CURRENT_SESSION_CONTEXT_CLASS, "thread");
                settings.put(Environment.HBM2DDL_AUTO, "create-drop");
                configuration.setProperties(settings);

                configuration.addAnnotatedClass(Customer.class); // we add more and more classes here

                ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
                        .applySettings(configuration.getProperties()).build();
                sessionFactory = configuration.buildSessionFactory(serviceRegistry);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        // Then:
        // Session session = sessionFactory.openSession();
        // Transaction transaction = session.beginTransaction();
        return sessionFactory;
    }
}
```
Tuesday, July 21, 2020
basic setup for hibernate
```xml
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.hbm2ddl.auto">update</property>

    <!-- JDBC Database connection settings -->
    <property name="connection.driver_class">com.mysql.cj.jdbc.Driver</property>
    <property name="connection.url">jdbc:mysql://192.168.99.100:3306/JDBC_test?useSSL=false</property>
    <property name="connection.username">root</property>
    <property name="connection.password">cclee12345@12345</property>

    <!-- JDBC connection pool settings ... using built-in test pool -->
    <property name="connection.pool_size">1</property>

    <!-- Select our SQL dialect -->
    <property name="hibernate.dialect">org.hibernate.dialect.MySQL55Dialect</property>

    <!-- Echo the SQL to stdout -->
    <property name="show_sql">true</property>

    <!-- Set the current session context -->
    <property name="current_session_context_class">thread</property>

    <property name="hibernate.hbm2ddl.auto">create-drop</property>

    <mapping class="com.machingclee.hibernatetutorial.models.Student"/>
  </session-factory>
</hibernate-configuration>
```
```xml
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>8.0.20</version>
</dependency>
```
Wednesday, July 1, 2020
Record for my Docker Files
```dockerfile
FROM node:10

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install && npm rebuild bcrypt --build-from-source

EXPOSE 3000

CMD ["npm", "start"]
```
```yaml
version: "3.7"

services:
  db:
    container_name: postgres_screencapdic_db
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: cclee11111
      POSTGRES_PASSWORD: cclee11111
      POSTGRES_DB: screencapdb
    volumes:
      - screencapdb:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  # screencap_api:
  #   build:
  #     context: ./
  #     dockerfile: Dockerfile-screepcap-express
  #   container_name: screencap_api
  #   restart: always
  #   ports:
  #     - "8080:3000"
  #   volumes:
  #     - type: bind
  #       source: ./
  #       target: /usr/src/app
  #     - /usr/src/app/node_modules

volumes:
  screencapdb:
```
Tuesday, June 23, 2020
State Pattern
https://github.com/machingclee/ScreenCapDictionaryNoteApp_refactor/tree/2020-06-23-refactor-translation-by-state-pattern/ScreenCapDictionaryNoteApp/ViewModel/Helpers/TranslationHelper
Wednesday, June 17, 2020
Bash Script
```bash
for f in *\ *; do mv "$f" "${f// /_}"; done
```

This renames every file whose name contains a space, replacing the spaces with underscores (${f// /_} is bash's global substring replacement).
Monday, June 15, 2020
Use Sequelize Migration with ES6 Syntax
```bash
yarn add sequelize-cli
```
It is clear from the --help command how to generate a migration folder and migration files. The only trouble is using them with ES6 syntax.
From the official document:
https://sequelize.org/master/manual/migrations.html#using-babel

we add
```bash
yarn add babel-register
```
```js
// .sequelizerc
require("babel-register");

const path = require('path');

module.exports = {
  'config': path.resolve('config', 'config.json'),
  'models-path': path.resolve('models'),
  'seeders-path': path.resolve('seeders'),
  'migrations-path': path.resolve('migrations')
}
```
The official document on the query interface, https://sequelize.org/master/manual/query-interface.html, also says that in a migration file we can export async function up and async function down instead of returning a chain of promises (i.e., a promise)! For example, it happens that I want to add a column for users to implement mobile push notification. To add a column called push_notification_token, I can do the following in our migration file:
```js
"use strict";
import { modelNames } from "../src/enums/modelNames";
import { Sequelize, DataTypes } from "sequelize";

module.exports = {
  async up(queryInterface, Sequelize) {
    await queryInterface.addColumn(modelNames.USER + "s", "push_notification_token", {
      type: DataTypes.STRING,
      allowNull: true
    });
  },

  async down(queryInterface, Sequelize) {
    await queryInterface.removeColumn(
      modelNames.USER + "s",
      "push_notification_token",
      {}
    );
  }
};
```
Running the migration then fails with:

```
Loaded configuration file "config\config.js".
Using environment "development".

== 20200615141047-add-push-notification-token-to-users-table: migrating =======

ERROR: regeneratorRuntime is not defined
```
The async functions compile down to generator code that needs the regenerator runtime, so we add:

```bash
yarn add babel-plugin-transform-runtime
```
and in .babelrc:

```json
{
  "presets": ["env"],
  "plugins": [
    ["transform-runtime", { "regenerator": true }]
  ]
}
```
Sunday, June 14, 2020
SQL Injection and "prepared statement already exists"
This problem suddenly came to mind today while I was writing a backend for my mobile app and wanted a customized query result. Judging from the sequelize docs, this is no simpler than writing a raw query, so I started writing raw SQL myself and tried to SQL-inject my own service. I found that without any precautions this is really dangerous; it can even destroy my whole database.
So I started learning to write prepared statements:
```sql
PREPARE get_notes (int) AS
SELECT v."id", v."word", v."pronounciation", v."explanation", p."dateTime", p."croppedScreenshot"
FROM vocabs v
INNER JOIN pages p
ON v."sqlitePageId" = p."sqliteId"
WHERE p."sqliteNoteId" = $1;

EXECUTE get_notes(${sqliteNoteId});
```
Running the PREPARE a second time in the same session, however, raises:

```
error: prepared statement "get_notes" already exists
```
The fix is to DEALLOCATE the statement once we are done with it:

```sql
PREPARE get_notes (int) AS
SELECT v."id", v."word", v."pronounciation", v."explanation", p."dateTime", p."croppedScreenshot"
FROM vocabs v
INNER JOIN pages p
ON v."sqlitePageId" = p."sqliteId"
WHERE p."sqliteNoteId" = $1;

EXECUTE get_notes(${sqliteNoteId});

DEALLOCATE get_notes;
```
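As an aside (my own addition, in Python rather than this project's Node stack): most database drivers let you avoid manual PREPARE/DEALLOCATE entirely by passing parameters separately from the SQL text, which also blocks injection. A minimal sketch with psycopg2, reusing the table and column names above (connection details are placeholders):

```python
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="user", password="password")
cur = conn.cursor()

sqlite_note_id = 1  # example id
cur.execute(
    'SELECT v."id", v."word", v."pronounciation", v."explanation", '
    '       p."dateTime", p."croppedScreenshot" '
    'FROM vocabs v '
    'INNER JOIN pages p ON v."sqlitePageId" = p."sqliteId" '
    'WHERE p."sqliteNoteId" = %s',
    (sqlite_note_id,),  # value sent separately from the SQL text, never spliced in
)
print(cur.fetchall())
```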
Thursday, June 4, 2020
Sequelize API for CRUD operation
```js
module.exports = (app, db) => {
  app.get("/posts", (req, res) =>
    db.post.findAll().then((result) => res.json(result))
  );

  app.get("/post/:id", (req, res) =>
    db.post.findByPk(req.params.id).then((result) => res.json(result))
  );

  app.post("/post", (req, res) =>
    db.post.create({
      title: req.body.title,
      content: req.body.content
    }).then((result) => res.json(result))
  );

  app.put("/post/:id", (req, res) =>
    db.post.update({
      title: req.body.title,
      content: req.body.content
    }, {
      where: { id: req.params.id }
    }).then((result) => res.json(result))
  );

  app.delete("/post/:id", (req, res) =>
    db.post.destroy({
      where: { id: req.params.id }
    }).then((result) => res.json(result))
  );
};
```
Monday, June 1, 2020
Use docker-compose up instead of docker container run -v blablabla for local development
```yaml
version: "3.7"

services:
  app:
    container_name: docker-node-mongo
    restart: always
    build: .
    ports:
      - "80:3000"
    volumes:
      - type: bind
        source: ./
        target: /usr/src/app

  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "27017:27017"
```
Thursday, May 28, 2020
Redux Setup that also applies to React-Native
```js
import { createStore, combineReducers } from "redux";
import trackFormReducer from "./reducers/trackFormReducer";

const rootReducer = combineReducers({
  trackFormReducer: trackFormReducer
});

const store = createStore(
  rootReducer /* preloadedState, */,
  window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__()
);

export default store;
```
```js
import { Provider as StoreProvider } from "react-redux";
```

Then wrap the root component in <StoreProvider store={store}> so every component can connect to the store.
Tuesday, May 26, 2020
Make sure the swarm manager node does not run any containers:
```bash
docker node update --availability drain <manager host name>
```
Wednesday, May 20, 2020
Docker Exercise: docker-compose
```yaml
version: "2"

services:
  drupal:
    image: drupal
    ports:
      - "8080:80"
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes

  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=1234

volumes:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:
```
When we are done, we use docker-compose down -v to remove everything.
Note that the service name is implicitly also the hostname of the service. For instance, when we try to connect to the postgres database inside the network from one of the containers (for example, our drupal service), the hostname can simply be put as postgres.
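To illustrate (my own sketch, not part of the exercise): any other container attached to the same compose network could reach the database just by using the service name as the hostname. In Python, for instance:

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="postgres",    # the compose service name doubles as the hostname
    user="postgres",    # image defaults; only POSTGRES_PASSWORD is set above
    password="1234",
    dbname="postgres",
)
print(conn.status)      # 1 (STATUS_READY) once connected
```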
Another example of docker-compose:
```yaml
version: "2"

services:
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    image: nginx-custom
    ports:
      - "80:80"

  web:
    image: httpd
    volumes:
      - ./html:/usr/local/apache2/htdocs/
```
Two ways to get local files into a container:
- use COPY in the Dockerfile
- use volumes in docker-compose.yml
Docker Exercise
```dockerfile
FROM node:14.2.0-alpine3.10

EXPOSE 3000

RUN apk add --update tini

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY package.json package.json

RUN npm install

COPY . .

CMD ["tini", "--", "node", "./bin/www"]
```
```bash
docker build -t testnode .   # the build context "." was missing in the original
docker tag testnode machingclee/testnode
docker push machingclee/testnode
```
Tuesday, May 12, 2020
Standard docker commands
```bash
docker container run -d -p 3306:3306 --name db -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
```
```bash
docker container run -it --name ubuntu ubuntu
```

After leaving the container with exit, start it again and reattach by:

```bash
docker container start -ai ubuntu
```
Friday, May 8, 2020
Bash Script for Batch Renaming
```bash
# the closing quote was misplaced in the original (... ./"$renamedmp3; done")
for mp3 in *.mp3; do renamedmp3=$(echo "$mp3" | sed 's/_mixdown.mp3$/.mp3/'); mv ./"$mp3" ./"$renamedmp3"; done
```
```bash
#!/bin/bash

list=$(find -name '*.mp3' | grep '_mixdown\.mp3')

for mp3FilePath in $list
do
  newMp3FilePath=$(echo "$mp3FilePath" | sed 's/_mixdown\.mp3/_vo\.mp3/')
  mv "$mp3FilePath" "$newMp3FilePath"   # quoted, so paths with spaces survive
done
```
Friday, March 27, 2020
webpack config
```js
const path = require("path");

module.exports = {
  entry: "./src/index.js",
  output: {
    path: __dirname,
    filename: "app.js"
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader",
          options: {
            presets: ["@babel/preset-env"],
            plugins: [
              ["@babel/plugin-transform-runtime", { regenerator: true }]
            ]
          }
        }
      }
    ]
  },
  target: "node",
  node: {
    __dirname: false,
    __filename: false
  },
  externals: {
    fs: "commonjs fs"
  },
  mode: "development"
};
```
Sunday, March 22, 2020
Data Validation using Decorator in Typescript
```html
<form>
  <input type="text" placeholder="Course Title" id="title" />
  <input type="text" placeholder="Price" id="price" />
  <button type="submit">Submit</button>
</form>
```
```ts
enum Validation {
  required = "required",
  positive = "positive"
}

interface ValidatorConfig {
  [property: string]: {
    [validatableProp: string]: Validation[]; // e.g., [Validation.required, Validation.positive]
  };
}

const registeredValidators: ValidatorConfig = {};

function Required(target: any, propName: string) {
  // we don't have a PropertyDescriptor for properties; it exists only for methods
  registeredValidators[target.constructor.name] = {
    ...registeredValidators[target.constructor.name],
    [propName]: [Validation.required]
  };
}

function Positive(target: any, propName: string) {
  registeredValidators[target.constructor.name] = {
    ...registeredValidators[target.constructor.name],
    [propName]: [Validation.positive]
  };
}

function validate(obj: any) {
  const objValidatorConfig = registeredValidators[obj.constructor.name];
  if (!objValidatorConfig) {
    return true;
  } else {
    let validated = true;
    for (const prop in objValidatorConfig) {
      for (const validator of objValidatorConfig[prop]) {
        if (validator === (Validation.required as string)) {
          validated = validated && obj[prop].trim().length > 0;
        }
        if (validator === (Validation.positive as string)) {
          validated = validated && obj[prop] > 0;
        }
      }
    }
    return validated;
  }
}

class Course {
  @Required public title: string;
  @Positive public price: number;

  constructor(t: string, p: number) {
    this.title = t;
    this.price = p;
  }
}

const courseForm = document.querySelector("form")!;
courseForm.addEventListener("submit", e => {
  e.preventDefault();
  const titleEl = document.getElementById("title") as HTMLInputElement;
  const priceEl = document.getElementById("price") as HTMLInputElement;

  const title = titleEl.value;
  const price = +priceEl.value;

  const newCourse = new Course(title, price);
  if (!validate(newCourse)) {
    alert("Invalid input, please try again!");
    return;
  }
  console.log(newCourse);
});
```
Sunday, March 15, 2020
Wordpress Study Notes
Extensions in VSCode
- I install beautify for cleaning up the indentations of both php and html code at the same time.
- I also install PHP Intelephense to give auto-completion on HTML tags.
Prelude
Hierarchy of wordpress php files: Assume that we have a post type called "program", as registered by using

```php
<?php
function university_post_types() {
    // Program Post Type
    register_post_type("program", array(
        'supports' => array('title', 'editor'),
        'rewrite' => array('slug' => 'programs'),
        'has_archive' => true,
        'public' => true,
        'labels' => array(
            'name' => 'Programs',
            'add_new_item' => 'Add New Program',
            'edit_item' => 'Edit Program',
            'all_items' => 'All Programs',
            'singular_name' => 'Program'
        ),
        'menu_icon' => 'dashicons-awards'
    ));
}
add_action("init", "university_post_types");
?>
```
in \wp-content\mu-plugins\*.php (mu stands for "must-use"); the new post type then shows up in our dashboard. This is a kind of customized post type, therefore we will create a php file that is dedicated to customizing posts of type "program".
Wednesday, March 4, 2020
php installation
- Follow this link to install php: https://www.youtube.com/watch?v=4_-12QSaaFg
- Follow this to install xampp: https://www.youtube.com/watch?v=TjFRTkw6GDQ
- Visual Studio Code plug-ins that I have used:
1. phpfmt - PHP formatter
2. Format HTML in PHP
Refer to my config file sent in Gmail.
Tuesday, March 3, 2020
Factory Design Pattern in Typescript
```ts
class Point {
  x: number;
  y: number;

  private constructor(x: number, y: number) {
    this.x = x;
    this.y = y;
  }

  static Factory = class {
    static pointXY(x: number, y: number) {
      return new Point(x, y);
    }
    static pointPolar(r: number, theta: number) {
      return new Point(
        r * Math.cos((theta * Math.PI) / 180),
        r * Math.sin((theta * Math.PI) / 180)
      );
    }
  };
}
```
```ts
const pt = Point.Factory.pointXY(10, 20);
```
Monday, February 17, 2020
Certbot
```bash
$ sudo apt update
$ clear
$ sudo apt install apache2
$ cd /etc/apache2/sites-available/
$ clear
$ ls
$ sudo vi ridiculous-inc.com.conf
$ cd /var/www
$ sudo git clone https://github.com/ridiculous-ijquery-todo.git ridic
$ sudo a2ensite ridiculous-inc.com.conf
$ sudo service apache2 restart
$ sudo apt-get update
$ clear
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ clear
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache
$ clear
$ sudo certbot --apache
$ history
```
```apache
<VirtualHost *:80>
    DocumentRoot /var/www/ridic
    ServerName ridiculous-inc.com
    <Directory "/var/www/ridic">
        allow from all
        AllowOverride All
        Order allow,deny
        Options +Indexes
    </Directory>
</VirtualHost>
```
```apache
<VirtualHost *:80>
    ServerName api.screencapdictionary.com
    <Location "/">
        ProxyPreserveHost On
        ProxyPass http://localhost:5000/
        ProxyPassReverse http://localhost:5000/
    </Location>
</VirtualHost>
```
Saturday, February 8, 2020
Git commands that I should know
Configuration
```bash
git config --global --edit
```
```bash
git config --global --list
```
Branches
List out all branches:

```bash
git branch -a
```
Create a new branch:

```bash
git branch mynewbranch
```

Switch to it:

```bash
git checkout mynewbranch
```

Rename a branch:

```bash
git branch -m mynewbranch newbranch
```

Delete a branch:

```bash
git branch -d newbranch
```
Specific branch
git clone a specific branch instead of cloning all the branches and then checking out a specific one. For example, in my private repo I want to clone the "forth-branch" only, then write:

```bash
git clone --single-branch --branch forth-branch https://github.com/machingclee/2020-English-Learning-Website.git
```
P4Merge Configuration
```bash
git config --global merge.tool p4merge
git config --global mergetool.p4merge.path "C:/Program Files/Perforce/p4merge.exe"
git config --global mergetool.prompt false

git config --global diff.tool p4merge
git config --global difftool.p4merge.path "C:/Program Files/Perforce/p4merge.exe"
git config --global difftool.prompt false
```
Use

```bash
git config --global --list
```

to double check the configuration:
```
core.editor="C:\Users\Ching-Cheong Lee\AppData\Local\Programs\Microsoft VS Code\Code.exe" --wait
user.name=James Lee
user.email=machingclee@gmail.com
color.ui=true
merge.tool=p4merge
mergetool.p4merge.path=C:/Program Files/Perforce/p4merge.exe
mergetool.prompt=false
diff.tool=p4merge
difftool.p4merge.path=C:/Program Files/Perforce/p4merge.exe
difftool.prompt=false
```