Saturday, December 12, 2020
Remove Sensitive Environment Variable / File That is too Big from Remote Repo
cd to the top level of the repo first, then run

git filter-branch --force --index-filter \
  "git rm --cached --ignore-unmatch <path to your file>" \
  --prune-empty --tag-name-filter cat -- --all

Add the -r flag to the inner git rm if you want to remove a whole directory.
REF: git pull origin the-remote-branch --allow-unrelated-histories and resolve conflicts.
Friday, December 11, 2020
Examine Output Size in Tensorflow
import math
import tensorflow as tf
from tensorflow.keras.layers import MaxPool2D

x = tf.constant([[1., 1., 1., 2., 3.],
                 [1., 1., 4., 5., 6.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.],
                 [1., 1., 7., 8., 9.]])
x = tf.reshape(x, [1, 5, 5, 1])
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
print(math.ceil(5 / 2))

which yields
print(MaxPool2D((5, 5), strides=(2, 2), padding="same")(x))
tf.Tensor(
[[[[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]

  [[7.]
   [9.]
   [9.]]]], shape=(1, 3, 3, 1), dtype=float32)
3

For a layer that has trainable weights, we may try the following for testing:
from tensorflow.keras.layers import Conv2D

model = Conv2D(3, (3, 3), strides=(2, 2), padding="same",
               kernel_initializer=tf.constant_initializer(1.))
x = tf.constant([[1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.],
                 [1., 2., 3., 4., 5.]])
x = tf.reshape(x, (1, 5, 5, 1))
print(model(x))

which yields
x = tf.constant([[1., 2., 3., 4., 5.],...
tf.Tensor(
[[[[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]

  [[ 9.  9.  9.]
   [27. 27. 27.]
   [27. 27. 27.]]

  [[ 6.  6.  6.]
   [18. 18. 18.]
   [18. 18. 18.]]]], shape=(1, 3, 3, 3), dtype=float32)

In fact it can be proved for both MaxPooling2D and Conv2D that if stride $=s$ and padding $=$ same, then the output spatial size is $\lceil n/s\rceil$, where $n$ is the input spatial size (hence the print(math.ceil(5/2)) above, which gives 3).
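To spot-check this formula with a different size and stride (a minimal sketch; the sizes and the helper name same_padding_output_size below are mine, not from the original note):

import math
import tensorflow as tf
from tensorflow.keras.layers import MaxPool2D

def same_padding_output_size(n, s):
    # With padding="same", the output size depends only on the input size and the stride.
    return math.ceil(n / s)

n, s = 7, 3
x = tf.reshape(tf.range(n * n, dtype=tf.float32), (1, n, n, 1))
y = MaxPool2D(pool_size=(2, 2), strides=(s, s), padding="same")(x)
print(y.shape[1], same_padding_output_size(n, s))  # both print 3 = ceil(7/3)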
Sunday, December 6, 2020
conda virtual environment command
conda create --name tensorflow python=3.7
conda env remove --name tensorflow
conda env export --name ENVNAME > envname.yml
conda env create --file envname.yml
Wednesday, October 28, 2020
Record model.compile options
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

Then our model.compile might have the following as arguments:
model.compile(optimizer=tf.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

With a callback that stops training at a desired loss:
import tensorflow as tf
print(tf.__version__)

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('loss') < 0.4:
            print("\nLoss is below 0.4 so cancelling training!")
            self.model.stop_training = True

callbacks = myCallback()

mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images / 255.0
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
Saturday, October 10, 2020
ResNet
def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
               name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X
def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides=(s, s), name=conv_name_base + '2a',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(F2, (f, f), strides=(1, 1), padding="same", name=conv_name_base + '2b',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(F3, (1, 1), strides=(1, 1), padding="valid", name=conv_name_base + '2c',
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    ##### SHORTCUT PATH #### (≈2 lines)
    X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), padding="valid", name=conv_name_base + '1',
                        kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)

    ### END CODE HERE ###

    return X
def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    ### START CODE HERE ###

    # Stage 3 (≈4 lines)
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block="a", s=2)
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block="b")
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block="c")
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block="d")

    # Stage 4 (≈6 lines)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block="a", s=2)
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="b")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="c")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="d")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="e")
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block="f")

    # Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block="c")

    # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D(pool_size=(2, 2), name="avg_pool")(X)

    ### END CODE HERE ###

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model
# build the model defined above
model = ResNet50(input_shape=(64, 64, 3), classes=6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

model.fit(X_train, Y_train, epochs=2, batch_size=32)

preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
Friday, October 9, 2020
Code Assignment
def HappyModel(input_shape):
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)
    X = Conv2D(18, (7, 7), strides=(1, 1), name="conv0")(X)
    X = BatchNormalization(axis=3, name="bn0")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D((2, 2), name="max_pool")(X)
    X = Flatten()(X)
    X = Dense(1, activation="sigmoid", name="fC")(X)
    model = Model(inputs=X_input, outputs=X, name="happy_model")
    return model

happyModel = HappyModel(X_train.shape[1:])
happyModel.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
happyModel.fit(x=X_train, y=Y_train, epochs=10, batch_size=20)
preds = happyModel.evaluate(x=X_test, y=Y_test)

img_path = 'images/smile.jpg'
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
Tuesday, September 29, 2020
Derive the Formula of $\displaystyle \frac{\partial \mathcal L}{\partial W^{[\ell]}}$
I accidentally found that, by the formulas in the previous post, we can already derive the following:
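\[
dW^{[\ell]} = \frac{\partial \mathcal L}{\partial W^{[\ell]}} = \frac{1}{m}\left( \Phi^{[\ell]}{}'(U^{[\ell]})* \left[\prod_{i=\ell+1}^{L-1} \Phi^{[i]}{}'(U^{[i]}) * W^{[i]T}\right]\cdot dY^{[L-1]}\right) \cdot Y^{[\ell-1]T}.
\]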
Proof. By repeated use of the formula $dY^{[\ell]} = [W^{[\ell+1]T}dY^{[\ell+1]}] * \Phi^{[\ell+1]}{}'(U^{[\ell+1]})$ we have \[\begin{align*} dW^{[\ell]}& = \frac{1}{m} dU^{[\ell]} Y^{[\ell-1]T}\\ &=\frac{1}{m}\left(\left[dY^{[\ell]}\right] * \Phi^{[\ell]}{}'(U^{[\ell]})\right) Y^{[\ell-1]T}\\ &=\frac{1}{m}\left( \Phi^{[\ell]}{}'(U^{[\ell]})* \left[\prod_{i=\ell+1}^{L-1} \Phi^{[i]}{}'(U^{[i]}) * W^{[i]T}\right]\cdot dY^{[L-1]}\right) \cdot Y^{[\ell-1]T} \end{align*} \] And recall that $dY^{[L]} =\displaystyle \frac{\partial \mathcal L}{\partial Y^{[L]}}. \qed$
Sunday, September 27, 2020
Formulas Revisit
Saturday, September 26, 2020
Intuitive derivation of Cross Entropy as a "loss" function
In defining "loss" function for classification problems given $p_i=\mathbb P\{\text{$i$ occurs}\}$, $i=1,2,\dots,n$, from emperical data, we measure the accuracy of estimated data (from our output layer in neuron network) $[q_1,q_2,\dots,q_n]$ by the cross-entropy: \[L=\sum_{i=1}^n p_i\ln q_i.\] Recently I revisit this topic, and understand that this comes very naturally from solving maximum-likelihood estimation problem!
Let's take an example: consider flipping a coin which gives a head with probability $p$ and a tail with probability $1-p$. Then the probability of getting 2 heads out of 6 flips is \[
L(p) = \binom{6}{2} p^2 (1-p)^4 = 15 p^2(1-p)^4.
\] Maximum-likelihood estimation asks the following question:
The phenomenon of getting 2 heads is most likely to happen under what value of $p$?
In other words, the above question is the same as: at what value of $p$ is the probability $L(p)$ maximized? By simply solving $L'(p)=0$ we get the answer $p=\frac{1}{3}$.
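Explicitly,
\[
L'(p) = 15\left(2p(1-p)^4 - 4p^2(1-p)^3\right) = 30\,p(1-p)^3(1-3p),
\]
which vanishes on $(0,1)$ precisely at $p=\frac{1}{3}$.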
But in more complex problems we cannot express the probability of a phenomenon in terms of the underlying probability by an explicit formula. Instead of computing the probability $p$ directly, we try to estimate it such that our observation (the phenomenon from empirical data) is most likely to occur, and such an estimated value of $p$ is considered a good estimate.
Now the derivation of cross-entropy will be very intuitive: Assume that \[
\text{mutually disjoint }E_i=\{\text{$i$ occurs}\},\quad \mathbb P(E_i) = p_i, \quad i=1,2,3,\dots,n.
\]
And assume further that the trials are i.i.d. Consider events $A_1,\dots,A_N$ such that $A_j = \bigcup_{i=1}^n E_i$ for each $j$ (for example, flipping a coin $N$ times); then $p_i = N_i/N$, where $N_i$ is the number of times $i$ occurs among $A_1,\dots,A_N$.
Now we get another estimate $q_i$ of the same event $E_i$ from whatever experiment we can imagine. How good is $[q_1,\dots,q_n]$ as an estimate of the past empirical data $[p_1,\dots,p_n]$? The standard distance in $\mathbb R^n$ is certainly not a good choice, since a discrepancy of $\epsilon$ between $q_i$ and $p_i$ can mean something very different from the same discrepancy between $q_{i'}$ and $p_{i'}$. Instead, $[q_1,\dots,q_n]$ is considered a good estimate if the observed phenomenon \[
\{\text{1 appears $N_1$ times}\}, \quad \{\text{2 appears $N_2$ times}\},\quad \dots ,\quad \{\text{n appears $N_n$ times} \}
\] is very likely to happen under the estimates $[q_1,\dots,q_n]$, i.e., when \[
L = \prod_{i=1}^n q_i^{N_i}, \quad\text{or equivalently}\quad \frac{\ln L}{N}= \sum_{i=1}^n \frac{N_i}{N}\ln q_i = \sum_{i=1}^n p_i\ln q_i,
\] is large. Maximizing this last sum over $[q_1,\dots,q_n]$ is the same as minimizing the cross-entropy $-\sum_{i=1}^n p_i\ln q_i$, and we have derived the cross-entropy at this point.
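As a quick numerical sanity check (a minimal sketch in NumPy; the distributions below are made up for illustration and are not from the original post), the average log-likelihood $\sum_i p_i\ln q_i$ is largest, equivalently the cross-entropy is smallest, exactly when $q$ matches the empirical $p$:

import numpy as np

p = np.array([0.5, 0.3, 0.2])  # empirical frequencies N_i / N

def avg_log_likelihood(q):
    # (ln L) / N = sum_i p_i * ln(q_i)
    return float(np.sum(p * np.log(q)))

print(avg_log_likelihood(np.array([0.5, 0.3, 0.2])))  # q = p: about -1.03, the maximum
print(avg_log_likelihood(np.array([0.4, 0.4, 0.2])))  # any other q gives less: about -1.05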
Sunday, September 20, 2020
Useful command in git review course:
git rm --cached -r build/

If we type git rm -h we get an explanation that:
--cached    only remove from the index

By "index" it means tracked files inside the staging area (files are always tracked once they get committed). A .gitignore has no effect on files that are already tracked, so we use git rm --cached.
Sunday, August 23, 2020
babel-node template
npm install babel-cli babel-preset-env --save-dev
{ "presets": [ "env" ] }
nodemon src/index.js --exec babel-node
Saturday, August 22, 2020
Completely uninstall apache2 to get fresh config
apt-get remove --purge apache2 apache2-data apache2-utils
Thursday, August 20, 2020
Powershell command to debug ios in chrome
remotedebug_ios_webkit_adapter --port=9000

Open Safari and browse to the page that is going to be inspected. Then in Chrome go to chrome://inspect/#devices and choose the device.
Copy all recently updated files into a single directory when version control is (horribly) not done with git:
#!/bin/bash
git add .
git status

updatedFiles=$(git status | awk '{print $2}' | grep -P "\..*$")

touch updates/update.txt
git status > updates/update.txt

for file in $updatedFiles
do
    cp --parents "$file" ./updates
    echo "copied $file to ./updates/$file"
done;

read -p "Press enter to exit"

mkdir updates first, and then run the bash script above. Files will be copied into the updates directory, and we can manage them by date.
Monday, August 3, 2020
public class Main {
    public static void main(String[] args) {
        System.setProperty("spring.devtools.restart.enabled", "false");
        SpringApplication.run(Main.class, args);
    }
}

Otherwise, for reasons I do not know, there is a restart mechanism, and the same class before and after a restart is not treated as the same class, so database transactions will fail.
Sunday, August 2, 2020
Hibernate Database Configuration without XML
package com.springboot.mvc;

import java.util.Properties;

import com.springboot.mvc.models.Customer;

import org.hibernate.SessionFactory;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.Configuration;
import org.hibernate.cfg.Environment;
import org.hibernate.service.ServiceRegistry;

public class HibernateUtil {
    private static SessionFactory sessionFactory;

    public static SessionFactory getSessionFactory() {
        if (sessionFactory == null) {
            try {
                Configuration configuration = new Configuration();

                // Hibernate settings equivalent to hibernate.cfg.xml's properties
                Properties settings = new Properties();
                settings.put(Environment.DRIVER, "com.mysql.cj.jdbc.Driver");
                settings.put(Environment.URL, "jdbc:mysql://192.168.99.100:3306/JDBC_spring_mvc_tutorial?useSSL=false&serverTimezone=UTC");
                settings.put(Environment.USER, "cclee");
                settings.put(Environment.PASS, "ccleedb12345");
                settings.put(Environment.DIALECT, "org.hibernate.dialect.MySQL55Dialect");
                settings.put(Environment.SHOW_SQL, "true");
                settings.put(Environment.CURRENT_SESSION_CONTEXT_CLASS, "thread");
                settings.put(Environment.HBM2DDL_AUTO, "create-drop");

                configuration.setProperties(settings);
                configuration.addAnnotatedClass(Customer.class); // we add more and more classes here.

                ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder()
                        .applySettings(configuration.getProperties()).build();

                sessionFactory = configuration.buildSessionFactory(serviceRegistry);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        // Then:
        // Session session = sessionFactory.openSession();
        // Transaction transaction = session.beginTransaction();
        return sessionFactory;
    }
}
Tuesday, July 21, 2020
basic setup for hibernate
The hibernate.cfg.xml carries roughly these session-factory settings: hbm2ddl.auto = update, connection.driver_class = com.mysql.cj.jdbc.Driver, connection.url = jdbc:mysql://192.168.99.100:3306/JDBC_test?useSSL=false, connection.username = root, connection.password = cclee12345@12345, connection.pool_size = 1, dialect = org.hibernate.dialect.MySQL55Dialect, show_sql = true, current_session_context_class = thread, hbm2ddl.auto = create-drop.

And in pom.xml we need the dependency

mysql : mysql-connector-java : 8.0.20

in addition to the Hibernate Maven dependencies.
Wednesday, July 1, 2020
Record for my Docker Files
FROM node:10

WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && npm rebuild bcrypt --build-from-source
EXPOSE 3000
CMD ["npm", "start"]

and
version: "3.7" services: db: container_name: postgres_screencapdic_db image: postgres restart: always environment: POSTGRES_USER: cclee11111 POSTGRES_PASSWORD: cclee11111 POSTGRES_DB: screencapdb volumes: - screencapdb:/var/lib/postgresql/data ports: - "5432:5432" # screencap_api: # build: # context: ./ # dockerfile: Dockerfile-screepcap-express # container_name: screencap_api # restart: always # ports: # - "8080:3000" # volumes: # - type: bind # source: ./ # target: /usr/src/app # - /usr/src/app/node_modules volumes: screencapdb:
Tuesday, June 23, 2020
State Pattern
https://github.com/machingclee/ScreenCapDictionaryNoteApp_refactor/tree/2020-06-23-refactor-translation-by-state-pattern/ScreenCapDictionaryNoteApp/ViewModel/Helpers/TranslationHelper
Wednesday, June 17, 2020
Bash Script
for f in *\ *; do mv "$f" "${f// /_}"; done

This replaces every space in a file name with an underscore "_".
Monday, June 15, 2020
Use Sequelize Migration with ES6 Syntax
yarn add sequelize-cli
It is clear from the --help command how to generate a migration folder and migration files. The only trouble is using them with ES6 syntax.
From the official document:
https://sequelize.org/master/manual/migrations.html#using-babel

we add
yarn add babel-register

and add a .sequelizerc runtime config with
// .sequelizerc
require("babel-register");

const path = require('path');

module.exports = {
  'config': path.resolve('config', 'config.json'),
  'models-path': path.resolve('models'),
  'seeders-path': path.resolve('seeders'),
  'migrations-path': path.resolve('migrations')
}

We can copy the implementation of altering, creating, and deleting tables from the official documentation:
https://sequelize.org/master/manual/query-interface.html

The official documentation also says that in a migration file we can export async function up and async function down instead of returning a chain of promises (i.e., a promise)! For example, it happens that I want to add a column for users to implement mobile push notification, so I need to add a column called push_notification_token; I can do the following in our migration file:
"use strict"; import { modelNames } from "../src/enums/modelNames"; import { Sequelize, DataTypes } from "sequelize"; module.exports = { async up(queryInterface, Sequelize) { await queryInterface.addColumn(modelNames.USER + "s", "push_notification_token", { type: DataTypes.STRING, allowNull: true }); }, async down(queryInterface, Sequelize) { await queryInterface.removeColumn( modelNames.USER + "s", "push_notification_token", {} ); } };Now if you run the code, we encounter the following error
Loaded configuration file "config\config.js".
Using environment "development".
== 20200615141047-add-push-notification-token-to-users-table: migrating =======

ERROR: regeneratorRuntime is not defined

so we need the transform runtime plugin by babel,
yarn add babel-plugin-transform-runtime

and in our .babelrc add:
{ "presets": ["env"], "plugins": [ ["transform-runtime", { "regenerator": true }] ] }and we are done!
Sunday, June 14, 2020
SQL injection and "prepared statement already exists"
This problem suddenly came to my mind today, as I am writing a backend for my own mobile app and want a customized query result. Looking through the Sequelize docs, this is no simpler than writing a raw query, so I started writing raw SQL myself and tried to SQL-inject myself. It turns out that without any precautions this is really dangerous; it can even destroy my entire database:
So I started learning to write prepared statements:
PREPARE get_notes (int) AS
SELECT v."id", v."word", v."pronounciation", v."explanation", p."dateTime", p."croppedScreenshot"
FROM vocabs v
INNER JOIN pages p ON v."sqlitePageId" = p."sqliteId"
WHERE p."sqliteNoteId" = $1;

EXECUTE get_notes(${sqliteNoteId});

I made a GET request in Postman once and everything looked great; then I made one more GET request, and... eh?
error: prepared statement "get_notes" already exists

After searching for a solution: in the end we just need to remove the stored prepared statement every time the EXECUTE finishes, so the whole statement becomes:
PREPARE get_notes (int) AS
SELECT v."id", v."word", v."pronounciation", v."explanation", p."dateTime", p."croppedScreenshot"
FROM vocabs v
INNER JOIN pages p ON v."sqlitePageId" = p."sqliteId"
WHERE p."sqliteNoteId" = $1;

EXECUTE get_notes(${sqliteNoteId});

DEALLOCATE get_notes;
Thursday, June 4, 2020
Sequelize API for CRUD operation
module.exports = (app, db) => {
  app.get("/posts", (req, res) =>
    db.post.findAll().then((result) => res.json(result))
  );

  app.get("/post/:id", (req, res) =>
    db.post.findByPk(req.params.id).then((result) => res.json(result))
  );

  app.post("/post", (req, res) =>
    db.post
      .create({
        title: req.body.title,
        content: req.body.content
      })
      .then((result) => res.json(result))
  );

  app.put("/post/:id", (req, res) =>
    db.post
      .update(
        {
          title: req.body.title,
          content: req.body.content
        },
        { where: { id: req.params.id } }
      )
      .then((result) => res.json(result))
  );

  app.delete("/post/:id", (req, res) =>
    db.post
      .destroy({ where: { id: req.params.id } })
      .then((result) => res.json(result))
  );
};
Monday, June 1, 2020
Use docker-compose up instead of docker container run -v blablabla for local development
version: "3.7" services: app: container_name: docker-node-mongo restart: always build: . ports: - "80:3000" volumes: - type: bind source: ./ target: /usr/src/app mongo: container_name: mongo image: mongo ports: - "27017:27017"
Thursday, May 28, 2020
Redux Setup that also applies to React-Native
import { createStore, combineReducers } from "redux";
import trackFormReducer from "./reducers/trackFormReducer";

const rootReducer = combineReducers({
  trackFormReducer: trackFormReducer
});

const store = createStore(
  rootReducer /* preloadedState, */,
  window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__()
);

export default store;

and
import { Provider as StoreProvider } from "react-redux";

and wrap the app component with StoreProvider.
Tuesday, May 26, 2020
Make sure the swarm manager node does not run any containers
docker node update --availability drain <manager host name>
Wednesday, May 20, 2020
Docker Exercise: docker-compose
version: "2" services: drupal: image: drupal ports: - "8080:80" volumes: - drupal-modules:/var/www/html/modules - drupal-profiles:/var/www/html/profiles - drupal-sites:/var/www/html/sites - drupal-themes:/var/www/html/themes postgres: image: postgres environment: - POSTGRES_PASSWORD=1234 volumes: drupal-modules: drupal-profiles: drupal-sites: drupal-themes:The options can be found in the official docker page from hub.docker.com. cd into the directory that contains the above docker-compose.yml and run docker-compose up.
When we are done, we use docker-compose down -v to remove everything.
Note that the service name is implicitly also the hostname of the service. For instance, when we try to connect to the postgres database inside the network from one of the containers (for example, our drupal service), the hostname can simply be put as postgres.
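As an illustration of this (a minimal sketch, not part of the original exercise: it assumes a client container on the same compose network with Python and psycopg2 installed, and relies on the official postgres image's default user and database names), the database is reached through the service name:

import psycopg2  # assumed to be installed in the client container

conn = psycopg2.connect(
    host="postgres",    # the compose service name doubles as the hostname
    user="postgres",    # default user of the official postgres image
    password="1234",    # POSTGRES_PASSWORD from the compose file above
    dbname="postgres",  # default database name
)
print(conn.status)
conn.close()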
Another example of docker-compose:
version: "2" services: proxy: build: context: . dockerfile: nginx.Dockerfile image: nginx-custom ports: - "80:80" web: image: httpd volumes: - ./html:/usr/local/apache2/htdocs/If image is not found, it will run the build command and tag it with the image name. There are at least two ways to customize what to build.
- use COPY in Dockerfile
- use volumes in docker-compose.yml.
Docker Exercise
FROM node:14.2.0-alpine3.10

EXPOSE 3000

RUN apk add --update tini
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY package.json package.json
RUN npm install
COPY . .

CMD ["tini", "--", "node", "./bin/www"]

cd into the directory that contains the above as Dockerfile, and run
docker build -t testnode .

The above will build an image tagged testnode. After we make sure testnode can be run properly (by running docker container run --rm -p 80:3000 testnode), we change the tag name as follows:
docker tag testnode machingclee/testnode

and then
docker push machingclee/testnode

if we want.
Tuesday, May 12, 2020
Standard docker command
docker container run -d -p 3306:3306 --name db -e MYSQL_RANDOM_ROOT_PASSWORD=yes mysql

To test Linux, we can run another container that runs a minimal version of the Ubuntu image as follows:
docker container run -it --name ubuntu ubuntu

Here -it (i.e., -i and -t) allows us to kind of SSH into the container and get the shell ready. By typing
exit

inside the shell we return to our docker prompt. To go back to our shell inside the ubuntu container, we run
docker container start -ai ubuntu
Friday, May 8, 2020
Bash Script to Batch Renaming
for mp3 in *.mp3; do renamedmp3=$(echo "$mp3" | sed 's/_mixdown.mp3$/.mp3/'); mv ./"$mp3" ./"$renamedmp3"; done

The next one goes even further: it does the same kind of renaming in the current directory and all subdirectories:
#!/bin/bash

list=$(find -name '*.mp3' | grep '_mixdown\.mp3');

for mp3FilePath in $list
do
    newMp3FilePath=$(echo "$mp3FilePath" | sed 's/_mixdown\.mp3/_vo\.mp3/')
    mv $mp3FilePath $newMp3FilePath
done
Friday, March 27, 2020
webpack config
const path = require("path"); module.exports = { entry: "./src/index.js", output: { path: __dirname, filename: "app.js" }, module: { rules: [ { test: /\.js$/, exclude: /node_modules/, use: { loader: "babel-loader", options: { presets: ["@babel/preset-env"], plugins: [ [ "@babel/plugin-transform-runtime", { regenerator: true } ] ] } } } ] }, target: "node", node: { __dirname: false, __filename: false }, externals: { fs: "commonjs fs" }, mode: "development" };
Sunday, March 22, 2020
Data Validation using Decorator in Typescript
<form> <input type="text" placeholder="Course Title" id="title" /> <input type="text" placeholder="Price" id="price" /> <button type="submit">Submit</button> </form>and inside our ts (transpiled into js file and link it into our index.html), we write:
enum Validation {
  required = "required",
  positive = "positive"
}

interface ValidatorConfig {
  [property: string]: {
    [validatableProp: string]: Validation[]; // e.g., [Validation.required, Validation.positive]
  };
}

const registeredValidators: ValidatorConfig = {};

function Required(target: any, propName: string) {
  // we don't have a propertyDescriptor for a property, it exists only for methods
  registeredValidators[target.constructor.name] = {
    ...registeredValidators[target.constructor.name],
    [propName]: [Validation.required]
  };
}

function Positive(target: any, propName: string) {
  registeredValidators[target.constructor.name] = {
    ...registeredValidators[target.constructor.name],
    [propName]: [Validation.positive]
  };
}

function validate(obj: any) {
  const objValidatorConfig = registeredValidators[obj.constructor.name];
  if (!objValidatorConfig) {
    return true;
  } else {
    let validated = true;
    for (const prop in objValidatorConfig) {
      for (const validator of objValidatorConfig[prop]) {
        if (validator === (Validation.required as string)) {
          console.log("run?");
          validated = validated && obj[prop].trim().length > 0;
        }
        if (validator === (Validation.positive as string)) {
          validated = validated && obj[prop] > 0;
        }
      }
    }
    return validated;
  }
}

class Course {
  @Required
  public title: string;
  @Positive
  public price: number;

  constructor(t: string, p: number) {
    this.title = t;
    this.price = p;
  }
}

const courseForm = document.querySelector("form")!;
courseForm.addEventListener("submit", e => {
  e.preventDefault();
  const titleEl = document.getElementById("title") as HTMLInputElement;
  const priceEl = document.getElementById("price") as HTMLInputElement;
  const title = titleEl.value;
  const price = +priceEl.value;
  const newCourse = new Course(title, price);
  if (!validate(newCourse)) {
    alert("Invalid input, please try again!");
    return;
  }
  console.log(newCourse);
});
Sunday, March 15, 2020
Wordpress Study Notes
Extension in VSCode
- I install beautify for cleaning up the indentations of both php and html code at the same time.
- I also install PHP Intelephense to give auto completion on html tag.
Prelude
Hierarchy of wordpress php files: Assume that we have a post type called "program", as registered by using

<?php
function university_post_types() {
  // Program Post Type
  register_post_type("program", array(
    'supports' => array('title', 'editor'),
    'rewrite' => array('slug' => 'programs'),
    'has_archive' => true,
    'public' => true,
    'labels' => array(
      'name' => 'Programs',
      'add_new_item' => 'Add New Program',
      'edit_item' => 'Edit Program',
      'all_items' => 'All Programs',
      'singular_name' => 'Program'
    ),
    'menu_icon' => 'dashicons-awards'
  ));
}
add_action("init", "university_post_types");
?>

inside
\wp-content\mu-plugins\*.php
(mu stands for "must-use"), then we get the new "Programs" post type showing up in our dashboard. This is a kind of custom post type, therefore we will create a php file that is dedicated to customizing posts of type "program".
Wednesday, March 4, 2020
php installation
- Follow the link to install php
https://www.youtube.com/watch?v=4_-12QSaaFg - Following this to install xampp:
https://www.youtube.com/watch?v=TjFRTkw6GDQ - Visual Studio Code plug-ins that I have used:
1. phpfmt - PHP formatter
2. Format HTML in PHP
refer to my config file sent in gmail
Tuesday, March 3, 2020
Factory Design Pattern in Typescript
class Point {
  x: number;
  y: number;

  private constructor(x: number, y: number) {
    this.x = x;
    this.y = y;
  }

  static Factory = class {
    static pointXY(x: number, y: number) {
      return new Point(x, y);
    }
    static pointPolar(r: number, theta: number) {
      return new Point(
        r * Math.cos((theta * Math.PI) / 180),
        r * Math.sin((theta * Math.PI) / 180)
      );
    }
  };
}

now we can create our point by calling
const pt = Point.Factory.pointXY(10, 20);
Monday, February 17, 2020
Certbot
$ sudo apt update
$ clear
$ sudo apt install apache2
$ cd /etc/apache2/sites-available/
$ clear
$ ls
$ sudo vi ridiculous-inc.com.conf
$ cd /var/www
$ sudo git clone https://github.com/ridiculous-ijquery-todo.git ridic
$ sudo a2ensite ridiculous-inc.com.conf
$ sudo service apache2 restart
$ sudo apt-get update
$ clear
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ clear
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache
$ clear
$ sudo certbot --apache
$ history

and
<VirtualHost *:80>
    DocumentRoot /var/www/ridic
    ServerName ridiculous-inc.com

    <Directory "/var/www/ridic">
        allow from all
        AllowOverride All
        Order allow,deny
        Options +Indexes
    </Directory>
</VirtualHost>

To redirect requests on port 80 to another internal port we use the following VirtualHost setting:
<VirtualHost *:80>
    ServerName api.screencapdictionary.com

    <Location "/">
        ProxyPreserveHost On
        ProxyPass http://localhost:5000/
        ProxyPassReverse http://localhost:5000/
    </Location>
</VirtualHost>
Saturday, February 8, 2020
Git command that I should know
Configuration
git config --global --edit

This will open the .gitconfig file with our default editor. Make changes, save, and close the editor, and the changes will take effect. To view the changes:
git config --global --list
Branches
List out all branches:

git branch -a

Create a new branch:
git branch mynewbranch

Switch to the new branch:
git checkout mynewbranch

Rename a branch (-m for move, as in the mv bash command):
git branch -m mynewbranch newbranch

Delete a branch:
git branch -d newbranch
Specific branch
git clone a specific branch instead of cloning all branches and checking out a specific one. For example, in my private repo I want to clone the "forth-branch" only, then write:

git clone --single-branch --branch forth-branch https://github.com/machingclee/2020-English-Learning-Website.git

Without --single-branch the above will fetch all branches and check out the forth-branch.
P4Merge Configuration
git config --global merge.tool p4merge
git config --global mergetool.p4merge.path "C:/Program Files/Perforce/p4merge.exe"
git config --global mergetool.prompt false
git config --global diff.tool p4merge
git config --global difftool.p4merge.path "C:/Program Files/Perforce/p4merge.exe"
git config --global difftool.prompt false

and
git config --global --list
to double check the configuration:
core.editor="C:\Users\Ching-Cheong Lee\AppData\Local\Programs\Microsoft VS Code\Code.exe" --wait user.name=James Lee user.email=machingclee@gmail.com color.ui=true merge.tool=p4merge mergetool.p4merge.path=C:/Program Files/Perforce/p4merge.exe mergetool.prompt=false diff.tool=p4merge difftool.p4merge.path=C:/Program Files/Perforce/p4merge.exe difftool.prompt=false