\( \newcommand{\N}{\mathbb{N}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\P}{\mathcal P} \newcommand{\B}{\mathcal B} \newcommand{\F}{\mathbb{F}} \newcommand{\E}{\mathcal E} \newcommand{\brac}[1]{\left(#1\right)} \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\matrixx}[1]{\begin{bmatrix}#1\end {bmatrix}} \newcommand{\vmatrixx}[1]{\begin{vmatrix} #1\end{vmatrix}} \newcommand{\lims}{\mathop{\overline{\lim}}} \newcommand{\limi}{\mathop{\underline{\lim}}} \newcommand{\limn}{\lim_{n\to\infty}} \newcommand{\limsn}{\lims_{n\to\infty}} \newcommand{\limin}{\limi_{n\to\infty}} \newcommand{\nul}{\mathop{\mathrm{Nul}}} \newcommand{\col}{\mathop{\mathrm{Col}}} \newcommand{\rank}{\mathop{\mathrm{Rank}}} \newcommand{\dis}{\displaystyle} \newcommand{\spann}{\mathop{\mathrm{span}}} \newcommand{\range}{\mathop{\mathrm{range}}} \newcommand{\inner}[1]{\langle #1 \rangle} \newcommand{\innerr}[1]{\left\langle #1 \right \rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\toto}{\rightrightarrows} \newcommand{\upto}{\nearrow} \newcommand{\downto}{\searrow} \newcommand{\qed}{\quad \blacksquare} \newcommand{\tr}{\mathop{\mathrm{tr}}} \newcommand{\bm}{\boldsymbol} \newcommand{\cupp}{\bigcup} \newcommand{\capp}{\bigcap} \newcommand{\sqcupp}{\bigsqcup} \newcommand{\re}{\mathop{\mathrm{Re}}} \newcommand{\im}{\mathop{\mathrm{Im}}} \newcommand{\comma}{\text{,}} \newcommand{\foot}{\text{。}} \)

Wednesday, May 28, 2014

An Explicit Example of a Sequence Which Has a Convergent Subnet but No Convergent Subsequence

I was confused by my (false) intuition that every subnet of a sequence should also be a subsequence. This turns out to be wrong, so I tried to find an easy counterexample.

The following is the easiest one:

Example. Let $\mathcal I$ be the set of all strictly increasing sequences $(n_1,n_2,\dots)$ of natural numbers, i.e., $n_i\in \N$ for each $i$ and $n_1<n_2<\cdots$. The cardinality of $\mathcal I$ is easily seen to be $|\R|$. Our desired sequence will be a sequence of functions defined on $\mathcal I$ as follows:

Let $i=(n_1,n_2,\dots)\in \mathcal I$; we define \[
f_n(i)=\begin{cases}
(-1)^k,&\text{if }n=n_k,\\
0,&\text{otherwise}.
\end{cases}
\] It is easy to see that for each $i=(n_1,n_2,\dots)\in \mathcal I$, the sequence $\{f_{n_k}(i)\}_k=\{(-1)^k\}_k$ diverges.
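For a concrete check (the index below is chosen purely for illustration): take $i=(2,4,6,\dots)$, i.e. $n_k=2k$. Then \[
f_2(i)=-1,\quad f_4(i)=+1,\quad f_6(i)=-1,\ \dots,\qquad f_m(i)=0\ \text{ for every odd }m,
\] so already along this single coordinate the values $f_{n_k}(i)=(-1)^k$ oscillate between $-1$ and $1$.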

With a slight abuse of notation, we define $f_n = (f_n(i))_{i\in\mathcal I}$. Then $f_1,f_2,\dots \in [-1,1]^{\mathcal I}$. By Tychonoff's Theorem the product $[-1,1]^{\mathcal I}$ is compact w.r.t. the product topology, therefore $\{f_n\}$, being a net in a compact space, must have a convergent subnet by a standard exercise on nets (every net in a compact space has a convergent subnet).

We claim that $\{f_n\}$ has no convergent subsequence. Suppose it did; then there would be an increasing sequence $\{n_k\}$ such that $\{f_{n_k}\}$ converges w.r.t. the product topology. By definition, convergence in the product topology is coordinatewise convergence: for every $i\in \mathcal I$, $f_{n_k}(i)$ converges. This is a contradiction if we choose $i=(n_1,n_2,\dots)$.$\qed$

Friday, May 16, 2014

Polar Decomposition of Matrices with an Application in Deriving SVD (to be added to my linear algebra notes)

In this post I want to record a standard result in linear algebra that I hadn't paid attention to before---the polar decomposition. An immediate consequence that I came up with is the SVD theorem, which I prove as two consequences: one for square matrices and one for general matrices.

In what follows we say that a matrix $A$ is positive if $x\cdot Ax\ge 0$ for every $x$, and strictly positive if $(x,y)\mapsto x\cdot Ay$ forms an inner product.

Digression. When we say that an operator $T:H\to H$ on a complex Hilbert space is positive, i.e., $\inner{Tx,x}\ge 0$ for every $x$, it is necessarily (i.e., can be proved to be) self-adjoint! The key is that we can always recover the value $\inner{Tx,y}$ just from knowing $\inner{Tz,z}$ for every $z\in H$, while this may fail in real Hilbert spaces.
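To spell out the key step (a sketch, with the convention that the inner product is linear in its first slot): for any bounded operator $T$ and any $x,y\in H$, the polarization identity gives \[
\inner{Tx,y}=\frac{1}{4}\sum_{k=0}^{3} i^k\,\innerr{T(x+i^k y),\,x+i^k y}.
\] Since positivity forces $\inner{Tz,z}$ to be real, we have $\inner{Tz,z}=\ol{\inner{Tz,z}}=\inner{z,Tz}=\inner{T^*z,z}$ for every $z$; applying the identity to the two sesquilinear forms $(x,y)\mapsto\inner{Tx,y}$ and $(x,y)\mapsto\inner{T^*x,y}$, which agree on the diagonal, yields $\inner{Tx,y}=\inner{T^*x,y}$ for all $x,y$, i.e. $T=T^*$. Over $\R$ no such identity is available for non-symmetric forms: the rotation $A=\matrixx{0&1\\-1&0}$ satisfies $x\cdot Ax=0\ge 0$ for every $x\in\R^2$ yet is not symmetric.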

Theorem (Polar Decomposition). Let $A$ be an $n\times n$ matrix over $\F=\R$ or $\C$. Then there exist a unitary (orthogonal when $\F=\R$) matrix $U$ and a positive matrix $P$ such that \[
A=UP,
\] where $P=\sqrt{A^*A}$, the unique positive square root of the positive matrix $A^*A$.
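For the record, here is a quick sketch of how the square case of the SVD falls out (details to be spelled out in the notes; the unitary $V$ and the diagonal $\Sigma$ below are notation introduced just for this sketch): since $P=\sqrt{A^*A}$ is positive, the spectral theorem provides a unitary $V$ and a diagonal matrix $\Sigma$ with nonnegative entries such that $P=V\Sigma V^*$, and therefore \[
A=UP=(UV)\,\Sigma\,V^{*},
\] which is a singular value decomposition of $A$, since $UV$ and $V$ are unitary.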


Thursday, May 15, 2014

PhD Qualifying Exam for Real Analysis, Spring 2013-14 by Dr Li

I have just solved all of them except Problem 4. Since, as is the usual practice, answers to qualifying exams will not be unveiled (even to PG students in UST), let me record my solutions below for future use.

The following is the set of problems given a few days ago:

Problem 1. Let $F$ be a Lebesgue nonmeasurable subset of $[0,1]$. Prove that there is $c\in (0,1)$ such that whenever $E\subseteq[0,1]$ is Lebesgue measurable and $m(E)\ge c$, then $F\cap E$ is Lebesgue nonmeasurable.
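A quick sketch of one possible approach to Problem 1 (notation introduced here only for the sketch): if no such $c$ existed, then for each $n\ge 2$ we could choose a measurable $E_n\subseteq[0,1]$ with $m(E_n)\ge 1-\frac{1}{n}$ and $F\cap E_n$ measurable. Put $E=\cupp_n E_n$; then $m(E)=1$, the set $F\cap E=\cupp_n(F\cap E_n)$ is measurable, and $F\setminus E\subseteq [0,1]\setminus E$ is a null set, hence measurable by completeness of Lebesgue measure. Consequently \[
F=(F\cap E)\cup(F\setminus E)
\] would be measurable, a contradiction.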

Problem 2. Let $r_1,r_2,r_3,\dots$ be a sequence containing each rational number in $[0,1]$ exactly once. Let $f:[0,1]\setminus \{r_1,r_2,r_3,\dots\}\to \R$ be defined by $\dis f(x)=\sum_{k=1}^\infty \frac{1}{k^2|x-r_k|^{1/2}}$. Prove that $f(x)<\infty$ a.e. on $[0,1]\setminus \{r_1,r_2,r_3,\dots\}$.
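A quick sketch of one possible approach to Problem 2 (a standard Tonelli-type argument): since all terms are nonnegative and measurable, \[
\int_{[0,1]} f\,dm=\sum_{k=1}^\infty\frac{1}{k^2}\int_0^1\frac{dx}{|x-r_k|^{1/2}}
=\sum_{k=1}^\infty\frac{2\brac{\sqrt{r_k}+\sqrt{1-r_k}}}{k^2}
\le\sum_{k=1}^\infty\frac{2\sqrt{2}}{k^2}<\infty,
\] so $f$ is integrable and in particular $f(x)<\infty$ a.e. on $[0,1]\setminus\{r_1,r_2,r_3,\dots\}$.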

Problem 3. Let $S$ be a vector subspace of $L^2[0,1]$ with $\|f\|=(\int_{[0,1]}|f|^2\,dm )^{1/2}$ and suppose that there is a constant $K$ such that for every $f\in S$ and every $x\in [0,1]$, $|f(x)|\leq K\|f\|$. Show that the dimension of $S$ is finite.

My Remark. The converse is also true, and this can be generalized to every $L^p(K)$, where $p\in (1,\infty)$ and $K$ is compact.
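Back to Problem 3 itself, a quick sketch of one possible argument (notation introduced here only for the sketch): let $f_1,\dots,f_n$ be orthonormal in $S$. Fix $x\in[0,1]$ and set $g=\sum_{j=1}^n \ol{f_j(x)}\,f_j\in S$. Then $g(x)=\sum_{j=1}^n|f_j(x)|^2$ and $\|g\|=\brac{\sum_{j=1}^n|f_j(x)|^2}^{1/2}$, so the hypothesis $|g(x)|\le K\|g\|$ forces \[
\sum_{j=1}^n|f_j(x)|^2\le K^2\qquad\text{for every }x\in[0,1].
\] Integrating over $[0,1]$ gives $n=\sum_{j=1}^n\|f_j\|^2\le K^2$, so every orthonormal subset of $S$ has at most $K^2$ elements and hence $\dim S\le K^2$.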

Saturday, May 3, 2014

Someone Is Really Damn Filthy

So there is this weirdo in our office who has been doing his PhD for at least 7 years (people call him 包哥, "Brother Bao"). He is notorious for being filthy and unkempt and hardly ever showers (apparently someone saw him shower recently..., but what good is that, he surely doesn't wash his clothes regularly).

Once, before lunch, a few of us went together to the toilet nearest the PG office to take a leak, and there we witnessed something miraculous: after Brother Bao finished peeing, he actually washed his hands very diligently!

Chatting about it afterwards, it turned out we had all, independently, been thinking that we had wronged him all along.

But since I was the last one to finish, what I saw was a different story!

After washing his hands, Brother Bao actually went back for another round. Since the urinals we used were close together, for some reason I channelled Light Yagami, holding back my laughter for a dozen seconds while peeing. In the end, the moment he finished he strode straight off. __! So he really is just one of those non-hand-washing __ after all.