Probability with Martingales by David Williams

\[ \newcommand{\Q}{\mathbb Q} \newcommand{\R}{\mathbb R} \newcommand{\C}{\mathbb C} \newcommand{\Z}{\mathbb Z} \newcommand{\N}{\mathbb N} \newcommand{\abs}[1]{\lvert #1 \rvert} \newcommand{\norm}[1]{\lVert #1 \rVert} \newcommand{\Norm}[1]{\left \lVert #1 \right \rVert} \newcommand{\Abs}[1]{\left \lvert #1 \right \rvert} \newcommand{\pind}{\bot \!\!\! \bot} \newcommand{\probto}{\buildrel P\over \to} \newcommand{\vect}[1]{\boldsymbol #1} \DeclareMathOperator{\EE}{\mathbb E} \DeclareMathOperator{\PP}{\mathbb P} \DeclareMathOperator{\E}{E} \DeclareMathOperator{\dnorm}{\mathcal N} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Leb}{Leb} \DeclareMathOperator{\Bin}{Bin} \newcommand{\wto}{\buildrel w\over \to} \]

Characteristic Functions
Problem 16.1

Prove that
\[ \begin{equation} \lim_{T\uparrow \infty} \int_0^T x^{-1} \sin x \, dx = \frac \pi 2 \end{equation} \]

Define
\[ \begin{equation} f(\lambda) = L[x^{-1} \sin x] = \int_0^\infty \frac{e^{-\lambda x} \sin x} x \, dx \end{equation} \]
Using the previously derived identity for Laplace transforms \(\frac d {d\lambda} L[f] = -L[xf]\), we have \(f'(\lambda) = -L[\sin x]\). We can compute this handily by integrating by parts twice:
\[ \begin{align} f'(\lambda ) &= -\int_0^\infty e^{-\lambda x} \sin x\, dx \\ &= \left. \cos x \, e^{-\lambda x} \right|_0^\infty + \lambda \int_0^\infty e^{-\lambda x}\cos x \, dx\\ &= -1 + \lambda \left( \left. \sin x \, e^{-\lambda x} \right|_0^\infty + \lambda \int_0^\infty e^{-\lambda x} \sin x \, dx\right)\\ &= -1 - \lambda^2 f'(\lambda) \end{align} \]
so that \(f'(\lambda) = -\frac 1 {1+\lambda^2}\). By the dominated convergence theorem, \(f(\lambda) \to 0\) as \(\lambda\to\infty\). Thus, integrating \(f'\) from \(\infty\) down to \(\lambda\),
\[ \begin{equation} f(\lambda) = -\int_\lambda^\infty \frac{dt}{-(1+t^2)} = \int_\lambda^\infty \frac{dt}{1+t^2} = \frac \pi 2 - \arctan(\lambda) \end{equation} \]
Set \(\lambda=0\) to find \(\int_0^\infty \frac{\sin x}{x}\, dx = \frac \pi 2\). (To set \(\lambda = 0\) we also need continuity of \(f\) at \(0\); this holds because the integral defining \(f\) converges uniformly in \(\lambda \geq 0\), e.g. by Dirichlet's test, since \(e^{-\lambda x}/x\) decreases to \(0\) and \(\int_A^B \sin x\, dx\) is bounded.)

A second solution uses complex analysis. Consider the contour \(\gamma\) in the complex plane which runs along the real line from \(-T\) to \(-\epsilon\), then along a semicircle of radius \(\epsilon\) centered at \(0\) in the upper half plane from \(-\epsilon\) to \(\epsilon\) (passing above the origin), then along the real line from \(\epsilon\) to \(T\), and finally along a semicircle of radius \(T\) centered at \(0\) in the upper half plane from \(T\) back to \(-T\). The integrand \(e^{iz}/z\) is holomorphic inside the region bounded by \(\gamma\) (the pole at the origin is excluded by the small detour), so Cauchy's theorem gives
\[ \begin{equation} \int_\gamma \frac{e^{iz}} z \, dz = 0 \end{equation} \]
The integrand has a simple pole at \(0\) with residue \(1\), and the small arc spans \(\pi\) radians and is traversed clockwise, so its contribution tends to \(-i\pi\) as \(\epsilon \to 0\) (write \(e^{iz}/z = 1/z + h(z)\) with \(h\) holomorphic near \(0\); the \(1/z\) term integrates to \(-i\pi\) over the clockwise semicircle, and the \(h\) term vanishes with \(\epsilon\)). On the semicircle of radius \(T\), writing \(z=Te^{i\theta}\) we have \(dz = iTe^{i\theta}\, d\theta\), so
\[ \begin{equation} \Abs{\int_0^\pi \frac {\exp( i T e^{i\theta} )}{Te^{i\theta}} \, i Te^{i\theta}\,d\theta } \leq \int_0^{\pi} \Abs{ e^{ iT(\cos \theta + i\sin \theta)} }\, d\theta = \int_0^\pi e^{-T\sin\theta} \, d\theta \end{equation} \]
The integrand is dominated by \(1\) and tends to \(0\) for every \(\theta\in(0,\pi)\), so by the dominated convergence theorem the integral converges to \(0\) as \(T\to \infty\). Therefore, letting \(T\to \infty\) and \(\epsilon\to 0\) in \(\int_\gamma \frac{e^{iz}}z\,dz = 0\), the two straight segments give the principal value integral and we conclude
\[ \begin{equation} \int_{-\infty}^\infty \frac {e^{ix}} x \, dx = i\pi \end{equation} \]
(interpreted as a principal value). Making the substitution \(x\to -x\) we conclude
\[ \begin{equation} \int_{-\infty}^\infty \frac {e^{-ix}} x \, dx = -i \pi \end{equation} \]
Using the fact \(\sin x = \frac{ e^{ix}-e^{-ix}} {2i}\), this implies
\[ \begin{equation} \int_{-\infty}^\infty \frac{\sin x} x \, dx = \pi \qquad \Rightarrow \qquad \int_0^\infty \frac{\sin x} x \, dx = \frac \pi 2 \end{equation} \]
since the integrand on the left is even.
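As a quick sanity check on the value \(\pi/2\) (an illustration, not part of the proof), here is a short numerical sketch. It assumes NumPy and SciPy are available and uses `scipy.special.sici`, which returns the sine integral \(\mathrm{Si}(T)=\int_0^T x^{-1}\sin x\, dx\).

```python
# Numerical sanity check (illustration only): Si(T) -> pi/2 as T -> infinity.
# Assumes NumPy and SciPy are installed.
import numpy as np
from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

for T in (10, 100, 10_000):
    Si_T, _ = sici(T)
    print(f"Si({T}) = {Si_T:.6f}   (pi/2 = {np.pi / 2:.6f})")
```

The convergence is slow, of order \(1/T\), consistent with the oscillatory tail of the integrand.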
Problem 16.2

Prove that if \(Z\) has the \(U[-1,1]\) distribution, then
\[ \begin{equation} \phi_Z(\theta) = \sin \theta / \theta \end{equation} \]
Show there do not exist i.i.d. RVs \(X\) and \(Y\) such that
\[ \begin{equation} X-Y \sim U[-1,1] \end{equation} \]

Computing,
\[ \begin{equation} \phi_Z(\theta) = \E e^{i\theta Z} = \int_{-1}^1 e^{i\theta x} \frac{1}{2}\, dx = \left. \frac{e^{i\theta x}}{2i\theta}\right|_{-1}^{1} = \frac {\sin \theta} \theta \end{equation} \]
For \(X\) and \(Y\) i.i.d. and \(Z=X-Y\),
\[ \begin{equation} \phi_Z(\theta) = \E e^{i\theta (X-Y)} = \E e^{i\theta X}e^{-i\theta Y} = \E e^{i\theta X}\,\E e^{-i\theta Y} = \phi_X(\theta) \phi_X(-\theta) \end{equation} \]
where we used the property that \(\E f(X)g(Y)=\E f(X)\E g(Y)\) for independent \(X\) and \(Y\). Since \(\phi_X(-\theta) = \overline{\phi_X(\theta)}\), in order for \(Z=X-Y\) to satisfy \(Z\sim U[-1,1]\) it must be that
\[ \begin{equation} \abs{\phi_X(\theta)}^2 = \frac{\sin \theta} \theta \end{equation} \]
But this is impossible: the left hand side is non-negative for every \(\theta\), whereas the right hand side is negative for some \(\theta\) (for example \(\theta = 3\pi/2\)).
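For illustration, a small Monte Carlo sketch (assuming NumPy is available) comparing the empirical characteristic function of a \(U[-1,1]\) sample with \(\sin\theta/\theta\), and exhibiting a \(\theta\) where \(\sin\theta/\theta < 0\):

```python
# Monte Carlo illustration (not a proof), assuming NumPy:
# E e^{i theta Z} for Z ~ U[-1,1] matches sin(theta)/theta, which is negative
# for some theta, so it cannot equal |phi_X(theta)|^2 for any random variable X.
import numpy as np

rng = np.random.default_rng(0)
Z = rng.uniform(-1.0, 1.0, size=1_000_000)

for theta in (0.5, 2.0, 3 * np.pi / 2):
    mc = np.mean(np.exp(1j * theta * Z)).real      # empirical characteristic function
    exact = np.sin(theta) / theta                  # negative at theta = 3*pi/2
    print(f"theta = {theta:6.3f}:  MC = {mc:+.4f}   sin(theta)/theta = {exact:+.4f}")
```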
Problem 16.3

Calculate \(\phi_X(\theta)\) where \(X\) has the standard Cauchy distribution, which has pdf \(\frac 1 {\pi(1+x^2)}\). Show that the Cauchy distribution is stable, i.e., that \((X_1+X_2+\dots+X_n)/n \sim X\) for i.i.d. Cauchy \(X_i\).

First assume \(\theta>0\). Consider the contour \(\gamma\) which goes from \(-R\) to \(R\) along the real line, and then along a semicircle in the upper half plane centered at \(0\) from \(R\) to \(-R\). By the residue theorem
\[ \begin{equation} \int_\gamma\frac{ e^{i\theta z}}{1+z^2}\, dz = 2\pi i \,\frac{e^{-\theta }}{2i} = \pi e^{-\theta} \end{equation} \]
since the integrand has exactly one pole inside \(\gamma\), at \(z=i\) (the denominator factors as \((z-i)(z+i)\)), and the residue there is \(e^{-\theta}/(2i)\). Along the semicircle we can use the elementary bound
\[ \begin{equation} \Abs{\int_{\gamma'} \frac {e^{i\theta z}}{1+z^2 }\, dz} \leq \frac{ \pi R}{R^2-1} \to 0 \end{equation} \]
since for \(z=Re^{i\psi}\) the magnitude \(\Abs{ \exp( i \theta Re^{i\psi})} = e^{-R \theta \sin\psi} \leq 1\) (this is where we use the hypothesis \(\theta>0\) and that the path is a semicircle in the upper half plane), and \(\abs{1+z^2}\geq \abs{z}^2 -1 = R^2-1\). Thus the integrand is bounded by \(\frac 1 {R^2-1}\) and the length of the path is \(\pi R\). Hence in the limit only the segment along the real line contributes to the contour integral, and we conclude (the case \(\theta = 0\) being trivial, since \(\phi_X(0)=1\))
\[ \begin{equation} \phi_X(\theta) = \frac 1 \pi \int_{-\infty}^\infty \frac {e^{i\theta x}}{1+x^2} \, dx = e^{-\theta} \qquad \text{ if }\theta \geq 0 \end{equation} \]
For \(\theta < 0\) we can modify the above argument, using a semicircle in the lower half plane from \(R\) to \(-R\) instead. Then in the residue theorem we pick up the residue at \(-i\) (instead of \(i\)), traversing a clockwise path (instead of counterclockwise). Thus the integral equals \(\pi e^\theta\). The same bounds show that the contribution of the semicircular arc tends to \(0\), so
\[ \begin{equation} \phi_X(\theta) = \frac 1 \pi \int_{-\infty}^\infty \frac {e^{i\theta x}}{1+x^2} \, dx = e^{\theta} \qquad \text{ if } \theta \leq 0 \end{equation} \]
Both cases may be summarized by the formula \(\phi_X(\theta) = e^{-\abs{\theta}}\). From this it immediately follows that \(X\) is stable, since with \(S_n = X_1 + \dots + X_n\)
\[ \begin{equation} \phi_{S_n/n}(\theta) = \phi_X(\theta/n)^n = e^{-n\abs{\theta/n}} = e^{-\abs{\theta}} \end{equation} \]
By the uniqueness of characteristic functions, \(S_n/n\) is distributed like a standard Cauchy random variable. All that remains is to justify \(\phi_{S_n/n}(\theta) = \phi_X(\theta/n)^n\). This follows from the following calculation for i.i.d. \(X_k\):
\[ \begin{equation} \phi_{S_n/n}(\theta) = \E e^{i\theta(X_1+\dots+X_n)/n} = (\E e^{i\theta X_1/n})(\E e^{i\theta X_2/n})\cdots (\E e^{i\theta X_n/n} )= \phi_X(\theta/n)^n \end{equation} \]
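Here is a hedged Monte Carlo sketch of the stability claim (assuming NumPy is available): the empirical characteristic function of \(S_n/n\) for i.i.d. Cauchy samples should track \(e^{-\abs\theta}\). The choices of \(n\) and the number of replications are arbitrary.

```python
# Monte Carlo illustration of stability (assuming NumPy): the empirical
# characteristic function of S_n/n for i.i.d. Cauchy samples should track exp(-|theta|).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 200_000
means = rng.standard_cauchy(size=(reps, n)).mean(axis=1)   # replications of S_n / n

for theta in (0.5, 1.0, 2.0):
    ecf = np.mean(np.exp(1j * theta * means)).real         # imaginary part ~ 0 by symmetry
    print(f"theta = {theta}:  empirical CF = {ecf:.4f}   exp(-|theta|) = {np.exp(-abs(theta)):.4f}")
```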
Problem 16.4

Suppose that \(X\sim \dnorm(0,1)\). Show that \(\phi_X(\theta) = \exp(-\frac 1 2 \theta^2)\).

Note that
\[ \begin{equation} \phi_X(\theta) = \E e^{i\theta X} = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^\infty e^{i\theta x} e^{-\frac 1 2 x^2}\, dx = \frac 1 {\sqrt{2\pi}}\int_{-\infty}^\infty e^{-\frac 1 2 (x- i\theta)^2 - \frac 1 2 \theta^2}\, dx = e^{-\frac 1 2 \theta^2} I(i\theta) \end{equation} \]
where \(I(\alpha) = \frac 1 {\sqrt{2\pi}} \int_{-\infty}^\infty e^{-\frac 1 2 (x-\alpha)^2}\, dx\). For a real parameter \(\alpha\), \(I(\alpha) = I(0)= 1\), since we can just perform the change of variables \(u=x-\alpha\), under which the range of integration is unchanged. In the complex case (assume \(\theta > 0\); the case \(\theta<0\) is handled symmetrically with the rectangle reflected across the real axis, and \(\theta=0\) is trivial), consider the integral in \(\C\) of the function \(f(z) = \frac 1 {\sqrt{2 \pi}} e^{-\frac 1 2 z^2}\) along the boundary \(\gamma\) of the rectangle with corners at \(\pm R\) and \(\pm R - i\theta\). Since \(f(z)\) is entire, this contour integral is \(0\). As \(R\to \infty\), the integral along the bottom side from \(-R-i\theta\) to \(R-i\theta\) tends to \(I(i\theta)\). The integral along the top side from \(R\) to \(-R\) tends to \(-I(0)\) (since the limits of the integral are reversed). Along the vertical sides of the rectangle, \(z=\pm R - ix\) for \(x\in [0, \theta]\), so the magnitude of the integrand satisfies
\[ \begin{equation} \Abs{\frac 1 {\sqrt{2\pi}} e^{-\frac 1 2 (\pm R-ix)^2} }= \frac 1 {\sqrt{2\pi}} \abs{e^{-\frac 1 2 (R^2 - x^2 \mp 2 Rx i)}} \leq \frac 1 {\sqrt{2\pi}} e^{-\frac 1 2 (R^2 -\theta^2)} \to 0 \end{equation} \]
Each vertical side has constant length \(\theta\) as \(R\to \infty\), so its contribution to the contour integral is negligible, by the simple bound of the path length times the maximum magnitude of the integrand. We conclude
\[ \begin{equation} 0 = \int_\gamma f(z)\, dz \to I(i\theta) - I(0) \end{equation} \]
Thus \(I(i\theta) = I(0) = 1\) and \(\phi_X(\theta) = e^{-\frac 1 2 \theta^2}\).
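A brief numerical check (assuming NumPy and SciPy are available): integrating \(\cos(\theta x)\) against the standard normal density (the imaginary part vanishes by symmetry) should reproduce \(e^{-\theta^2/2}\).

```python
# Numerical check (assuming NumPy and SciPy): the real part of E e^{i theta X}
# for X ~ N(0,1), computed by quadrature, should equal exp(-theta^2/2).
import numpy as np
from scipy.integrate import quad

def cf_normal(theta):
    integrand = lambda x: np.cos(theta * x) * np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

for theta in (0.0, 1.0, 2.5):
    print(f"theta = {theta}:  quadrature = {cf_normal(theta):.6f}"
          f"   exp(-theta^2/2) = {np.exp(-0.5 * theta**2):.6f}")
```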
Problem 16.5

Prove that if \(\phi\) is the characteristic function of a RV \(X\), then \(\phi\) is non-negative definite, in that for \(c_1,c_2,\dots, c_n\in \C\) and \(\theta_1,\theta_2,\dots,\theta_n \in \R\)
\[ \begin{equation} \sum_{j,k} c_j \overline{c}_k \phi(\theta_j-\theta_k) \geq 0 \label{eq:nonnegative definite} \end{equation} \]

Consider the RV \(Z = \sum_k c_k e^{i\theta_k X}\). By the positivity of expectations,
\[ \begin{equation} \E \abs{Z}^2 \geq 0 \end{equation} \]
Moreover,
\[ \begin{equation} \abs{Z}^2 = Z \overline Z = \left( \sum_j c_j e^{i \theta_j X}\right) \left( \sum_k \overline{c}_k e^{-i\theta_k X} \right) = \sum_{j,k} c_j \overline{c}_k e^{i(\theta_j-\theta_k)X} \end{equation} \]
Taking expectations of both sides and using linearity yields \(\eqref{eq:nonnegative definite}\).
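As a concrete illustration (assuming NumPy is available), one can build the matrix \(M_{jk} = \phi(\theta_j-\theta_k)\) for a known characteristic function, say \(\phi(\theta)=e^{-\theta^2/2}\) from the previous problem, and check numerically that the quadratic form is non-negative and that \(M\) has no negative eigenvalues (up to floating-point error). The sizes and random inputs here are arbitrary.

```python
# Numerical illustration (assuming NumPy): for phi(theta) = exp(-theta^2/2),
# the matrix M[j,k] = phi(theta_j - theta_k) is positive semi-definite, and the
# quadratic form sum_{j,k} c_j conj(c_k) phi(theta_j - theta_k) is non-negative.
import numpy as np

rng = np.random.default_rng(0)
phi = lambda t: np.exp(-0.5 * t**2)                 # characteristic function of N(0,1)

thetas = rng.normal(size=6)
M = phi(thetas[:, None] - thetas[None, :])          # M[j, k] = phi(theta_j - theta_k)

c = rng.normal(size=6) + 1j * rng.normal(size=6)
quad_form = (c @ M @ c.conj()).real                 # sum_{j,k} c_j conj(c_k) M[j,k]
print("quadratic form:", quad_form)
print("smallest eigenvalue of M:", np.linalg.eigvalsh(M).min())
```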
Problem 16.6

(a) Let \(R_n\) denote the Rademacher functions on \(([0,1], \mathcal B[0,1], \Leb)\), and define
\[ \begin{equation} U(\omega) = \sum_{\text{odd }n} 2^{-n} Q_n(\omega) \qquad \text{where}\qquad Q_n(\omega) = 2R_n(\omega)-1 \end{equation} \]
Find a random variable \(V\) independent of \(U\) such that \(U\) and \(V\) are identically distributed and \(U+\frac 1 2 V\) is uniformly distributed on \([-1,1]\).
(b) Now suppose \(X\) and \(Y\) are i.i.d. random variables such that
\[ \begin{equation} X+\frac 1 2 Y \qquad \text{is uniformly distributed on } [-1,1] \end{equation} \]
Let \(\phi\) be the characteristic function of \(X\). Calculate \(\phi(\theta)/\phi(\frac 1 4 \theta)\). Show that the distribution of \(X\) must be the same as that of \(U\) in part (a), and deduce that there exists a set \(F\in \mathcal B[-1,1]\) such that \(\Leb(F)=0\) and \(\Pr(X\in F)=1\).
For part (a), first note that \(Z(\omega) := 2\omega - 1\) is uniformly distributed on \([-1,1]\): for \(z\in[-1,1]\),
\[ \begin{equation} \Pr( Z< z) =\Pr\left(\omega < \frac 1 2 (z+1)\right) = \Leb\left(0,\frac 1 2 (z+1)\right) = \frac 1 2 (z- (-1)) \end{equation} \]
which is exactly the distribution function of \(U[-1,1]\). Let \(V\) be given by an expression similar to \(U\):
\[ \begin{equation} V(\omega)= \sum_{\text{odd } n} 2^{-n}(2R_{n+1}(\omega)-1) = \sum_{\text{even } n} 2^{-n+1}(2R_n(\omega)-1) \end{equation} \]
the difference being that the \(n\)th term uses the \((n+1)\)st Rademacher function \(R_{n+1}\) rather than \(R_n\). It's a standard result that the Rademacher functions \(R_n(\omega)\) are i.i.d. The variables \(U\) and \(V\) are functions of disjoint collections of the \(R_n\), so they are independent. Because their expressions as functions of identically distributed Rademacher functions are the same, the two variables have the same distribution. It's also clear that
\[ \begin{equation} \begin{split} U + \frac 1 2 V &= \sum_{\text{odd } n} 2^{-n} (2R_n(\omega)-1) + \sum_{\text{even } n} 2^{-n} (2R_n(\omega)-1) \\ &= 2 \left(\sum_{n\in \N} 2^{-n} R_n(\omega) \right)-1= 2\omega-1 \end{split} \end{equation} \]
Thus \(U+\frac 1 2 V \sim U[-1,1]\).
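A simulation sketch of part (a), assuming NumPy and SciPy are available: draw i.i.d. fair bits in place of the Rademacher functions, form \(U\) and \(V\) as above (truncating the series at a finite depth), and run a Kolmogorov–Smirnov test of \(U+\frac 1 2 V\) against the uniform distribution. The truncation depth and sample size are arbitrary choices.

```python
# Simulation sketch for part (a), assuming NumPy and SciPy. The Rademacher
# functions are simulated by i.i.d. fair bits, the series for U and V are
# truncated at a finite depth, and U + V/2 is tested against U[-1,1].
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
reps, depth = 200_000, 40                       # depth = number of binary digits kept (arbitrary)
bits = rng.integers(0, 2, size=(reps, depth))   # columns are R_1, ..., R_depth
Q = 2 * bits - 1                                # Q_n = 2 R_n - 1, valued in {-1, +1}

odd_n = np.arange(1, depth, 2)                  # n = 1, 3, 5, ..., depth - 1
weights = 2.0 ** (-odd_n)
U = (Q[:, odd_n - 1] * weights).sum(axis=1)     # sum over odd n of 2^{-n} Q_n
V = (Q[:, odd_n] * weights).sum(axis=1)         # same weights, but using R_{n+1}

W = U + 0.5 * V                                 # should be (approximately) uniform on [-1, 1]
print(kstest((W + 1) / 2, "uniform"))           # compare (W + 1)/2 with U[0, 1]
print("corr(U, V) =", np.corrcoef(U, V)[0, 1])  # should be near 0
```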
For part (b), since \(X\) and \(Y\) are i.i.d. and \(X + \frac 1 2 Y \sim U[-1,1]\),
\[ \begin{equation} \phi_X(\theta)\,\phi_X(\tfrac 1 2 \theta) = \phi_{X+\frac 1 2 Y}(\theta) = \int_{-1}^1 e^{i\theta u} \cdot \frac {du} 2 = \frac{\sin\theta} \theta \end{equation} \]
Replacing \(\theta\) by \(\frac 1 2 \theta\) gives \(\phi_X(\frac 1 2 \theta)\, \phi_X(\frac 1 4 \theta) = \frac{\sin(\frac 1 2 \theta) }{\frac 1 2 \theta}\) and hence
\[ \begin{equation} \frac{\phi_X(\theta) }{\phi_X(\frac 1 4 \theta)} = \frac{ \phi_X(\theta) \,\phi_X(\frac 1 2 \theta)}{\phi_X(\frac 1 2 \theta)\,\phi_X(\frac 1 4\theta)} = \frac{\sin \theta} {2\sin(\frac 1 2 \theta)} = \cos(\tfrac 1 2 \theta) \end{equation} \]
(valid for \(\theta\) such that \(\sin(\frac 1 2 \theta)\neq 0\); the final formula below extends to all \(\theta\) by continuity). Hence, by induction,
\[ \begin{equation} \frac{\phi_X(\theta) }{\phi_X(4^{-k}\theta )} = \frac{ \phi_X(\theta)}{\phi_X(\frac 1 4 \theta)}\, \frac{ \phi_X(\frac 1 4 \theta)}{\phi_X(\frac 1 {16} \theta)} \cdots \frac{ \phi_X(4^{-k+1} \theta)}{\phi_X(4^{-k} \theta)} = \cos( \tfrac 1 2 \theta) \cos( \tfrac 1 8 \theta) \cdots \cos( 2^{-2k+1} \theta) \end{equation} \]
Since \(\phi_X\) is continuous and \(\phi_X(0)=1\), we have \(\phi_X(4^{-k}\theta)\to 1\), so taking the limit \(k\to \infty\) gives
\[ \begin{equation} \phi_X(\theta) = \prod_{\text{odd } k}\cos(2^{-k}\theta) \end{equation} \]
On the other hand,
\[ \begin{equation} \phi_{Q_n}(\theta) = \E e^{i \theta Q_n} = \frac 1 2 e^{i\theta} + \frac 1 2 e^{-i\theta} = \cos(\theta) \end{equation} \]
Therefore, since \(U = \sum_{\text{odd }k} 2^{-k} Q_k\) with the \(Q_k\) independent, we have
\[ \begin{equation} \phi_U(\theta) = \prod_{\text{odd } k} \cos( 2^{-k} \theta) = \phi_X(\theta) \end{equation} \]
So by the uniqueness of characteristic functions, \(U\) and \(X\) have the same distribution. TODO: prove the measure-zero claim. It's not hard to show that the values of \(U\) (and hence, almost surely, of \(X\)) lie in a Cantor-like set of Lebesgue measure zero, obtained by removing the middle half of each interval at every stage. However this approach doesn't use the characteristic function at all… is there some clever argument with Fourier analysis?
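Finally, a quick numerical check (assuming NumPy is available) of the product formula: the partial products of \(\prod_{\text{odd }k}\cos(2^{-k}\theta)\) evaluated at \(\theta\) and at \(\theta/2\) should multiply to \(\sin\theta/\theta\), consistent with \(\phi_U(\theta)\,\phi_U(\tfrac 1 2 \theta) = \phi_{U+\frac 1 2 V}(\theta)\).

```python
# Numerical check (assuming NumPy) of the infinite-product identity used above:
# phi_U(theta) * phi_U(theta/2) should equal sin(theta)/theta.
import numpy as np

def phi_U(theta, terms=30):
    k = np.arange(1, 2 * terms, 2)              # odd k = 1, 3, ..., 2*terms - 1
    return np.prod(np.cos(theta * 2.0 ** (-k)))

for theta in (0.7, 2.0, 5.0):
    product = phi_U(theta) * phi_U(theta / 2)
    print(f"theta = {theta}:  product = {product:.6f}"
          f"   sin(theta)/theta = {np.sin(theta) / theta:.6f}")
```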