Markov chains and reversibility

  1. Show that a discrete-time Markov chain run backwards in time (from some time \(n\) and state \(i\)) is again a Markov chain (until time \(n\)).
  2. Suppose that \(p_{x,y}\) are the transition probabilities of a discrete state-space Markov chain. Show that if a system of probabilities \(\pi_x\) satisfies the detailed balance equations \(\pi_x p_{x,y} = \pi_y p_{y,x}\), then it must also satisfy the equilibrium equations \(\pi_y = \sum_x \pi_x p_{x,y}\) (a one-line sketch follows this list).
  3. Show that unconstrained simple symmetric random walk has period \(2\). Show that simple symmetric random walk subject to “prohibition” boundary conditions must be aperiodic.
  4. Solve the equilibrium equations \(\pi P = \pi\) for simple symmetric random walk on \(\{0, 1, \ldots, k\}\) subject to “prohibition” boundary conditions (a numerical check is sketched after this list).
  5. Suppose that \(X_0, X_1, \ldots,\) is a simple symmetric random walk with “prohibition” boundary conditions as above, running in equilibrium (started from the distribution found in exercise 4).
    • Use the definition of conditional probability to compute \[\overline{p}_{y,x}=\frac{\operatorname{\mathbb{P}}\left(X_{n-1}=x\,,\,X_n=y\right)}{\operatorname{\mathbb{P}}\left(X_n=y\right)}\,,\]
    • then show that \[\frac{\operatorname{\mathbb{P}}\left(X_{n-1}=x\,,\,X_n=y\right)}{\operatorname{\mathbb{P}}\left(X_n=y\right)}=\frac{\operatorname{\mathbb{P}}\left(X_{n-1}=x\right)p_{x,y}}{\operatorname{\mathbb{P}}\left(X_n=y\right)}\,,\]
    • now substitute, using the equilibrium distribution \(\operatorname{\mathbb{P}}\left(X_{n-1}=i\right)=\operatorname{\mathbb{P}}\left(X_n=i\right)=\tfrac{1}{k+1}\) for all \(i\), so \(\overline{p}_{y,x}=p_{x,y}\).
    • Use the symmetry of the kernel (\(p_{x,y}=p_{y,x}\)) to conclude that the backwards kernel is the same as the forwards kernel: \(\overline{p}_{y,x}=p_{y,x}\). (The sketch for exercise 4 after this list also checks this numerically.)
  6. Show that if \(X_0, X_1, \ldots,\) is a simple asymmetric random walk with “prohibition” boundary conditions, running in equilibrium, then it also has the same statistical behaviour as its reversed chain (i.e. solve the detailed balance equations! A sketch follows this list).
  7. Show that detailed balance fails for the \(3\)-state chain with transition probabilities \(\tfrac{1}{3}\) for \(0\to1\), \(1\to2\), \(2\to0\) and \(\tfrac{2}{3}\) for \(2\to1\), \(1\to0\), \(0\to2\) (a short check is sketched after this list).
  8. Work through the Random Chess example to compute the mean return time to a corner of the chessboard (a numerical sketch follows this list).
  9. Verify for the Ising model that \[\operatorname{\mathbb{P}}\left(\mathbf{S} = \mathbf{s}^{(i)}\Big|\mathbf{S} \in \{\mathbf{s},\mathbf{s}^{(i)}\}\right) = \frac{\exp\left(-J\sum_{j:j\sim i}s_i s_j\right)} {\exp\left(J\sum_{j:j\sim i}s_i s_j\right)+\exp\left(-J\sum_{j:j\sim i}s_i s_j\right)}\,.\] Determine how this changes in the presence of an external field. Confirm that detailed balance holds for the heat-bath Markov chain (a code sketch of the heat-bath sampler follows this list).
  10. Write down the transition probabilities for the Metropolis-Hastings sampler. Verify that it has the desired probability distribution as an equilibrium distribution (a code sketch follows this list).
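
Sketches and numerical checks

Exercise 2: summing the detailed balance equations over \(x\), and using the fact that each row of the transition kernel sums to one, gives
\[\sum_x \pi_x p_{x,y}\;=\;\sum_x \pi_y p_{y,x}\;=\;\pi_y\sum_x p_{y,x}\;=\;\pi_y\,,\]
which is exactly the equilibrium equation for state \(y\).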
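Exercises 4 and 5: the following minimal NumPy sketch checks numerically that the uniform distribution solves \(\pi P = \pi\) and that the backwards kernel coincides with the forwards kernel. It assumes that “prohibition” boundary conditions mean a step which would leave \(\{0,\ldots,k\}\) is suppressed and the walk holds its position instead; if the notes define them differently, only the matrix construction needs adjusting.

```python
import numpy as np

# Assumed reading of "prohibition" boundary conditions: a step that would leave
# {0, ..., k} is suppressed and the walk stays where it is.
k = 5
P = np.zeros((k + 1, k + 1))
for x in range(k + 1):
    for step in (-1, 1):
        y = x + step
        if 0 <= y <= k:
            P[x, y] += 0.5      # allowed move
        else:
            P[x, x] += 0.5      # prohibited move: hold at x

pi = np.full(k + 1, 1.0 / (k + 1))      # candidate equilibrium: uniform
print(np.allclose(pi @ P, pi))          # True: uniform solves pi P = pi

# Backwards kernel from exercise 5: P_back[y, x] = pi[x] * P[x, y] / pi[y]
P_back = (pi[:, None] * P).T / pi[:, None]
print(np.allclose(P_back, P))           # True: the reversed chain matches
```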
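Exercise 6: assuming the walk steps up with probability \(p\) and down with probability \(q = 1 - p\), with prohibited steps at the boundary replaced by holding, the detailed balance equations reduce to \(\pi_x\,p = \pi_{x+1}\,q\) for \(0 \le x < k\), so
\[\pi_x \;=\; \pi_0\left(\frac{p}{q}\right)^x\,,\qquad \pi_0 \;=\; \left(\sum_{x=0}^{k}\left(\frac{p}{q}\right)^x\right)^{-1}\,,\]
and reversibility follows because any solution of the detailed balance equations makes the chain, run in equilibrium, statistically identical to its reversal.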
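Exercise 7: each column of the transition matrix sums to \(1\) (the chain is doubly stochastic), so the uniform distribution \(\pi_0=\pi_1=\pi_2=\tfrac13\) is the equilibrium distribution; yet, for instance,
\[\pi_0\,p_{0,1}\;=\;\tfrac13\cdot\tfrac13\;\neq\;\tfrac13\cdot\tfrac23\;=\;\pi_1\,p_{1,0}\,,\]
so the equilibrium distribution does not satisfy detailed balance.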
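Exercise 8: the sketch below assumes the Random Chess example concerns a knight choosing uniformly at random among its legal moves on an \(8\times8\) board; if the example uses a different piece, only the move set changes. The walk is then a random walk on a graph, so the equilibrium probability of a square is proportional to its degree, and the mean return time is the reciprocal of that probability.

```python
# Degrees of the knight's-move graph on an 8 x 8 board; the mean return time to
# a square is (sum of all degrees) / (degree of that square).
KNIGHT_STEPS = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(square):
    """Number of legal knight moves from square (r, c) on an 8 x 8 board."""
    r, c = square
    return sum(0 <= r + dr < 8 and 0 <= c + dc < 8 for dr, dc in KNIGHT_STEPS)

total_degree = sum(degree((r, c)) for r in range(8) for c in range(8))
corner_degree = degree((0, 0))
print(total_degree / corner_degree)   # mean return time to the corner: 336 / 2 = 168
```

Under this knight-move assumption the corner has degree \(2\) and the degrees sum to \(336\), giving a mean return time of \(168\).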
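Exercise 9: a minimal heat-bath (Gibbs) sweep, assuming the Ising measure \(\pi(\mathbf{s})\propto\exp\bigl(J\sum_{j\sim k}s_j s_k\bigr)\) on an \(n\times n\) grid with free boundary and no external field; the grid size, coupling \(J\), and number of sweeps are arbitrary illustrative choices. Resampling the spin at site \(i\) from its conditional distribution given its neighbours is exactly the two-point choice in the displayed formula, which is what makes detailed balance hold. Under the convention \(\pi(\mathbf{s})\propto\exp\bigl(J\sum_{j\sim k}s_j s_k + h\sum_j s_j\bigr)\), an external field \(h\) replaces the exponent \(2Jm\) below by \(2(Jm+h)\), where \(m\) is the neighbour sum.

```python
import numpy as np

rng = np.random.default_rng(0)

n, J = 16, 0.4                              # illustrative grid size and coupling
s = rng.choice([-1, 1], size=(n, n))        # random initial spin configuration

def neighbour_sum(s, i, j):
    """Sum of the spins adjacent to site (i, j), free boundary conditions."""
    total = 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < n:
            total += s[a, b]
    return total

def heat_bath_sweep(s):
    """One sweep: resample every spin from its conditional distribution."""
    for i in range(n):
        for j in range(n):
            m = neighbour_sum(s, i, j)
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * J * m))   # P(spin = +1 | neighbours)
            s[i, j] = 1 if rng.random() < p_plus else -1
    return s

for _ in range(100):
    heat_bath_sweep(s)
print(s.mean())   # rough magnetisation estimate
```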
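Exercise 10: a minimal random-walk Metropolis-Hastings sketch for a one-dimensional target density known only up to a normalising constant; the standard-normal target and the proposal step size are arbitrary illustrative choices. Because the Gaussian proposal is symmetric, the Hastings ratio \(\pi(y)q(y,x)/\bigl(\pi(x)q(x,y)\bigr)\) reduces to \(\pi(y)/\pi(x)\); checking detailed balance for the resulting transition kernel is the verification asked for in the exercise.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    # Unnormalised density of a standard normal (an arbitrary illustration).
    return np.exp(-0.5 * x * x)

def metropolis_hastings(n_steps, x0=0.0, step=1.0):
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + step * rng.normal()                 # symmetric proposal
        accept_prob = min(1.0, target(y) / target(x))
        if rng.random() < accept_prob:              # accept or reject
            x = y
        samples.append(x)
    return np.array(samples)

draws = metropolis_hastings(10_000)
print(draws.mean(), draws.var())   # roughly 0 and 1 for the normal target
```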