Markov chains and reversibility

You will get more from the exercises by trying them before looking at the solutions!

  1. Show that a discrete-time Markov chain run backwards in time from some fixed time \(n\) is again a Markov chain (up to time \(n\)).

Hint: Given a Markov chain \((X_j)_{j\ge 0}\), fix \(n\) and let \(Y_j = X_{n-j}\) for \(j=0,\ldots,n\). We want to show that \(Y\) is a Markov chain up to time \(n\), i.e. that for any \(j<n\) and \(y_0,y_1,\ldots,y_j,y_{j+1}\), \[\operatorname{\mathbb{P}}\left(Y_{j+1}=y_{j+1}|Y_0=y_0,Y_1=y_1,\ldots,Y_j=y_j\right) = \operatorname{\mathbb{P}}\left(Y_{j+1}=y_{j+1}|Y_j=y_j\right).\]

Solution: We start with the left-hand side, writing \(p_{a,b}\) for the transition probabilities of \(X\). Using the Markov property to factorise the joint laws in the third equality, we have \[\begin{align*} &\operatorname{\mathbb{P}}\left(Y_{j+1}=y_{j+1}|Y_0=y_0,Y_1=y_1,\ldots,Y_j=y_j\right)\\ &= \operatorname{\mathbb{P}}\left(X_{n-(j+1)}=y_{j+1} | X_n = y_0, X_{n-1}=y_1,\ldots,X_{n-j}=y_j\right)\\ &= \frac{\operatorname{\mathbb{P}}\left(X_{n-(j+1)}=y_{j+1}, X_n = y_0, X_{n-1}=y_1,\ldots,X_{n-j}=y_j\right)}{\operatorname{\mathbb{P}}\left( X_n = y_0, X_{n-1}=y_1,\ldots,X_{n-j}=y_j\right)}\\ &= \frac{\operatorname{\mathbb{P}}\left(X_{n-(j+1)}=y_{j+1}\right)p_{y_{j+1},y_j}p_{y_{j},y_{j-1}}\ldots p_{y_1,y_0}}{\operatorname{\mathbb{P}}\left(X_{n-j}=y_j\right)p_{y_j,y_{j-1}}p_{y_{j-1},y_{j-2}}\ldots p_{y_1,y_0}}\\ &= \frac{\operatorname{\mathbb{P}}\left(X_{n-(j+1)}=y_{j+1}\right)p_{y_{j+1},y_j}}{\operatorname{\mathbb{P}}\left(X_{n-j}=y_j\right)}\\ &= \frac{\operatorname{\mathbb{P}}\left(X_{n-(j+1)}=y_{j+1}, X_{n-j}=y_j\right)}{\operatorname{\mathbb{P}}\left(X_{n-j}=y_j\right)}\\ &= \operatorname{\mathbb{P}}\left(Y_{j+1}=y_{j+1}|Y_j=y_j\right). \end{align*}\]
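If you would like to see this identity in action, the following brute-force check (not part of the exercise) enumerates all paths of a small chain and confirms that the reversed process satisfies the Markov identity. The 3-state kernel and starting distribution are made-up numbers, chosen strictly positive so that no conditioning event has probability zero.

```python
import itertools
import numpy as np

# A small 3-state chain and an arbitrary (non-equilibrium) start distribution.
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.4, 0.4, 0.2]])
mu = np.array([0.5, 0.3, 0.2])
n = 4  # reverse from time n, i.e. Y_j = X_{n-j}

def path_prob(path):
    """P(X_0 = path[0], ..., X_n = path[n])."""
    p = mu[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a, b]
    return p

def total(fixed):
    """P(Y_i = v for each (i, v) in fixed), i.e. constrain X_{n-i} = v."""
    s = 0.0
    for path in itertools.product(range(3), repeat=n + 1):
        if all(path[n - i] == v for i, v in fixed.items()):
            s += path_prob(path)
    return s

# Check P(Y_{j+1} | Y_0, ..., Y_j) == P(Y_{j+1} | Y_j) for every history.
for j in range(n):
    for hist in itertools.product(range(3), repeat=j + 1):
        for y_next in range(3):
            cond = {i: v for i, v in enumerate(hist)}
            lhs = total({**cond, j + 1: y_next}) / total(cond)
            rhs = total({j: hist[j], j + 1: y_next}) / total({j: hist[j]})
            assert abs(lhs - rhs) < 1e-12, (j, hist, y_next)
```

Note that the backwards transition probabilities here depend on the time \(j\) (the reversed chain is Markov but not time-homogeneous in general); it becomes homogeneous when the chain runs in equilibrium, as in the later exercises.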

  1. Suppose that \(p_{x,y}\) are the transition probabilities of a discrete state-space Markov chain. Show that if the probabilities \(\pi_x\) satisfy the detailed balance equations then they must also satisfy the equilibrium equations.

Hint: Start from the detailed balance equations \(\pi_x p_{xy} = \pi_y p_{yx}\) and sum over \(x\).

Solution: Doing as the hint suggests, for any \(y\), \[\sum_x \pi_x p_{xy} = \sum_x \pi_y p_{yx} = \pi_y \sum_x p_{yx} = \pi_y\] which is exactly the equilibrium equations \(\pi P = \pi\).
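As a quick numerical illustration (the kernel below is a made-up birth-death chain, a class which always admits a detailed-balance solution), one can solve detailed balance by chaining along the edges and then confirm that the result also solves \(\pi P = \pi\):

```python
import numpy as np

# A birth-death chain on {0, 1, 2, 3}; numbers are arbitrary.
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.2, 0.4, 0.4, 0.0],
              [0.0, 0.1, 0.6, 0.3],
              [0.0, 0.0, 0.5, 0.5]])

# Solve detailed balance pi_x p_{x,x+1} = pi_{x+1} p_{x+1,x} by chaining.
pi = np.ones(4)
for x in range(3):
    pi[x + 1] = pi[x] * P[x, x + 1] / P[x + 1, x]
pi /= pi.sum()

# Detailed balance holds pairwise: the matrix pi_x p_{xy} is symmetric ...
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)
# ... and therefore pi solves the equilibrium equations pi P = pi.
assert np.allclose(pi @ P, pi)
```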

  1. Show that unconstrained simple symmetric random walk has period \(2\). Show that simple symmetric random walk subject to “prohibition” boundary conditions must be aperiodic.

Hint: “Unconstrained” means no boundary, i.e. on the whole of \(\mathbb{Z}\), whereas “prohibition” boundary conditions means that the random walk moves on \(\{0,1,\ldots,k\}\) for some \(k\), and when it reaches \(0\) or \(k\), it stays where it is with probability \(1/2\) (rather than moving to \(-1\) or \(k+1\)).

Solution: If the unconstrained random walk starts on an even integer, then it will always be on even integers at even times and odd integers at odd times; and vice versa if it starts on an odd integer. In particular the walk can return to its starting point only at even times, and \(\operatorname{\mathbb{P}}\left(X_2=X_0\right)=1/2>0\), so the period is exactly \(2\).

Now consider the “prohibition” random walk. For any sites \(x,y\in\{0,1,\ldots,k\}\) and any time \(n>2k\), the random walk started from site \(x\) has positive probability of taking its first \(x\) steps in the negative direction, then remaining at \(0\) for the next \(n-x-y\) steps, and then taking its last \(y\) steps in the positive direction. (In fact it has probability \(1/2^n\) of doing so.) In particular, taking \(y=x\), the walk can return to \(x\) at every time \(n>2k\); the greatest common divisor of these return times is \(1\), so the walk is aperiodic.
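The path-counting argument above can be checked by brute force: for any \(n > 2k\), every entry of \(P^n\) should be positive, and indeed at least \(1/2^n\). A short sketch (with \(k=4\) chosen arbitrarily):

```python
import numpy as np

k = 4  # state space {0, ..., k}

# SSRW with "prohibition" boundaries: the blocked step becomes a stay-put,
# so at 0 and k the walk stays where it is with probability 1/2.
P = np.zeros((k + 1, k + 1))
for x in range(k + 1):
    P[x, max(x - 1, 0)] += 0.5
    P[x, min(x + 1, k)] += 0.5

# For n > 2k every entry of P^n is at least 1/2**n, as in the solution.
n = 2 * k + 1
Pn = np.linalg.matrix_power(P, n)
assert (Pn >= 0.5 ** n - 1e-15).all()
```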

  1. Solve the equilibrium equations \(\pi P = \pi\) for simple symmetric random walk on \(\{0, 1, \ldots, k\}\) subject to “prohibition” boundary conditions.

Hint: Recall the equilibrium equations \(\pi P = \pi\), i.e. \(\sum_x \pi_x p_{xy} = \pi_y\) for all \(y\). You will also need to use \(\sum_x \pi_x = 1\).

Solution: When \(y=0\), then \(\sum_x \pi_x p_{x0} = \pi_0/2 + \pi_1/2\), so setting this equal to \(\pi_0\) we see that \(\pi_1 = \pi_0\). When \(y=1\), then \(\sum_x \pi_x p_{x1} = \pi_0/2 + \pi_2/2\), so setting this equal to \(\pi_1\) and using that \(\pi_0=\pi_1\) we see that \(\pi_2 = \pi_1\). Continuing in this way we get \(\pi_0=\pi_1=\pi_2=\ldots=\pi_k\). Finally, \(\sum_x\pi_x=1\) implies that \(\pi_x = 1/(k+1)\) for all \(x\).
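A one-line sanity check of this answer in Python (with \(k=6\) picked arbitrarily) is to build the transition matrix and verify that the uniform vector is stationary:

```python
import numpy as np

# SSRW on {0, ..., k} with "prohibition" boundaries, as in the exercise.
k = 6
P = np.zeros((k + 1, k + 1))
for x in range(k + 1):
    P[x, max(x - 1, 0)] += 0.5
    P[x, min(x + 1, k)] += 0.5

# The uniform distribution found above solves pi P = pi.
pi = np.full(k + 1, 1 / (k + 1))
assert np.allclose(pi @ P, pi)

# Equivalently: every column of P also sums to 1 (P is doubly stochastic),
# which already forces the uniform distribution to be stationary.
assert np.allclose(P.sum(axis=0), 1.0)
```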

  1. Suppose that \(X_0, X_1, \ldots\) is a simple symmetric random walk with “prohibition” boundary conditions as above, running in equilibrium.
    • Use the definition of conditional probability to compute \[\overline{p}_{y,x}=\frac{\operatorname{\mathbb{P}}\left(X_{n-1}=x\,,\,X_n=y\right)}{\operatorname{\mathbb{P}}\left(X_n=y\right)}\,,\]
    • then show that \[\frac{\operatorname{\mathbb{P}}\left(X_{n-1}=x\,,\,X_n=y\right)}{\operatorname{\mathbb{P}}\left(X_n=y\right)}=\frac{\operatorname{\mathbb{P}}\left(X_{n-1}=x\right)p_{x,y}}{\operatorname{\mathbb{P}}\left(X_n=y\right)}\,,\]
    • now substitute, using \(\operatorname{\mathbb{P}}\left(X_n=i\right)=\tfrac{1}{k+1}\) for all \(i\) so \(\overline{p}_{y,x}=p_{x,y}\).
    • Use the symmetry of the kernel (\(p_{x,y}=p_{y,x}\)) to show that the backwards kernel \(\overline{p}_{y,x}\) is the same as the forwards kernel \(\overline{p}_{y,x}=p_{y,x}\).
  2. Show that if \(X_0, X_1, \ldots,\) is a simple asymmetric random walk with “prohibition” boundary conditions, running in equilibrium, then it also has the same statistical behaviour as its reversed chain (i.e. solve the detailed balance equations!).

Hint: Recall that the detailed balance equations are \(\pi_x p_{xy} = \pi_y p_{yx}\) for all \(x\) and \(y\). Note that \(p_{xy}\) is zero unless \(x\) and \(y\) are neighbours. Start from \(x=0\) and \(y=1\) and work your way up.

Solution: Say we jump from \(x\) to \(x+1\) with probability \(p\), and from \(x+1\) to \(x\) with probability \(1-p\). For each \(x\in\{0,\ldots,k-1\}\) the detailed balance equation for the pair \(x,x+1\) reads \(\pi_x p = \pi_{x+1}(1-p)\) so, chaining together, \(\pi_j = \rho^j\pi_0\) where \(\rho = \frac{p}{1-p}\). All the other detailed balance equations are trivially satisfied, since \(p_{xy}=0\) whenever \(x\) and \(y\) are distinct non-neighbours. Finally, \(\sum_j \pi_j = 1\) fixes \(\pi_0 = (1-\rho)/(1-\rho^{k+1})\) (for \(p\neq 1/2\)), by summing the geometric series.
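This solution, and the resulting reversibility, can be checked numerically. The sketch below (with made-up values \(p=0.7\), \(k=5\)) assumes the natural asymmetric version of the “prohibition” boundary: any blocked step becomes a stay-put.

```python
import numpy as np

p, k = 0.7, 5
rho = p / (1 - p)

# Asymmetric walk with "prohibition" boundaries: up with probability p,
# down with probability 1 - p; a blocked step becomes a stay-put.
P = np.zeros((k + 1, k + 1))
for x in range(k + 1):
    P[x, min(x + 1, k)] += p
    P[x, max(x - 1, 0)] += 1 - p

pi = rho ** np.arange(k + 1)
pi /= pi.sum()  # normalisation fixes pi_0

# Detailed balance pi_x p_{xy} = pi_y p_{yx} ...
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)
# ... hence pi is an equilibrium distribution ...
assert np.allclose(pi @ P, pi)
# ... and in equilibrium the backwards kernel equals the forwards kernel.
P_back = (pi[:, None] * P).T / pi[:, None]
assert np.allclose(P_back, P)
```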

  1. Show that detailed balance doesn’t work for the \(3\)-state chain with transition probabilities \(\tfrac{1}{3}\) for \(0\to1\), \(1\to2\), \(2\to0\) and \(\tfrac{2}{3}\) for \(2\to1\), \(1\to0\), \(0\to2\).

Hint: Proceed as in the previous question, using detailed balance to get \(\pi_1\) in terms of \(\pi_0\) and then \(\pi_2\) in terms of \(\pi_1\). But then since \(2\) is also a neighbour of \(1\), we have a third detailed balance equation to get \(\pi_0\) in terms of \(\pi_2\). Show that these three equations have no non-trivial solution.
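The contradiction the hint points at can be exhibited with exact arithmetic. A short sketch using Python's `fractions` (the matrix just encodes the transition probabilities from the question):

```python
from fractions import Fraction as F

# Transition matrix: p_{01} = p_{12} = p_{20} = 1/3 and
# p_{10} = p_{21} = p_{02} = 2/3.
P = [[F(0), F(1, 3), F(2, 3)],
     [F(2, 3), F(0), F(1, 3)],
     [F(1, 3), F(2, 3), F(0)]]

pi0 = F(1)                     # work relative to pi_0
pi1 = pi0 * P[0][1] / P[1][0]  # balance the edge 0-1: pi_1 = pi_0 / 2
pi2 = pi1 * P[1][2] / P[2][1]  # balance the edge 1-2: pi_2 = pi_0 / 4
assert (pi1, pi2) == (F(1, 2), F(1, 4))

# The third equation pi_2 p_{20} = pi_0 p_{02} now fails (1/12 vs 2/3),
# so detailed balance has no solution other than pi = 0.
assert pi2 * P[2][0] != pi0 * P[0][2]

# The chain is doubly stochastic, so it does have an equilibrium (the
# uniform distribution) -- it just is not reversible.
assert all(sum(P[x][y] for x in range(3)) == 1 for y in range(3))
```

This is a standard example of a chain with an equilibrium distribution that nevertheless fails detailed balance: in equilibrium it circulates around the cycle \(0\to 2\to 1\to 0\) more often than the other way.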

  1. Work through the Random Chess example to compute the mean return time to a corner of the chessboard.

The solution is in the lecture notes.

  1. Verify for the Ising model that \[\operatorname{\mathbb{P}}\left(\mathbf{S} = \mathbf{s}^{(i)}\Big|\mathbf{S} \in \{\mathbf{s},\mathbf{s}^{(i)}\}\right) = \frac{\exp\left(-J\sum_{j:j\sim i}s_i s_j\right)} {\exp\left(J\sum_{j:j\sim i}s_i s_j\right)+\exp\left(-J\sum_{j:j\sim i}s_i s_j\right)}\,.\] Determine how this changes in the presence of an external field. Confirm that detailed balance holds for the heat-bath Markov chain.
  2. Write down the transition probabilities for the Metropolis-Hastings sampler. Verify that it has the desired probability distribution as an equilibrium distribution.

Solution: For \(y\neq x\), the probability that we move from \(x\) to \(y\) is the probability that \(y\) is proposed from \(x\) and that the proposal is then accepted; that is, \(p_{xy} = q(x,y)\alpha(x,y)\). The remaining probability mass stays put: \(p_{xx} = 1 - \sum_{y\neq x} q(x,y)\alpha(x,y)\) for each \(x\).

To show that \(\pi\) is an equilibrium distribution for this chain, we check that it solves detailed balance. Suppose first that \(\pi(y)q(y,x)\le \pi(x)q(x,y)\). Then from the above and the definition of \(\alpha\), \[\begin{align*} \pi(x)p_{xy} &= \pi(x)q(x,y)\alpha(x,y)\\ &= \pi(x)q(x,y)\cdot \frac{\pi(y)q(y,x)}{\pi(x)q(x,y)}\\ &= \pi(y)q(y,x)\\ &= \pi(y)q(y,x)\alpha(y,x)\\ &= \pi(y)p_{yx} \end{align*}\] where for the fourth equality we used the fact that \(\alpha(y,x)=1\).

If on the other hand \(\pi(y)q(y,x)>\pi(x)q(x,y)\), the same calculation works with \(x\) and \(y\) exchanged. Thus detailed balance holds, and the desired probability distribution \(\pi\) is an equilibrium distribution for this Markov chain.
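To see the whole argument concretely, here is a sketch that builds the Metropolis-Hastings transition matrix for a small made-up target \(\pi\) and proposal kernel \(q\) (both chosen at random, with \(q\) strictly positive), then verifies detailed balance and \(\pi P = \pi\) exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up target on 5 states and a made-up positive proposal kernel q.
pi = np.array([0.1, 0.3, 0.2, 0.25, 0.15])
q = rng.random((5, 5))
q /= q.sum(axis=1, keepdims=True)

# Acceptance probabilities alpha(x,y) = min(1, pi(y)q(y,x) / (pi(x)q(x,y))).
alpha = np.minimum(1.0, (pi[None, :] * q.T) / (pi[:, None] * q))

# Off-diagonal: p_{xy} = q(x,y) alpha(x,y); diagonal: the leftover mass.
P = q * alpha
np.fill_diagonal(P, 0.0)
np.fill_diagonal(P, 1.0 - P.sum(axis=1))

# Detailed balance holds, hence pi is an equilibrium distribution.
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)
assert np.allclose(pi @ P, pi)
```

Note that the check only ever uses the ratio \(\pi(y)/\pi(x)\), which is why Metropolis-Hastings works even when \(\pi\) is known only up to a normalising constant.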