\(\star\)E7.1. Consider the simple initial-value problem: \[\frac{dy}{dt}=y,\quad t\geq 0,\quad y(0)=1.\]

  1. Write down the exact solution \(y(h)\) for any \(h>0\).

  2. Taking \(h\) as the time step for Euler’s method, find \(Y_{1}\) and write down the error \(e_1 = y(h)-Y_{1}\) as an expansion in powers of \(h\).

  3. Repeat for the Crank–Nicolson method, as defined in (4.16).

  4. Repeat for the improved Euler method, as defined in (4.18), (4.19).

In each case, you should see that the error after one step of the method converges to \(0\) one power of \(h\) faster than the truncation error does; the short numerical check sketched below illustrates this.
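If you want to verify your expansions numerically, the following minimal sketch compares one step of each method against the exact solution for two step sizes and estimates the observed order of the one-step error. The Crank–Nicolson and improved Euler formulas used here are the standard ones and are assumed to match (4.16) and (4.18)–(4.19) in the notes, which are not reproduced on this sheet.

```python
import math

# One step of each method for dy/dt = y, y(0) = 1, starting from Y0 = 1.
# The Crank-Nicolson and improved Euler formulas below are the standard ones
# and are assumed to match (4.16) and (4.18)-(4.19) in the printed notes.

def euler_step(h):
    return 1.0 + h                        # Y1 = Y0 + h*f(Y0)

def crank_nicolson_step(h):
    # Y1 = Y0 + (h/2)*(f(Y0) + f(Y1)); solvable in closed form since f(y) = y.
    return (1.0 + h / 2) / (1.0 - h / 2)

def improved_euler_step(h):
    k1 = 1.0                              # f(Y0)
    k2 = 1.0 + h * k1                     # f(Y0 + h*f(Y0))
    return 1.0 + (h / 2) * (k1 + k2)

for name, step in [("Euler", euler_step),
                   ("Crank-Nicolson", crank_nicolson_step),
                   ("improved Euler", improved_euler_step)]:
    errors = [abs(math.exp(h) - step(h)) for h in (1e-1, 1e-2)]
    # error ~ C*h^p, so comparing h = 0.1 and h = 0.01 gives a slope close to p
    slope = math.log10(errors[0] / errors[1])
    print(f"{name:>15}: observed one-step error order ~ {slope:.2f}")
```

The printed slope is the observed order of the one-step error, which you can compare against the expansions you derived by hand.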

E7.2. Consider the improved Euler method for the problem \[\frac{dy}{dt} = f(y), \quad y(0) = y_0,\] written in the form \(Y_{n} = Y_{n-1} + h F_h(Y_{n-1})\) for a suitable function \(F_h\) (as defined in the printed notes after (4.19)).

Use Taylor expansions to show that the truncation error for this method satisfies \[\tau_{n+1} = C_n \, h^2 + \mathcal{O}(h^3),\] where \(C_n\) is independent of \(h\) but depends on \(f^{(k)}(y(t_n))\) for \(k=0,1,2\), in a way that you should specify. You may assume that \(f\) has as many derivatives as you need.
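Since (4.18)–(4.19) are not reproduced on this sheet: if the improved Euler method in the notes is the usual Heun-type scheme, the increment function can be written as \[F_h(Y) = \tfrac{1}{2}\Bigl( f(Y) + f\bigl(Y + h\,f(Y)\bigr) \Bigr),\] but you should work with whichever form the printed notes give.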

\(\star\)E7.3. Complete the proof of Theorem 4.7: Suppose that \(f\) is Lipschitz continuous with \(L\) denoting the Lipschitz constant. Let \(h>0\) and let \(n \in \mathbb{N}\) be such that \(t_n = nh \le T\). If \(y(t_n)\) is the solution to (4.2) at time \(t_n\) and \(Y_n\) is the \(n\)th iterate of the Crank–Nicolson method defined in (4.16), you may assume that \(e_n = y(t_n) - Y_n\) satisfies \[| e_{n} | \leq (1+hL)^2 | e_{n-1}| + h(1+hL) | \tau_n | \quad \text{for all $n\ge 1$.}\]

  1. Show by induction that \[| e_{n} | \leq h \sum_{j=0}^{n-1} (1+hL)^{2j+1} | \tau_{n-j} |.\]

  2. Hence deduce that \[| e_{n} | \leq \frac{\exp(2TL)-1}{L} \; \max_{1 \le j \le n} | \tau_{j} |.\]
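As a quick check of the base case in part 1 (assuming \(Y_0 = y(0)\), so that \(e_0 = 0\)), the assumed recursion with \(n=1\) gives \[| e_1 | \leq (1+hL)^2 | e_0 | + h(1+hL) | \tau_1 | = h(1+hL) | \tau_1 |,\] which is exactly the \(n=1\) case of the claimed sum.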