Chapter 20 \(H^\infty \) control: measurement feedback

We consider input-state-output systems with a state \(x:[0,\infty )\to \mR ^n\), external input \(w:[0,\infty )\to \mR ^{m_1}\), control input \(u:[0,\infty )\to \mR ^{m_2}\), performance output \(z:[0,\infty )\to \mR ^{p_1}\) and measured output \(y:[0,\infty )\to \mR ^{p_2}\) described by

\begin{equation} \label {eq:Hinf:xzy} \dot {x}=Ax+B_1w+B_2u,\qquad z=C_1x+D_{12}u,\qquad y=C_2x+D_{21}w, \end{equation}

with the initial condition \(x(0)=x^0\) where \(x^0\in \mR ^n\) and

\[ A\in \mR ^{n\times n},~ B_1\in \mR ^{n\times m_1},~ B_2\in \mR ^{n\times m_2},~ C_1\in \mR ^{p_1\times n},~ D_{12}\in \mR ^{p_1\times m_2},~ C_2\in \mR ^{p_2\times n},~ D_{21}\in \mR ^{p_2\times m_1}. \]

  • Definition 20.1 (\(H^\infty \) measurement feedback problem) The objective is, for a given \(\gamma >0\), to find matrices

    \[ A_c\in \mR ^{n_c\times n_c},~ B_c\in \mR ^{n_c\times p_2},~ C_c\in \mR ^{m_2\times n_c},~ D_c\in \mR ^{m_2\times p_2}, \]

    such that with the control

    \begin{equation} \label {eq:Hinfty:controller} \dot {x}_c=A_cx_c+B_cy,\quad u=C_cx_c+D_cy, \end{equation}

    we have

    • 1. \(\sup _{\omega \in \mR } \|F_{zw}(\omega )\|<\gamma \), where \(F_{zw}\) is the frequency response of the combination of (20.1) and (20.2),

    • 2. for all \(x^0\in \mR ^n\), \(x^0_c\in \mR ^{n_c}\) and \(w=0\) we have \(\lim _{t\to \infty }x(t)=0\) and \(\lim _{t\to \infty }x_c(t)=0\).

  • Theorem 20.2 Assume that \(D_{12}\) is injective, that the Rosenbrock matrix

    \[ \bbm {sI-A&-B_2\\C_1&D_{12}} \]

    is injective for all \(s\in \mC \) with \(\re (s)=0\), that \(D_{21}\) is surjective and that the Rosenbrock matrix

    \[ \bbm {sI-A&-B_1\\C_2&D_{21}} \]

    is surjective for all \(s\in \mC \) with \(\re (s)=0\).

    Let \(\gamma >0\). The following are equivalent:

    • 1. The \(H^\infty \) measurement feedback problem as given in Definition 20.1 is solvable for this \(\gamma \).

    • 2. There exist (necessarily unique) symmetric positive semi-definite solutions \(X\) and \(Y\) of the algebraic Riccati equations

      \begin{gather} A^*X+XA+C_1^*C_1+\gamma ^{-2}XB_1B_1^*X-(XB_2+C_1^*D_{12})(D_{12}^*D_{12})^{-1}(B_2^*X+D_{12}^*C_1)=0,\notag \\ AY+YA^*+B_1B_1^*+\gamma ^{-2}YC_1^*C_1Y-(YC_2^*+B_1D_{21}^*)(D_{21}D_{21}^*)^{-1}(C_2Y+D_{21}B_1^*)=0, \label {eq:Y:Hinfty} \end{gather} such that the spectral radius condition \(r(XY)<\gamma ^2\) holds and such that the following two matrices are asymptotically stable:

      \begin{gather*} A+\gamma ^{-2}B_1B_1^*X-B_2(D_{12}^*D_{12})^{-1}\left (D_{12}^*C_1+B_2^*X\right ), \\ A+\gamma ^{-2}YC_1^*C_1-(B_1D_{21}^*+YC_2^*)(D_{21}D_{21}^*)^{-1}C_2. \end{gather*}

    A specific controller which achieves the objective is given by

    \[ A_c=A+B_2F-LC_2+(B_1-LD_{21})\gamma ^{-2}B_1^*X,\qquad B_c=L,\qquad C_c=F,\qquad D_c=0, \]

    where

    \begin{gather*} F=-(D_{12}^*D_{12})^{-1}\left (D_{12}^*C_1+B_2^*X\right ), \\ L=(I-\gamma ^{-2}YX)^{-1}(B_1D_{21}^*+YC_2^*)(D_{21}D_{21}^*)^{-1}. \end{gather*}

  • Remark 20.3.  The spectral radius \(r(M)\) of a square matrix \(M\) is the maximum of the absolute values of the eigenvalues of the matrix (i.e. the radius of the smallest closed disc with center zero which contains all the eigenvalues of \(M\)).
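For numerical work it can be convenient to have the controller of Theorem 20.2 in computational form. The following is a minimal Python/NumPy sketch that simply transcribes the formulas above; it assumes that the stabilizing solutions \(X\) and \(Y\) and the value of \(\gamma \) have already been obtained, for instance by hand as in the examples below.

```python
import numpy as np

def central_controller(A, B1, B2, C1, D12, C2, D21, X, Y, gamma):
    """Assemble (A_c, B_c, C_c, D_c) from the formulas in Theorem 20.2.

    X and Y are assumed to be the stabilizing positive semi-definite
    Riccati solutions, with the spectral radius condition r(XY) < gamma**2
    already verified.
    """
    n = A.shape[0]
    R12 = D12.T @ D12                      # D12* D12, invertible since D12 is injective
    R21 = D21 @ D21.T                      # D21 D21*, invertible since D21 is surjective
    F = -np.linalg.solve(R12, D12.T @ C1 + B2.T @ X)
    L = np.linalg.solve(np.eye(n) - gamma**-2 * Y @ X,
                        B1 @ D21.T + Y @ C2.T) @ np.linalg.inv(R21)
    Ac = A + B2 @ F - L @ C2 + gamma**-2 * (B1 - L @ D21) @ B1.T @ X
    return Ac, L, F, np.zeros((B2.shape[1], C2.shape[0]))
```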

20.1 Examples

  • Example 20.4.  Consider the first order system

    \[ \dot {x}(t)+x(t)=w_1(t)+u(t), \]

    with the performance output \(z\) and measured output \(y\)

    \[ z(t)=\bbm {x(t)\\\varepsilon u(t)},\qquad y(t)=x(t)+\delta w_2(t), \]

    where \(\varepsilon ,\delta >0\). To put this into the \(H^\infty \) measurement feedback framework, we have \(n=m_2=p_2=1\), \(m_1=p_1=2\) and

    \[ A=-1,\quad B_1=\bbm {1&0},\quad B_2=1,\quad C_1=\bbm {1\\0},\quad D_{12}=\bbm {0\\\varepsilon },\quad C_2=1,\quad D_{21}=\bbm {0&\delta }. \]

    The Riccati equation (20.3) is

    \[ -2Y+1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y^2=0. \]

    When \(\gamma =\delta \) we obtain the unique solution

    \[ Y=\frac {1}{2}, \]

    whereas otherwise we obtain the two solutions

    \[ Y=\frac {\gamma ^2\delta ^2\pm \gamma \delta \sqrt {\gamma ^2\delta ^2+(\gamma ^2-\delta ^2)}}{\delta ^2-\gamma ^2}. \]

    For this to be real we need \(\gamma ^2\delta ^2+\gamma ^2-\delta ^2\geq 0\), i.e.

    \[ \gamma \geq \frac {\delta }{\sqrt {\delta ^2+1}}. \]

    We have

    \begin{align*} A+\gamma ^{-2}YC_1^*C_1-(B_1D_{21}^*+YC_2^*)(D_{21}D_{21}^*)^{-1}C_2 &=-1+\left (\frac {1}{\gamma ^2}-\frac {1}{\delta ^2}\right )Y \\&=-1+1\pm \gamma ^{-1}\delta ^{-1}\sqrt {\gamma ^2\delta ^2+\gamma ^2-\delta ^2} \\&=\pm \gamma ^{-1}\delta ^{-1}\sqrt {\gamma ^2\delta ^2+\gamma ^2-\delta ^2}, \end{align*} and from the requirement that this must be stable we see that we must have the minus sign and we must have \(\gamma >\frac {\delta }{\sqrt {\delta ^2+1}}\) (rather than only the non-strict inequality we saw above). We therefore have

    \[ Y=\frac {\gamma ^2\delta ^2-\gamma \delta \sqrt {\gamma ^2\delta ^2+\gamma ^2-\delta ^2}}{\delta ^2-\gamma ^2}. \]

    From Example 19.6 and above we have the conditions

    \[ \gamma >\frac {\varepsilon }{\sqrt {\varepsilon ^2+1}},\qquad \gamma >\frac {\delta }{\sqrt {\delta ^2+1}}. \]

    We further need the spectral radius condition which in the \(n=1\) case just reduces to \(XY<\gamma ^2\). With the \(Y\) computed above and the \(X\) from Example 19.6 this gives

    \[ \frac {-\varepsilon ^2\gamma ^2+\varepsilon \gamma \sqrt {\gamma ^2(\varepsilon ^2+1)-\varepsilon ^2}}{\gamma ^2-\varepsilon ^2}\cdot \frac {\gamma ^2\delta ^2-\gamma \delta \sqrt {\gamma ^2\delta ^2+\gamma ^2-\delta ^2}}{\delta ^2-\gamma ^2}<\gamma ^2. \]

    Dividing both sides by \(\gamma ^2\) gives the equivalent

    \[ \frac {-\varepsilon ^2\gamma +\varepsilon \sqrt {\gamma ^2(\varepsilon ^2+1)-\varepsilon ^2}}{\gamma ^2-\varepsilon ^2}\cdot \frac {\gamma \delta ^2-\delta \sqrt {\gamma ^2\delta ^2+\gamma ^2-\delta ^2}}{\delta ^2-\gamma ^2}<1. \]

    This seems difficult to express as a more explicit condition on \(\gamma \).
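    The formulas in this example can be sanity-checked numerically. The sketch below (assuming NumPy, with arbitrary illustrative values of \(\varepsilon \), \(\delta \) and \(\gamma \) satisfying the conditions derived above) verifies that the quoted \(X\) and \(Y\) satisfy the two scalar Riccati equations and evaluates the spectral radius condition.

```python
import numpy as np

eps, delta, gamma = 0.5, 0.8, 1.2   # arbitrary values satisfying the conditions above

# X from Example 19.6 and Y as derived above (everything is scalar here)
X = (-eps**2*gamma**2 + eps*gamma*np.sqrt(gamma**2*(eps**2 + 1) - eps**2)) / (gamma**2 - eps**2)
Y = (gamma**2*delta**2 - gamma*delta*np.sqrt(gamma**2*delta**2 + gamma**2 - delta**2)) / (delta**2 - gamma**2)

# residuals of the scalar Riccati equations -2X+1+(gamma^-2-eps^-2)X^2=0 and (20.3)
print(-2*X + 1 + (gamma**-2 - eps**-2)*X**2)    # ~0
print(-2*Y + 1 + (gamma**-2 - delta**-2)*Y**2)  # ~0
print(X*Y < gamma**2)                           # spectral radius condition r(XY) < gamma^2
```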

  • Example 20.5.  Consider the undamped second order system

    \[ \ddot {q}(t)+q(t)=w_1(t)+u(t), \]

    with the state \(x=\sbm {q\\\dot {q}}\) and the performance output

    \[ z=\bbm {\dot {q}\\\varepsilon u}, \]

    where \(\varepsilon >0\), and the measured output

    \[ y=\dot {q}+\delta w_2. \]

    To put this into the \(H^\infty \) measurement feedback framework, we have \(n=2\), \(m_1=2\), \(m_2=1\), \(p_1=2\), \(p_2=1\) and

    \begin{gather*} A=\bbm {0&1\\-1&0},\quad B_1=\bbm {0&0\\1&0},\quad B_2=\bbm {0\\1},\quad C_1=\bbm {0&1\\0&0},\quad D_{12}=\bbm {0\\\varepsilon }, \\ C_2=\bbm {0&1},\quad D_{21}=\bbm {0&\delta }. \end{gather*} We already solved the Riccati equation (19.4) in Example 19.7, which gave the condition

    \[ \gamma >\varepsilon , \]

    and

    \[ X=\left (\varepsilon ^{-2}-\gamma ^{-2}\right )^{-1/2}\bbm {1&0\\0&1}, \qquad F=\bbm {0&-\varepsilon ^{-2}\left (\varepsilon ^{-2}-\gamma ^{-2}\right )^{-1/2}}. \]

    The Riccati equation (20.3) is (using that \(Y\) is symmetric, that \(B_1D_{21}^*=0\) and \(D_{21}D_{21}^*=\delta ^2\))

    \begin{multline*} \bbm {0&1\\-1&0}\bbm {Y_1&Y_0\\Y_0&Y_2} +\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0&-1\\1&0} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} \\ +\gamma ^{-2}\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0&0\\1&0}\bbm {0&1\\0&0}\bbm {Y_1&Y_0\\Y_0&Y_2} -\delta ^{-2}\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0\\1}\bbm {0&1}\bbm {Y_1&Y_0\\Y_0&Y_2} =\bbm {0&0\\0&0}, \end{multline*} which is

    \[ \bbm {2Y_0&Y_2-Y_1\\Y_2-Y_1&-2Y_0} +\bbm {0&0\\0&1} +\left (\gamma ^{-2}-\delta ^{-2}\right )\bbm {Y_0^2&Y_0Y_2\\Y_0Y_2&Y_2^2} =\bbm {0&0\\0&0}, \]

    which is

    \[ \bbm {2Y_0+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_0^2 &Y_2-Y_1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_0Y_2 \\Y_2-Y_1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_0Y_2 &-2Y_0+1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_2^2 }=\bbm {0&0\\0&0}. \]

    From the top-left corner we obtain \(Y_0=0\) or \(Y_0=2\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1}\).

    We first consider the case \(Y_0=2\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1}\). The off-diagonal entry then gives

    \[ -Y_1-Y_2=0, \]

    which, since \(Y_1,Y_2\geq 0\) (the diagonal entries of a symmetric positive semi-definite matrix are non-negative), gives \(Y_1=Y_2=0\). The determinant of \(Y\) then equals \(-Y_0^2<0\), which contradicts that \(Y\) is symmetric positive semi-definite. We can therefore ignore this case.

    We now consider the case \(Y_0=0\). The bottom-right corner then gives that we must have \(\gamma >\delta \) and that \(Y_2=\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2}\) and the off-diagonal entry gives \(Y_1=\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2}\). Hence

    \[ Y=\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2} \bbm {1&0\\0&1}. \]

    We then have (using again that \(B_1D_{21}^*=0\) and \(D_{21}D_{21}^*=\delta ^2\)):

    \begin{gather*} A+\gamma ^{-2}YC_1^*C_1-(B_1D_{21}^*+YC_2^*)(D_{21}D_{21}^*)^{-1}C_2 =\bbm {0&1\\-1&0}+\left (\gamma ^{-2}-\delta ^{-2}\right )Y\bbm {0&0\\0&1} \\ =\bbm {0&1\\-1&-\left (\delta ^{-2}-\gamma ^{-2}\right )^{1/2}}, \end{gather*} which is stable.

    For the spectral radius condition we consider

    \begin{align*} r(XY)&=r\left ( \left (\varepsilon ^{-2}-\gamma ^{-2}\right )^{-1/2}\bbm {1&0\\0&1} \left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2} \bbm {1&0\\0&1} \right ) \\&=r\left ( \left (\varepsilon ^{-2}-\gamma ^{-2}\right )^{-1/2}\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2} \bbm {1&0\\0&1} \right ) \\&=\left (\varepsilon ^{-2}-\gamma ^{-2}\right )^{-1/2}\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2}, \end{align*} where we used that the spectral radius of the identity matrix equals one and that the constant in front of the identity matrix is positive. The spectral radius condition then is

    \[ \left (\varepsilon ^{-2}-\gamma ^{-2}\right )^{-1/2}\left (\delta ^{-2}-\gamma ^{-2}\right )^{-1/2}<\gamma ^2. \]

    Squaring both sides (and using that both sides are positive) and re-arranging gives that this is equivalent to

    \[ 1<\gamma ^4\left (\varepsilon ^{-2}-\gamma ^{-2}\right )\left (\delta ^{-2}-\gamma ^{-2}\right ), \]

    and re-arranging further gives

    \[ \varepsilon ^{-2}\delta ^{-2}\gamma ^4-\left (\varepsilon ^{-2}+\delta ^{-2}\right )\gamma ^2>0. \]

    We then see that the spectral radius condition is equivalent to

    \[ \gamma ^2>\delta ^2+\varepsilon ^2, \]

    which we note is stronger than the individual conditions \(\gamma >\varepsilon \) and \(\gamma >\delta \) which we saw above.
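    The computations in this example can be verified numerically. The sketch below (assuming NumPy, with arbitrary illustrative values satisfying \(\gamma ^2>\varepsilon ^2+\delta ^2\)) checks the residual of the Riccati equation (20.3) for the computed \(Y\) and evaluates both forms of the spectral radius condition.

```python
import numpy as np

eps, delta, gamma = 0.6, 0.7, 1.0   # arbitrary values with gamma^2 > eps^2 + delta^2
A  = np.array([[0., 1.], [-1., 0.]])
B1 = np.array([[0., 0.], [1., 0.]]); B2 = np.array([[0.], [1.]])
C1 = np.array([[0., 1.], [0., 0.]]); D12 = np.array([[0.], [eps]])
C2 = np.array([[0., 1.]]);           D21 = np.array([[0., delta]])

X = (eps**-2 - gamma**-2)**-0.5 * np.eye(2)
Y = (delta**-2 - gamma**-2)**-0.5 * np.eye(2)

# residual of the Y Riccati equation (20.3); the X equation can be checked analogously
resY = (A @ Y + Y @ A.T + B1 @ B1.T + gamma**-2 * Y @ C1.T @ C1 @ Y
        - (Y @ C2.T + B1 @ D21.T) @ np.linalg.inv(D21 @ D21.T) @ (C2 @ Y + D21 @ B1.T))
print(np.max(np.abs(resY)))                                   # ~0
print(np.max(np.abs(np.linalg.eigvals(X @ Y))) < gamma**2,    # r(XY) < gamma^2 ...
      gamma**2 > eps**2 + delta**2)                           # ... equivalently gamma^2 > eps^2 + delta^2
```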


Figure 20.1: \(H^2\) and \(H^\infty \) measurement feedback for second order system.

20.2 Problems

  • (a) Consider the undamped second order system

    \[ \ddot {q}(t)+q(t)=w_1(t)+u(t), \]

    with the state \(x=\sbm {q\\\dot {q}}\) and the performance output

    \[ z=\bbm {q\\\varepsilon u}, \]

    where \(\varepsilon ,\delta >0\), and the measured output

    \[ y=q+\delta w_2. \]

    • (i) Write this in the standard form (16.1).

    • (ii) Determine whether or not \(D_{21}\) is surjective.

    • (iii) Determine whether or not \((A,C_2)\) is detectable.

    • (iv) Determine whether or not the Rosenbrock surjectivity condition holds.

    • (v) Obtain the relevant solution of the Riccati equation (20.3).

  • Solution. (i) We have

    \begin{gather*} A=\bbm {0&1\\-1&0},\quad B_1=\bbm {0&0\\1&0},\quad B_2=\bbm {0\\1},\quad C_1=\bbm {1&0\\0&0},\quad D_{12}=\bbm {0\\\varepsilon }, \\ C_2=\bbm {1&0},\quad D_{21}=\bbm {0&\delta }. \end{gather*} (ii), (iii), (iv) We already checked the surjectivity conditions and the detectability condition in Section 16.2.

    The Riccati equation (20.3) is (using that \(Y\) is symmetric, that \(B_1D_{21}^*=0\) and \(D_{21}D_{21}^*=\delta ^2\))

    \begin{multline*} \bbm {0&1\\-1&0}\bbm {Y_1&Y_0\\Y_0&Y_2} +\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0&-1\\1&0} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} \\ +\gamma ^{-2}\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {1&0\\0&0}\bbm {1&0\\0&0}\bbm {Y_1&Y_0\\Y_0&Y_2} -\delta ^{-2}\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {1\\0}\bbm {1&0}\bbm {Y_1&Y_0\\Y_0&Y_2} =\bbm {0&0\\0&0}, \end{multline*} which is

    \[ \bbm {2Y_0&Y_2-Y_1\\Y_2-Y_1&-2Y_0} +\bbm {0&0\\0&1} +\left (\gamma ^{-2}-\delta ^{-2}\right )\bbm {Y_1^2&Y_0Y_1\\Y_0Y_1&Y_0^2} =\bbm {0&0\\0&0}, \]

    which is

    \[ \bbm {2Y_0+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_1^2 &Y_2-Y_1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_0Y_1 \\Y_2-Y_1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_0Y_1 &-2Y_0+1+\left (\gamma ^{-2}-\delta ^{-2}\right )Y_0^2 }=\bbm {0&0\\0&0}. \]

    We first consider the special case where \(\gamma =\delta \). The above then reduces to

    \[ \bbm {2Y_0 &Y_2-Y_1 \\Y_2-Y_1 &-2Y_0+1 }=\bbm {0&0\\0&0}, \]

    the top-left corner and the bottom-right corner giving the contradiction \(1=0\), so \(\gamma =\delta \) is impossible. If \(\gamma <\delta \), then the top-left corner forces \(Y_0\leq 0\), whereas for \(Y_0\leq 0\) every term on the left-hand side of the bottom-right corner is non-negative and the constant term equals one, so that equation cannot hold; this is again a contradiction. Therefore we must have \(\gamma >\delta \).

    We return to the general case. From the top-left corner we see (using that \(\gamma >\delta \)) that \(Y_0\geq 0\). The bottom-right corner gives (picking the sign so that \(Y_0\geq 0\))

    \[ Y_0=\frac {-1+\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}{\delta ^{-2}-\gamma ^{-2}}. \]

    The top-left corner then gives

    \[ Y_1=\frac {\sqrt {-2+2\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}}{\delta ^{-2}-\gamma ^{-2}}. \]

    The off-diagonal entry then gives

    \[ Y_2=\sqrt {1+\delta ^{-2}-\gamma ^{-2}}~\frac {\sqrt {-2+2\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}}{\delta ^{-2}-\gamma ^{-2}}. \]

    Hence

    \[ Y=\bbm { \frac {\sqrt {-2+2\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}}{\delta ^{-2}-\gamma ^{-2}} & \frac {-1+\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}{\delta ^{-2}-\gamma ^{-2}} \\ \frac {-1+\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}{\delta ^{-2}-\gamma ^{-2}} & \sqrt {1+\delta ^{-2}-\gamma ^{-2}}~\frac {\sqrt {-2+2\sqrt {1+\delta ^{-2}-\gamma ^{-2}}}}{\delta ^{-2}-\gamma ^{-2}} }. \]
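    As a quick numerical check of this solution (assuming NumPy, with arbitrary test values \(\gamma >\delta >0\)), one can verify that the computed \(Y\) satisfies the Riccati equation (20.3) and is positive semi-definite.

```python
import numpy as np

delta, gamma = 0.5, 2.0                     # arbitrary test values with gamma > delta
d  = delta**-2 - gamma**-2
Y0 = (-1 + np.sqrt(1 + d)) / d
Y1 = np.sqrt(-2 + 2*np.sqrt(1 + d)) / d
Y2 = np.sqrt(1 + d) * Y1
Y  = np.array([[Y1, Y0], [Y0, Y2]])

A  = np.array([[0., 1.], [-1., 0.]])
B1 = np.array([[0., 0.], [1., 0.]]); C1 = np.array([[1., 0.], [0., 0.]])
C2 = np.array([[1., 0.]]);           D21 = np.array([[0., delta]])

res = (A @ Y + Y @ A.T + B1 @ B1.T + gamma**-2 * Y @ C1.T @ C1 @ Y
       - (Y @ C2.T + B1 @ D21.T) @ np.linalg.inv(D21 @ D21.T) @ (C2 @ Y + D21 @ B1.T))
print(np.max(np.abs(res)))          # ~0
print(np.linalg.eigvals(Y))         # both eigenvalues should be non-negative
```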

     □

Bibliography

The suspension system model is based on [Hrovat, 1997] (authored by an employee of Ford Research). The fixed structure optimization is based on [Scheibe and Smith, 2009]. The tape drive model is based on [Cherubini, 2022] (authored by an employee of IBM Research) and other articles by that author.

Further background on the material in Part I (signals and systems) can for example be found in [Bolton, 2015], and on the material in Part II (control) in [Trentelman et al., 2002] (for the material on controllability and observability see also [Logemann and Ryan, 2014]).

Bibliography

  • [Bolton, 2015]  Bolton, W. (2015). Mechatronics, sixth edition.

  • [Cherubini, 2022]  Cherubini, G. (2022). Advanced control systems for data storage on magnetic tape: A long-lasting success story. IEEE Control Systems Magazine, 42(4):8–11.

  • [Hrovat, 1997]  Hrovat, D. (1997). Survey of advanced suspension developments and related optimal control applications. Automatica, 33:1781–1817.

  • [Logemann and Ryan, 2014]  Logemann, H. and Ryan, E. P. (2014). Ordinary Differential Equations: Analysis, Qualitative Theory and Control.

  • [Scheibe and Smith, 2009]  Scheibe, F. and Smith, M. C. (2009). Analytical solutions for optimal ride comfort and tyre grip for passive vehicle suspensions. Vehicle System Dynamics, 47:1229–1252.

  • [Trentelman et al., 2002]  Trentelman, H. L., Stoorvogel, A. A., and Hautus, M. (2002). Control Theory for Linear Systems.

Mock exam

Question 3 considers input-state-output systems with a state \(x:[0,\infty )\to \mR ^n\), input \(u:[0,\infty )\to \mR ^{m}\) and output \(y:[0,\infty )\to \mR ^{p}\) described by

\[ \dot {x}=Ax+Bu,\qquad y=Cx+Du, \]

with the initial condition \(x(0)=x^0\) where \(x^0\in \mR ^n\) and

\[ A\in \mR ^{n\times n},~ B\in \mR ^{n\times m},~ C\in \mR ^{p\times n},~ D\in \mR ^{p\times m}. \]

 

Questions 2 and 4 consider input-state-output systems with a state \(x:[0,\infty )\to \mR ^n\), external input \(w:[0,\infty )\to \mR ^{m_1}\), control input \(u:[0,\infty )\to \mR ^{m_2}\), performance output \(z:[0,\infty )\to \mR ^{p_1}\) and measured output \(y:[0,\infty )\to \mR ^{p_2}\) described by

\[ \dot {x}=Ax+B_1w+B_2u,\qquad z=C_1x+D_{11}w+D_{12}u,\qquad y=C_2x+D_{21}w, \]

with the initial condition \(x(0)=x^0\) where \(x^0\in \mR ^n\) and

\begin{gather*} A\in \mR ^{n\times n},~ B_1\in \mR ^{n\times m_1},~ B_2\in \mR ^{n\times m_2},~ C_1\in \mR ^{p_1\times n},\\ D_{11}\in \mR ^{p_1\times m_1},~ D_{12}\in \mR ^{p_1\times m_2},~ C_2\in \mR ^{p_2\times n},~ D_{21}\in \mR ^{p_2\times m_1}. \end{gather*}  

Question 2 further considers an exosystem with state \(x_e:[0,\infty )\to \mR ^{n_e}\) described by

\[ \dot {x}_e=A_ex_e,\qquad w=C_ex_e, \]

with the initial condition \(x_e(0)=x_e^0\) where \(x_e^0\in \mR ^{n_e}\) and

\[ A_e\in \mR ^{n_e\times n_e},\qquad C_e\in \mR ^{m_1\times n_e}. \]

  • 1.

    (a)

    Determine whether or not the following matrix is asymptotically stable

    \[ \bbm {-4&-1&-2\\-2&-3&-2\\3&1&1}. \]

    Recall that by the Routh–Hurwitz criterion the polynomial \(a_3s^3+a_2s^2+a_1s+a_0\) with \(a_3>0\) is stable if and only if \(a_2,a_0>0\) and \(a_2a_1-a_0a_3>0\).

    (b)

    Consider the second order system

    \[ \ddot {q}(t)+4\dot {q}(t)+q(t)=u(t),\qquad y(t)=\dot {q}(t). \]

    (i)

    Determine whether this system is underdamped, critically damped or overdamped.

    (ii)

    Calculate the step response of this system.

    (iii)

    Determine the impulse response of this system.

    (iv)

    Determine the transfer function of this system.

    (v)

    Determine the frequency response of this system.

    (vi)

    The absolute value of the frequency response of this system is of the form \(\frac {\omega }{\sqrt {p(\omega )}}\) for a polynomial \(p\). Determine this polynomial \(p\).

    (c)

    For a system with impulse response \(h(t)=\e ^{-t}\), determine the output \(y\) for the input \(u(t)=\e ^{-2t}\) and zero initial condition.

  • 2. Consider the input-state-output system \(\dot {x}=Ax+B_2u\), \(x(0)=x^0\), \(z=C_1x+D_{11}w\) with

    \[ A=\bbm {-1&0\\0&-1},\qquad B_2=\bbm {1\\1},\qquad C_1=\bbm {1&0\\0&1},\qquad D_{11}=\bbm {-1&0\\0&-1}. \]

    Denote the transfer function from \(u\) to \(z\) by \(G\), i.e. \(G(s)=C_1(sI-A)^{-1}B_2\).

    (a)

    Show that \((A,B_2)\) is stabilizable.

    (b)

    Calculate the transfer function \(G\).

    (c)

    Show that \(G(0)\) is not surjective.

    (d)

    Consider the exosystem \(\dot {x}_e=A_ex_e\), \(x_e(0)=x_e^0\), \(w=C_ex_e\) with

    \[ A_e=0,\qquad C_e=\bbm {1\\1}. \]

    Solve the regulator equations \(A\Pi +B_2V=\Pi A_e\), \(C_1\Pi +D_{11}C_e=0\) for the unknowns \(\Pi \in \mR ^{2\times 1}\) and \(V\in \mR \).

    (e)

    Consider the exosystem \(\dot {x}_e=A_ex_e\), \(x_e(0)=x_e^0\), \(w=C_ex_e\) with

    \[ A_e=\bbm {0&0\\0&0},\qquad C_e=\bbm {1&0\\0&1}. \]

    (i)

    Show that the set of regulator equations \(A\Pi +B_2V=\Pi A_e\), \(C_1\Pi +D_{11}C_e=0\) in the unknowns \(\Pi \in \mR ^{2\times 2}\) and \(V\in \mR ^{1\times 2}\) does not have a solution.

    (ii)

    By directly considering the differential equations (i.e. not using any results from the notes) show that the regulation requirement cannot be met, i.e. show that there exist initial conditions \(x^0\in \mR ^2\) and \(x_e^0\in \mR ^2\) such that there does not exist a control \(u:[0,\infty )\to \mR \) such that \(\lim _{t\to \infty }z(t)=0\).

    (f)

    Briefly comment on the difference between the exosystems from (d) and (e), in particular with regards to which signals \(w\) they generate, and what the relevance of (c) is in the context of solvability of the regulator equations and satisfaction of the regulation requirement.

    (g)

    Consider the measured output \(y=C_2x+D_{21}w\).

    (i)

    Let

    \[ C_2=\bbm {1&0\\0&1},\qquad D_{21}=\bbm {0&0\\0&0}. \]

    Show that the pair \(\left (\sbm {A&0\\0&A_e},\bbm {C_2&D_{21}C_e}\right )\) is not detectable for either of the exosystems from (d) and (e).

    (ii)

    Let

    \[ C_2=\bbm {0&0\\0&0},\qquad D_{21}=\bbm {1&0\\0&1}. \]

    Show that the pair \(\left (\sbm {A&0\\0&A_e},\bbm {C_2&D_{21}C_e}\right )\) is detectable for both the exosystems from (d) and (e).

  • 3.

    (a)

    Consider the input-state system \(\dot {x}=Ax+Bu\) with

    \[ A=\bbm {-2&0\\0&-3},\qquad B=\bbm {1&0\\0&1}. \]

    Let \(T>0\).

    (i)

    Calculate the controllability Gramian \(Q_T:=\int _0^T \e ^{At}BB^*\e ^{A^*t}\,dt\).

    (ii)

    Calculate the control \(u(t)=B^*\e ^{A^*(T-t)}Q_T^{-1}x^1\) which steers the system from the origin to the state \(x^1=\sbm {1\\1}\) in time \(T\) and which minimizes the control cost \(\int _0^T \|u(t)\|^2\,dt\).

    (b)

    Let

    \[ A=\bbm {2&1\\0&1},\qquad B=\bbm {1\\1},\qquad p(s)=s^2+10s+5. \]

    Show that there exists a matrix \(F\in \mR ^{1\times 2}\) such that the characteristic polynomial of \(A+BF\) equals \(p\).

    (c)

    Let \(\alpha \in \mR \) and

    \[ A=\bbm {1&\alpha \\0&1},\qquad B=\bbm {1\\2}. \]

    Determine the reachable subspace.

    (d)

    Let \(T,K>0\). Consider the first order system \(T\dot {y}+y=Ku\).

    (i)

    Determine \(A,B\in \mR \) such that the first order system is of the standard state-output form \(\dot {x}=Ax+Bu\), \(y=Cx\) with \(C=1\).

    (ii)

    Determine the infinite-time observability Gramian by solving the observation Lyapunov equation \(A^*S+SA+C^*C=0\).

    (iii)

    Use the formula \(\ipd {SB}{B}=\int _0^\infty |h(t)|^2\,dt\) to determine \(\int _0^\infty |h(t)|^2\,dt\) where \(h\) is the impulse response of the first order system.

    (e)

    Recall that an observer for the state-output system \(\dot {x}(t)=Ax(t)\), \(y(t)=Cx(t)\) is an input-state system \(\dot {x}_c(t)=A_cx_c(t)+B_cy(t)\) such that for all \(x(0)\) and \(x_c(0)\) we have \(\lim _{t\to \infty }x(t)-x_c(t)=0\).

    Let

    \[ A=\bbm {1&0\\0&-2},\qquad C=\bbm {1&1}. \]

    Determine an observer for the corresponding state-output system.

  • 4. Consider the input-state-output system

    \[ \dot {x}=Ax+B_1w+B_2u,\qquad z=C_1x+D_{12}u,\qquad y=C_2x+D_{21}w, \]

    with

    \begin{gather*} A=\bbm {0&1\\-1&0},\quad B_1=\bbm {0&0\\1&0},\quad B_2=\bbm {0\\1},\quad C_1=\bbm {0&1\\0&0},\quad D_{12}=\bbm {0\\\varepsilon }, \\ C_2=\bbm {0&1},\qquad D_{21}=\bbm {0&\delta }, \end{gather*} where \(\delta , \varepsilon >0\).

    (a)

    Determine whether or not \((A,B_2)\) is stabilizable.

    (b)

    Determine whether or not \((A,C_2)\) is detectable.

    (c)

    Determine whether or not the Rosenbrock matrix

    \[ \bbm {sI-A&-B_2\\C_1&D_{12}}, \]

    is injective for all \(s\in \mC \) with \(\re (s)=0\).

    (d)

    Determine whether or not the Rosenbrock matrix

    \[ \bbm {sI-A&-B_1\\C_2&D_{21}}, \]

    is surjective for all \(s\in \mC \) with \(\re (s)=0\).

    (e)

    Determine the unique solution \(X\) of the algebraic Riccati equation

    \[ A^*X+XA+C_1^*C_1-(XB_2+C_1^*D_{12})(D_{12}^*D_{12})^{-1}(B_2^*X+D_{12}^*C_1)=0, \]

    such that \(A+B_2F\) is asymptotically stable where

    \[ F=-(D_{12}^*D_{12})^{-1}\left (D_{12}^*C_1+B_2^*X\right ). \]

    Also determine this \(F\) and \(A+B_2F\) and verify that \(A+B_2F\) is asymptotically stable.

Solution

  • 1.

    (a) The characteristic polynomial \(\det (sI-A)\) is \(s^3+6s^2+11s+6\). All coefficients are positive and we have \(a_2a_1-a_0a_3=66-6=60>0\). Therefore by the Routh–Hurwitz criterion the characteristic polynomial is stable, so the matrix is asymptotically stable.

    (bi) This is in the standard form with time constant \(T=1\) and damping ratio \(\zeta =2\). Since \(\zeta >1\), this system is overdamped. Alternatively, the characteristic polynomial is \(s^2+4s+1\) which has two distinct real roots (as seen in (bii)) so that the system is overdamped.

    (bii) We have to solve \(\ddot {q}(t)+4\dot {q}(t)+q(t)=1\). The homogeneous equation is \(\ddot {q}(t)+4\dot {q}(t)+q(t)=0\) which has characteristic equation \(s^2+4s+1=0\), which has roots \(s=-2\pm \sqrt {3}\). A particular solution is \(1\). Therefore the general solution is

    \[ q(t)=C_1\e ^{(-2+\sqrt {3})t}+C_2\e ^{(-2-\sqrt {3})t}+1. \]

    The initial condition is \(q(0)=\dot {q}(0)=0\). This gives

    \[ C_1+C_2+1=0,\qquad C_1(-2+\sqrt {3})+C_2(-2-\sqrt {3})=0. \]

    Solving this gives

    \[ C_1=-\frac {2+\sqrt {3}}{2\sqrt {3}},\qquad C_2=\frac {2-\sqrt {3}}{2\sqrt {3}}, \]

    so that

    \[ q(t)=-\frac {2+\sqrt {3}}{2\sqrt {3}}~\e ^{(-2+\sqrt {3})t}+\frac {2-\sqrt {3}}{2\sqrt {3}}~\e ^{(-2-\sqrt {3})t}+1. \]

    Therefore

    \[ y(t)=\dot {q}(t)=\frac {1}{2\sqrt {3}}~\e ^{(-2+\sqrt {3})t}-\frac {1}{2\sqrt {3}}~\e ^{(-2-\sqrt {3})t}. \]

    Hence the step response is \(\frac {1}{2\sqrt {3}}~\e ^{(-2+\sqrt {3})t}-\frac {1}{2\sqrt {3}}~\e ^{(-2-\sqrt {3})t}\).

    (biii) The impulse response \(h\) is the derivative of the step response, so

    \[ h(t)=\frac {-2+\sqrt {3}}{2\sqrt {3}}~\e ^{(-2+\sqrt {3})t}-\frac {-2-\sqrt {3}}{2\sqrt {3}}~\e ^{(-2-\sqrt {3})t}. \]

    (biv) From the equations we immediately see that \(G(s)=\frac {s}{s^2+4s+1}\).

    (bv) We have \(F(\omega )=G(i\omega )=\frac {i\omega }{4i\omega +1-\omega ^2}\).

    (bvi) We have

    \[ |F(\omega )|^2=\frac {\omega ^2}{16\omega ^2+(1-\omega ^2)^2},\qquad |F(\omega )|=\frac {\omega }{\sqrt {\omega ^4+14\omega ^2+1}}. \]

    Hence \(p(\omega )=\omega ^4+14\omega ^2+1\).
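    A short numerical check of (bv) and (bvi) (assuming NumPy): the modulus of the frequency response computed in (bv) should agree with \(\frac {\omega }{\sqrt {\omega ^4+14\omega ^2+1}}\) at sample frequencies.

```python
import numpy as np

w = np.linspace(0.1, 10.0, 50)                 # sample frequencies
F = 1j*w / (4j*w + 1 - w**2)                   # frequency response from (bv)
print(np.max(np.abs(np.abs(F) - w/np.sqrt(w**4 + 14*w**2 + 1))))   # ~0
```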

    (c) The variation of parameters formula is

    \[ y(t)=\int _0^t h(t-\theta )u(\theta )\,d\theta . \]

    In this particular case this is

    \[ y(t)=\int _0^t \e ^{\theta -t}\e ^{-2\theta }\,d\theta =\e ^{-t}\int _0^t \e ^{-\theta }\,d\theta =\e ^{-t}\left [-\e ^{-\theta }\right ]_{\theta =0}^t =\e ^{-t}-\e ^{-2t}. \]

  • 2.

    (a) Since \(A\) is stable we can take \(F=0\) and obtain that \(A+B_2F\) is stable.

    (b) We have

    \[ G(s)=C_1(sI-A)^{-1}B_2=\frac {1}{s+1}\bbm {1\\1}. \]

    (c) Since \(G(s)\) has more rows than columns, it is not surjective (this is true for all \(s\in \mC \backslash \{-1\}\), so in particular for \(s=0\)).

    (d) With the given matrices, the regulator equations are

    \[ -\Pi +\bbm {1\\1}V=\bbm {0\\0},\qquad \Pi =\bbm {1\\1}. \]

    From this we see that \(\Pi =\bbm {1\\1}\), \(V=1\) is the unique solution.

    (ei) With the given matrices, the regulator equations are

    \[ -\Pi +\bbm {V_1&V_2\\V_1&V_2}=\bbm {0&0\\0&0},\qquad \Pi =\bbm {1&0\\0&1}. \]

    We then get the contradictions that \(0=V_1=1\) and \(0=V_2=1\). Therefore no solution exists.

    (eii) We choose \(x_e^0=\sbm {0\\1}\). It then follows that \(w=x_e=\sbm {0\\1}\). Since \(z=x-w\), the condition \(\lim _{t\to \infty }z(t)=0\) then is \(\lim _{t\to \infty }x(t)=\sbm {0\\1}\). We have that \(\dot {x}_1=-x_1(t)+u(t)\) and \(\dot {x}_2=-x_2(t)+u(t)\), i.e. \(x_1\) and \(x_2\) satisfy the same differential equation (no matter what \(u\) is). If we choose \(x^0=\sbm {x_1^0\\x_2^0}\) with \(x_1^0=x_2^0\), then we will have \(x_1(t)=x_2(t)\) for all \(t\). In particular, we cannot have \(\lim _{t\to \infty }x_1(t)=0\) and \(\lim _{t\to \infty }x_2(t)=1\).

    (f) The exosystem in (d) generates \(w\) of the form \(\sbm {r\\r}\) for \(r\in \mR \) (take \(x_e^0=r\)) whereas the exosystem in (e) generates \(w\) of the form \(\sbm {r_1\\r_2}\) for \(r_1,r_2\in \mR \) (take \(x_e^0=\sbm {r_1\\r_2}\)). As we saw in (eii), if \(x_1^0=x_2^0\), then \(x_1=x_2\) and therefore the only possible limits of \(x\) are those generated by the exosystem in (d).

    If \(G(0)\) had been surjective, then the regulator equations would have been solvable (and therefore the output regulation problem would have been solvable) for any exosystem with zero as its only eigenvalue. Since \(G(0)\) is not surjective by (c), there is the possibility that the regulator equations are not solvable, which we saw in (e), but there is also the possibility that the regulator equations are solvable, which we saw in (d).

    (gi) We have

    \[ \bbm {A&0\\0&A_e}-\bbm {L_1\\L_2}\bbm {C_2&0}=\bbm {A-L_1C_2&0\\-L_2C_2&A_e}, \]

    which by the block structure is asymptotically stable if and only if both \(A-L_1C_2\) and \(A_e\) are. Since \(A_e\) is not asymptotically stable in either case, we have that the given pair is not detectable.

    (gii) We have

    \[ \bbm {A&0\\0&A_e}-\bbm {L_1\\L_2}\bbm {0&D_{21}C_e}=\bbm {A&-L_1C_e\\0&A_e-L_2C_e}, \]

    which by the block structure is asymptotically stable if and only if both \(A\) and \(A_e-L_2C_e\) are. Therefore the given pair is detectable if and only if \((A_e,C_e)\) is detectable. In both cases this is true; for the exosystem from (d) we could choose \(L_2=\bbm {1&1}\) so that \(A_e-L_2C_e=-2\) and for the exosystem from (e) we could choose \(L_2=\sbm {1&0\\0&1}\) which gives \(A_e-L_2C_e=\sbm {-1&0\\0&-1}\) both of which are clearly stable.

  • 3.

    (ai) We have

    \[ \e ^{At}=\bbm {\e ^{-2t}&0\\0&\e ^{-3t}}, \]

    so that

    \[ Q_T:=\int _0^T \e ^{At}BB^*\e ^{A^*t}\,dt =\int _0^T \bbm {\e ^{-4t}&0\\0&\e ^{-6t}}\,dt =\bbm {\frac {1}{4}(1-\e ^{-4T})&0\\0&\frac {1}{6}(1-\e ^{-6T})}. \]

    (aii) We have

    \begin{multline*} u(t)=B^*\e ^{A^*(T-t)}Q_T^{-1}x^1 =\bbm {\e ^{-2(T-t)}&0\\0&\e ^{-3(T-t)}}\bbm {\frac {4}{1-\e ^{-4T}}&0\\0&\frac {6}{1-\e ^{-6T}}}\bbm {1\\1} \\=\bbm {\e ^{-2(T-t)}\frac {4}{1-\e ^{-4T}}\\\e ^{-3(T-t)}\frac {6}{1-\e ^{-6T}}}. \end{multline*}
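    The steering property of this control can be checked numerically (assuming NumPy, with an arbitrary horizon \(T\)): evaluating \(x(T)=\int _0^T \e ^{A(T-s)}Bu(s)\,ds\) with the trapezoidal rule should give a vector close to \(x^1=\sbm {1\\1}\).

```python
import numpy as np

T  = 2.0                                        # arbitrary horizon
QT = np.diag([(1 - np.exp(-4*T))/4, (1 - np.exp(-6*T))/6])
x1 = np.array([1., 1.])

def u(t):                                       # the control from (aii)
    return np.diag([np.exp(-2*(T - t)), np.exp(-3*(T - t))]) @ np.linalg.solve(QT, x1)

# x(T) = \int_0^T e^{A(T-s)} B u(s) ds with B = I, via the trapezoidal rule
s = np.linspace(0.0, T, 2001)
vals = np.array([np.diag([np.exp(-2*(T - si)), np.exp(-3*(T - si))]) @ u(si) for si in s])
xT = np.sum(0.5*(vals[:-1] + vals[1:]), axis=0) * (s[1] - s[0])
print(xT)                                       # should be close to [1, 1]
```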

    (b) Since \(p\) is monic, such an \(F\) exists if the pair \((A,B)\) is controllable. The Kalman controllability matrix is

    \[ \bbm {B&AB}=\bbm {1&3\\1&1}. \]

    Since this matrix is invertible and therefore surjective, the pair \((A,B)\) is controllable.

    One could also construct an explicit \(F\) which works (either by hand or using Ackermann’s formula); the unique one is \(F=\bbm {\frac {-29}{2}&\frac {3}{2}}\).

    (c) The Kalman controllability matrix is

    \[ \bbm {B&AB}=\bbm {1&1+2\alpha \\2&2}. \]

    If \(\alpha \neq 0\), then this matrix is invertible so that the reachable subspace is \(\mR ^2\). If \(\alpha =0\), then \(\bbm {B&AB}=\sbm {1&1\\2&2}\) which has as image the span of \(\sbm {1\\2}\) which therefore is the reachable subspace.

    (di) We have \(A=\frac {-1}{T}\) and \(B=\frac {K}{T}\).

    (dii) The observation Lyapunov equation is

    \[ \frac {-2}{T}S+1=0, \]

    which gives \(S=\frac {T}{2}\).

    (diii) We have

    \[ \ipd {SB}{B}=SB^2=\frac {T}{2}\frac {K^2}{T^2}=\frac {K^2}{2T}. \]

    (e) If \(L\) is such that \(A-LC\) is asymptotically stable, then \(A_c=A-LC\), \(B_c=L\) is an observer. We have

    \[ A-LC=\bbm {1&0\\0&-2}-\bbm {L_1\\L_2}\bbm {1&1}=\bbm {1-L_1&-L_1\\-L_2&-2-L_2}. \]

    Choosing \(L_1=2\) and \(L_2=0\) (i.e. \(L=\sbm {2\\0}\)) this is

    \[ A-LC=\bbm {-1&-2\\0&-2}, \]

    which by the upper-triangular structure has eigenvalues \(-1\) and \(-2\) and is therefore stable. Therefore an observer is the input-state system with

    \[ A_c=\bbm {-1&-2\\0&-2},\qquad B_c=\bbm {2\\0}. \]

  • 4.

    (a) The Kalman controllability matrix is

    \[ \bbm {B_2&AB_2}=\bbm {0&1\\1&0}, \]

    which is invertible and therefore surjective. Hence the pair \((A,B_2)\) is controllable and therefore stabilizable.

    (b) The Kalman observability matrix is

    \[ \bbm {C_2\\C_2A}=\bbm {0&1\\-1&0}, \]

    which is invertible and therefore injective. Hence the pair \((A,C_2)\) is observable and therefore detectable.

    (c) The Rosenbrock matrix is

    \[ \bbm {s&-1&0\\1&s&-1\\0&1&0\\0&0&\varepsilon }. \]

    The matrix consisting of the last three rows is upper-triangular with nonzero elements on the diagonal and is therefore invertible. Hence the Rosenbrock matrix is injective for all \(s\in \mC \) (in particular for all \(s\in \mC \) with \(\re (s)=0\)).

    (d) The Rosenbrock matrix is

    \[ \bbm {s&-1&0&0\\1&s&-1&0\\0&1&0&\delta }. \]

    The matrix consisting of the last three columns is lower-triangular with nonzero elements on the diagonal and is therefore invertible. Hence the Rosenbrock matrix is surjective for all \(s\in \mC \) (in particular for all \(s\in \mC \) with \(\re (s)=0\)).

    (e) The Riccati equation is (using that \(X\) is symmetric and that \(C_1^*D_{12}=0\) and \(D_{12}^*D_{12}=\varepsilon ^2\))

    \begin{multline*} \bbm {0&-1\\1&0}\bbm {X_1&X_0\\X_0&X_2} +\bbm {X_1&X_0\\X_0&X_2}\bbm {0&1\\-1&0} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} \\ -\varepsilon ^{-2}\bbm {X_1&X_0\\X_0&X_2}\bbm {0\\1}\bbm {0&1}\bbm {X_1&X_0\\X_0&X_2}=\bbm {0&0\\0&0}, \end{multline*} which is

    \[ \bbm {-2X_0&X_1-X_2\\X_1-X_2&2X_0} +\bbm {0&0\\0&1} -\varepsilon ^{-2}\bbm {X_0^2&X_0X_2\\X_0X_2&X_2^2}=\bbm {0&0\\0&0}, \]

    which is

    \[ \bbm { -2X_0-\varepsilon ^{-2}X_0^2 &X_1-X_2-\varepsilon ^{-2}X_0X_2 \\X_1-X_2-\varepsilon ^{-2}X_0X_2 &2X_0+1-\varepsilon ^{-2}X_2^2 }=\bbm {0&0\\0&0}. \]

    The top-left corner gives \(X_0=0\) or \(X_0=-2\varepsilon ^2\).

    We first consider the case \(X_0=-2\varepsilon ^2\). The off-diagonal entry then gives

    \[ X_1+X_2=0. \]

    Since \(X_1,X_2\geq 0\) (because \(X\) is positive semi-definite), this implies \(X_1=X_2=0\). The determinant of \(X\) then equals \(-X_0^2\) which is negative which contradicts that \(X\) is positive semi-definite. Therefore this case must be excluded.

    We now consider the case \(X_0=0\). Then the bottom-right corner gives \(X_2=\varepsilon \) and finally the off-diagonal entry gives \(X_1=\varepsilon \). It follows that

    \[ X=\varepsilon \bbm {1&0\\0&1}. \]

    The optimal feedback then is (again using that \(C_1^*D_{12}=0\) and \(D_{12}^*D_{12}=\varepsilon ^2\)):

    \[ F=-\varepsilon ^{-2}\bbm {0&1}\varepsilon \bbm {1&0\\0&1} =\bbm {0&-\varepsilon ^{-1}}. \]

    The closed-loop matrix then is

    \[ A+B_2F=\bbm {0&1\\-1&0}+\bbm {0\\1}\bbm {0&-\varepsilon ^{-1}} =\bbm {0&1\\-1&-\varepsilon ^{-1}}. \]

    The characteristic polynomial of \(A+B_2F\) is \(s^2+\varepsilon ^{-1}s+1\) which is stable (by Routh–Hurwitz).
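    As an optional numerical check of (e) (assuming NumPy, with an arbitrary test value of \(\varepsilon \)): the residual of the Riccati equation should vanish for \(X=\varepsilon I\), and \(A+B_2F\) should have its eigenvalues in the open left half-plane.

```python
import numpy as np

eps = 0.5                                       # arbitrary test value
A  = np.array([[0., 1.], [-1., 0.]]); B2 = np.array([[0.], [1.]])
C1 = np.array([[0., 1.], [0., 0.]]); D12 = np.array([[0.], [eps]])

X = eps * np.eye(2)
res = (A.T @ X + X @ A + C1.T @ C1
       - (X @ B2 + C1.T @ D12) @ np.linalg.inv(D12.T @ D12) @ (B2.T @ X + D12.T @ C1))
F = -np.linalg.solve(D12.T @ D12, D12.T @ C1 + B2.T @ X)
print(np.max(np.abs(res)))                      # ~0
print(F, np.linalg.eigvals(A + B2 @ F))         # F = [0, -1/eps]; eigenvalues in the open left half-plane
```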

Mock exam number 2

  • 1.

    (a)

    Determine whether or not the following matrix is asymptotically stable

    \[ \bbm {-2&-4&0\\1&-6&-1\\2&-4&-4}. \]

    Recall that by the Routh–Hurwitz criterion the polynomial \(a_3s^3+a_2s^2+a_1s+a_0\) with \(a_3>0\) is stable if and only if \(a_2,a_0>0\) and \(a_2a_1-a_0a_3>0\).

    (b)

    Consider the second order system

    \[ \ddot {q}(t)+2\dot {q}(t)+5q(t)=5u(t),\qquad y(t)=\dot {q}(t). \]

    (i)

    Determine whether this system is underdamped, critically damped or overdamped.

    (ii)

    Calculate the step response of this system.

    (iii)

    Determine the impulse response of this system.

    (iv)

    Determine the transfer function of this system.

    (v)

    Determine the frequency response of this system.

    (c)

    For a system with transfer function \(\frac {1}{s+1}\), determine the step response.

  • 2. Consider the input-state-output system \(\dot {x}=Ax+B_1w+B_2u\), \(x(0)=x^0\), \(z=C_1x+D_{11}w\) with

    \[ A=1,\qquad B_1=2,\qquad B_2=1,\qquad C_1=1,\qquad D_{11}=1. \]

    Denote the transfer function from \(u\) to \(z\) by \(G\), i.e. \(G(s)=C_1(sI-A)^{-1}B_2\).

    (a)

    Show that \((A,B_2)\) is stabilizable.

    (b)

    Calculate the transfer function \(G\).

    (c)

    Determine whether or not \(G(0)\) is surjective.

    (d)

    Consider the exosystem \(\dot {x}_e=A_ex_e\), \(x_e(0)=x_e^0\), \(w=C_ex_e\) with

    \[ A_e=0,\qquad C_e=1. \]

    Solve the regulator equations \(A\Pi +B_1C_e+B_2V=\Pi A_e\), \(C_1\Pi +D_{11}C_e=0\) for the unknowns \(\Pi \in \mR \) and \(V\in \mR \).

    (e)

    In the notes, a connection is made between \(G(0)\) being surjective and solvability of the regulator equations. How do your answers to (c) and (d) relate to that?

    (f)

    Choose a stabilizing feedback \(F_2\in \mR \) for \((A,B_2)\) and let \(\Pi \) and \(V\) be as in (d). By solving the differential equations (i.e. not using any results from the notes) show that the state feedback \(u=(V-F_2\Pi )x_e+F_2x\) solves the full information output regulation and disturbance rejection problem, i.e. is such that

    • • for all \(x^0,x_e^0\in \mR \) we have \(\lim _{t\to \infty }z(t)=0\);

    • • for all \(x^0\in \mR \) and \(x_e^0=0\) we have \(\lim _{t\to \infty }x(t)=0\).

    (g)

    Consider the measured output \(y=C_2x+D_{21}w\) with

    \[ C_2=1,\qquad D_{21}=0. \]

    (i)

    Show that the pair \(\left (\sbm {A&B_1C_e\\0&A_e},\bbm {C_2&D_{21}C_e}\right )\) is detectable.

    (ii)

    Recall that if \(F_2\) is a stabilizing feedback for \((A,B_2)\) and \(L\) is a stabilizing output injection for \(\left (\sbm {A&B_1C_e\\0&A_e},\bbm {C_2&D_{21}C_e}\right )\), then the controller

    \[ \dot {x}_c=A_cx_c+B_cy,\quad u=C_cx_c, \]

    with

    \begin{gather*} A_c=\bbm {A&B_1C_e\\0&A_e}-L\bbm {C_2&D_{21}C_e}+\bbm {B_2\\0}\bbm {F_2&V-F_2\Pi },\\ B_c=L,\qquad C_c=\bbm {F_2&V-F_2\Pi }, \end{gather*} where \(\Pi \) and \(V\) are solutions of the regulator equations, solves the measurement feedback output regulation and disturbance rejection problem.
    Determine a specific controller which solves the measurement feedback output regulation and disturbance rejection problem for the given state-input-output system.

  • 3.

    (a)

    Consider the input-state system \(\dot {x}=Ax+Bu\) with

    \[ A=\bbm {-1&0\\0&-1},\qquad B=\bbm {1\\1}. \]

    Determine whether or not the system is controllable.

    (b)

    Consider the input-state system \(\dot {x}=Ax+Bu\) with

    \[ A=\bbm {1&0&0\\0&1&2\\0&1&3},\qquad B=\bbm {1&0\\0&1\\0&0}. \]

    Determine whether or not the system is controllable.

    (c)

    Consider the state-output system \(\dot {x}=Ax\), \(y=Cx\) with

    \[ A=\bbm {-3&1\\0&5},\qquad C=\bbm {1&0}. \]

    Determine whether or not the system is observable.

    (d)

    Let \(\alpha \in \mR \). Consider the input-state system \(\dot {x}=Ax+Bu\) with

    \[ A=\bbm {-1&0\\0&\alpha },\qquad B=\bbm {2\\1}. \]

    (i)

    Determine all \(\alpha \in \mR \) for which the system is controllable.

    (ii)

    Determine the reachable subspace.

    (iii)

    Determine all \(\alpha \in \mR \) for which the system is stabilizable.

    (e)

    Consider the input-state system \(\dot {x}=Ax+Bu\) with

    \[ A=\bbm {0&0\\0&0},\qquad B=\bbm {1\\1}. \]

    Show using the definition of controllability only that this system is not controllable, i.e. is such that there exists a \(T>0\) and \(x^0,x^1\in \mR ^2\) such that there does not exist a control \(u:[0,T]\to \mR \) with \(x(0)=x^0\) and \(x(T)=x^1\).

  • 4.

    Consider the input-state-output system

    \[ \dot {x}=Ax+B_1w+B_2u,\qquad z=C_1x+D_{12}u,\qquad y=C_2x+D_{21}w, \]

    with

    \begin{gather*} A=\bbm {0&1\\-4&0},\quad B_1=\bbm {0&0\\1&0},\quad B_2=\bbm {0\\1},\quad C_1=\bbm {0&1\\0&0},\quad D_{12}=\bbm {0\\1}, \\ C_2=\bbm {0&1},\quad D_{21}=\bbm {0&1}. \end{gather*}

    (a)

    Determine whether or not \((A,B_2)\) is stabilizable.

    (b)

    Determine whether or not \((A,C_2)\) is detectable.

    (c)

    Determine whether or not the Rosenbrock matrix

    \[ \bbm {sI-A&-B_2\\C_1&D_{12}}, \]

    is injective for all \(s\in \mC \) with \(\re (s)=0\).

    (d)

    Determine whether or not the Rosenbrock matrix

    \[ \bbm {sI-A&-B_1\\C_2&D_{21}}, \]

    is surjective for all \(s\in \mC \) with \(\re (s)=0\).

    (e)

    Determine all \(\gamma >0\) for which there exist symmetric positive semidefinite solutions \(X\) and \(Y\) of the algebraic Riccati equations

    \begin{gather*} \hspace *{-5mm} A^*X+XA+C_1^*C_1+\gamma ^{-2}XB_1B_1^*X-(XB_2+C_1^*D_{12})(D_{12}^*D_{12})^{-1}(B_2^*X+D_{12}^*C_1)=0,\notag \\ \hspace *{-5mm} AY+YA^*+B_1B_1^*+\gamma ^{-2}YC_1^*C_1Y-(YC_2^*+B_1D_{21}^*)(D_{21}D_{21}^*)^{-1}(C_2Y+D_{21}B_1^*)=0, \end{gather*} such that the spectral radius condition \(r(XY)<\gamma ^2\) holds and such that

    \begin{gather*} A+\gamma ^{-2}B_1B_1^*X-B_2(D_{12}^*D_{12})^{-1}\left (D_{12}^*C_1+B_2^*X\right ), \\ A+\gamma ^{-2}YC_1^*C_1-(B_1D_{21}^*+YC_2^*)(D_{21}D_{21}^*)^{-1}C_2, \end{gather*} are asymptotically stable. Also, for a given \(\gamma \) for which we have existence, determine the unique \(X\) and \(Y\) with the above properties.

Solution

  • 1.

    (a) The characteristic polynomial \(\det (sI-A)\) is \(s^3+12s^2+44s+48\). All coefficients are positive and \(a_2a_1-a_0a_3=12\cdot 44-48=480>0\). Therefore the matrix is asymptotically stable.

    (bi) This is in the standard form \(\ddot {q}+2\zeta \omega _0\dot {q}+\omega _0^2q=K\omega _0^2u\) with natural frequency \(\omega _0=\sqrt {5}\) and damping ratio \(\zeta =\frac {1}{\sqrt {5}}\). Since \(\zeta \in (0,1)\), this system is underdamped. Alternatively, the characteristic polynomial is \(s^2+2s+5\) which has a pair of complex conjugate roots (as seen in (bii)) so that the system is underdamped.

    (bii) We have to solve \(\ddot {q}(t)+2\dot {q}(t)+5q(t)=5\). The homogeneous equation is \(\ddot {q}(t)+2\dot {q}(t)+5q(t)=0\) which has characteristic equation \(s^2+2s+5=0\), which has roots \(s=-1\pm 2i\). A particular solution is \(1\). Therefore the general solution is

    \[ q(t)=C_1\e ^{-t}\cos (2t)+C_2\e ^{-t}\sin (2t)+1. \]

    We then have

    \[ \dot {q}(t)=C_1\left (-2\e ^{-t}\sin (2t)-\e ^{-t}\cos (2t)\right ) +C_2\left (2\e ^{-t}\cos (2t)-\e ^{-t}\sin (2t)\right ). \]

    The initial condition is \(q(0)=\dot {q}(0)=0\). This gives

    \[ C_1+1=0,\qquad -C_1+2C_2=0. \]

    Solving this gives

    \[ C_1=-1,\qquad C_2=\frac {-1}{2}, \]

    so that

    \[ q(t)=-\e ^{-t}\cos (2t)-\frac {1}{2}\e ^{-t}\sin (2t)+1. \]

    The output for this is

    \begin{align*} y(t)&=\dot {q}(t)=-\left (-2\e ^{-t}\sin (2t)-\e ^{-t}\cos (2t)\right ) -\frac {1}{2}\left (2\e ^{-t}\cos (2t)-\e ^{-t}\sin (2t)\right ) \\&=\frac {5}{2}\e ^{-t}\sin (2t), \end{align*} which is therefore the step response.

    (biii) The impulse response \(h\) is the derivative of the step response, so

    \[ h(t)=\frac {5}{2}\e ^{-t}\left (2\cos (2t)-\sin (2t)\right ). \]

    (biv) From the equations we immediately see that \(G(s)=\frac {5s}{s^2+2s+5}\).

    (bv) We have \(F(\omega )=G(i\omega )=\frac {5i\omega }{2i\omega +5-\omega ^2}\).

    (c) The impulse response is the inverse Laplace transform of the transfer function and therefore equals \(\e ^{-t}\). The step response is the anti-derivative which is zero in zero of the impulse response and therefore equals \(1-\e ^{-t}\).

  • 2.

    (a) We can choose \(F_2=-2\) to make \(A+B_2F_2=-1\) which is stable.

    (b) We have

    \[ G(s)=C_1(sI-A)^{-1}B_2=\frac {1}{s-1}. \]

    (c) We have \(G(0)=-1\) which is surjective.

    (d) With the given matrices, the regulator equations are

    \[ \Pi +2+V=0,\qquad \Pi +1=0. \]

    This gives the solution \(\Pi =-1\), \(V=-1\).

    (e) The notes state that surjectivity of \(G(0)\) is a sufficient condition for solvability of the regulator equations. Therefore by (c) we know that a solution in (d) exists before calculating it.

    (f) As in (a), we choose \(F_2=-2\). The differential equations then are

    \[ \dot {x}=x+2w+u,\qquad z=x+w,\qquad \dot {x}_e=0,\qquad w=x_e,\qquad u=-3x_e-2x. \]

    Eliminating \(u\) and \(w\) gives

    \[ \dot {x}=-x-x_e,\qquad \dot {x}_e=0,\qquad z=x+x_e. \]

    Solving this with the initial conditions \(x(0)=x^0\) and \(x_e(0)=x_e^0\) gives

    \[ x_e=x_e^0,\qquad x=-x_e^0+(x^0+x_e^0)\e ^{-t},\qquad z=(x^0+x_e^0)\e ^{-t}. \]

    We then see that \(\lim _{t\to \infty }z(t)=0\) for all \(x^0\) and \(x_e^0\) and we see that when \(x_e^0=0\) we have \(x=x^0\e ^{-t}\) so that in this case \(\lim _{t\to \infty }x(t)=0\) for all \(x^0\).

    (gi) We have

    \[ \bbm {A&B_1C_e\\0&A_e}=\bbm {1&2\\0&0},\qquad \bbm {C_2&D_{21}C_e}=\bbm {1&0}. \]

    With \(L=\sbm {L_1\\L_2}\) we then have

    \[ \bbm {A&B_1C_e\\0&A_e}-L\bbm {C_2&D_{21}C_e} =\bbm {1&2\\0&0}-\bbm {L_1\\L_2}\bbm {1&0} =\bbm {1-L_1&2\\-L_2&0}. \]

    The characteristic polynomial of this is \(s^2+(-1+L_1)s+2L_2\) which is stable if and only if \(-1+L_1>0\) and \(L_2>0\). We can therefore take \(L_1=2\) and \(L_2=1\), i.e. \(L=\sbm {2\\1}\).

    (gii) With the formulas obtained earlier, we have

    \[ A_c=\bbm {-1&2\\-1&0}+\bbm {1\\0}\bbm {-2&-3}=\bbm {-3&-1\\-1&0},\qquad B_c=\bbm {2\\1},\qquad C_c=\bbm {-2&-3}. \]

  • 3.

    (a) We have

    \[ \bbm {B&AB}=\bbm {1&-1\\1&-1}. \]

    Since its second column is \(-1\) times its first, this matrix is not surjective. Therefore the system is not controllable.

    (b) The Hautus controllability matrix is

    \[ \bbm {sI-A&B}=\bbm {s-1&0&0&1&0\\0&s-1&-2&0&1\\0&-1&s-3&0&0}. \]

    Selecting the fourth, fifth and second columns gives the matrix

    \[ \bbm {1&0&0\\0&1&s-1\\0&0&-1}, \]

    which is upper-triangular with nonzero elements on the diagonal and is therefore invertible and therefore surjective. Therefore the Hautus controllability matrix is surjective for all \(s\in \mC \). It follows that the system is controllable.

    (c) We have

    \[ \bbm {C\\CA}=\bbm {1&0\\-3&1}. \]

    which is lower-triangular with nonzero elements on the diagonal and is therefore invertible and therefore injective. It follows that the system is observable.

    (di) We have

    \[ \bbm {B&AB}=\bbm {2&-2\\1&\alpha } \]

    The determinant of this matrix is \(2\alpha +2\), so that the matrix is invertible precisely when \(\alpha \neq -1\). Hence the matrix is surjective if and only if \(\alpha \neq -1\) so that the system is controllable if and only if \(\alpha \neq -1\).

    (dii) When \(\alpha \neq -1\) the system is controllable and therefore the reachable subspace is \(\mR ^2\). When \(\alpha =-1\), the image of the Kalman controllability matrix is the span of \(\sbm {2\\1}\) and therefore this is the reachable subspace.

    (diii) For \(\alpha \neq -1\) the system is controllable and therefore stabilizable. For \(\alpha =-1\), the only eigenvalue of \(A\) is \(-1\) (with multiplicity 2) and therefore \(A\) is stable. Therefore \((A,B)\) is stabilizable for all \(\alpha \in \mR \).

    (e) The differential equations are \(\dot {x}_1=u\), \(\dot {x}_2=u\). Define \(z:=x_1-x_2\). Then \(\dot {z}=\dot {x}_1-\dot {x}_2=u-u=0\). Choose \(x^0=\sbm {0\\0}\) so that \(z(0)=0\). Then \(z=0\) no matter what \(u\) is. Choose \(x^1=\sbm {1\\0}\). The condition \(x(T)=x^1\) then gives \(z(T)=1\), which is false for any \(T\). So we could in fact choose any \(T>0\).

  • 4.

    (a) The Kalman controllability matrix is

    \[ \bbm {B_2&AB_2}=\bbm {0&1\\1&0}, \]

    which is invertible and therefore surjective. Hence the pair \((A,B_2)\) is controllable and therefore stabilizable.

    (b) The Kalman observability matrix is

    \[ \bbm {C_2\\C_2A}=\bbm {0&1\\-4&0}, \]

    which is invertible and therefore injective. Hence the pair \((A,C_2)\) is observable and therefore detectable.

    (c) The Rosenbrock matrix is

    \[ \bbm {s&-1&0\\4&s&-1\\0&1&0\\0&0&1}. \]

    The matrix consisting of the last three rows is upper-triangular with nonzero elements on the diagonal and is therefore invertible. Hence the Rosenbrock matrix is injective for all \(s\in \mC \) (in particular for all \(s\in \mC \) with \(\re (s)=0\)).

    (d) The Rosenbrock matrix is

    \[ \bbm {s&-1&0&0\\4&s&-1&0\\0&1&0&1}. \]

    The matrix consisting of the last three columns is lower-triangular with nonzero elements on the diagonal and is therefore invertible. Hence the Rosenbrock matrix is surjective for all \(s\in \mC \) (in particular for all \(s\in \mC \) with \(\re (s)=0\)).

    (e) The \(X\) Riccati equation is (using that \(X\) is symmetric and that \(C_1^*D_{12}=0\) and \(D_{12}^*D_{12}=1\))

    \begin{multline*} \bbm {0&-4\\1&0}\bbm {X_1&X_0\\X_0&X_2} +\bbm {X_1&X_0\\X_0&X_2}\bbm {0&1\\-4&0} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} \\+\gamma ^{-2}\bbm {X_1&X_0\\X_0&X_2}\bbm {0&0\\1&0}\bbm {0&1\\0&0}\bbm {X_1&X_0\\X_0&X_2} -\bbm {X_1&X_0\\X_0&X_2}\bbm {0\\1}\bbm {0&1}\bbm {X_1&X_0\\X_0&X_2}=\bbm {0&0\\0&0}, \end{multline*} which is

    \[ \bbm {-8X_0&X_1-4X_2\\X_1-4X_2&2X_0} +\bbm {0&0\\0&1} +\left (\gamma ^{-2}-1\right )\bbm {X_0^2&X_0X_2\\X_0X_2&X_2^2}=\bbm {0&0\\0&0}, \]

    which is

    \[ \bbm { -8X_0+\left (\gamma ^{-2}-1\right )X_0^2 &X_1-4X_2+\left (\gamma ^{-2}-1\right )X_0X_2 \\X_1-4X_2+\left (\gamma ^{-2}-1\right )X_0X_2 &2X_0+1+\left (\gamma ^{-2}-1\right )X_2^2 }=\bbm {0&0\\0&0}. \]

    The top-left corner gives \(X_0=0\) or \(X_0=8\left (\gamma ^{-2}-1\right )^{-1}\).

    We first consider the case \(X_0=8\left (\gamma ^{-2}-1\right )^{-1}\). The off-diagonal entry then gives

    \[ X_1+4X_2=0. \]

    Since \(X_1,X_2\geq 0\) (because \(X\) is symmetric positive semi-definite), this implies \(X_1=X_2=0\). The determinant of \(X\) then equals \(-X_0^2\) which is negative which contradicts that \(X\) is symmetric positive semi-definite. Therefore this case must be excluded.

    We now consider the case \(X_0=0\). Then the bottom-right corner gives that we must have

    \[ \gamma >1, \]

    and that \(X_2=\left (1-\gamma ^{-2}\right )^{-1/2}\) and finally the off-diagonal entry gives that \(X_1=4\left (1-\gamma ^{-2}\right )^{-1/2}\). It follows that

    \[ X=\left (1-\gamma ^{-2}\right )^{-1/2}\bbm {4&0\\0&1}. \]

    The feedback then is (again using that \(C_1^*D_{12}=0\) and \(D_{12}^*D_{12}=1\)):

    \[ F=-\bbm {0&1}\left (1-\gamma ^{-2}\right )^{-1/2}\bbm {4&0\\0&1} =\bbm {0&-\left (1-\gamma ^{-2}\right )^{-1/2}}, \]

    so that

    \[ B_2F=\bbm {0&0\\0&-\left (1-\gamma ^{-2}\right )^{-1/2}}. \]

    We further have

    \[ \gamma ^{-2}B_1B_1^*X=\gamma ^{-2}\bbm {0&0\\0&\left (1-\gamma ^{-2}\right )^{-1/2}}, \]

    so that

    \[ B_2F+\gamma ^{-2}B_1B_1^*X=(-1+\gamma ^{-2})\bbm {0&0\\0&\left (1-\gamma ^{-2}\right )^{-1/2}}, \]

    and therefore

    \begin{align*} A+B_2F+\gamma ^{-2}B_1B_1^*X &=\bbm {0&1\\-4&0} +\bbm {0&0\\0& \left (\gamma ^{-2}-1\right )\left (1-\gamma ^{-2}\right )^{-1/2}} \\ &=\bbm {0&1\\-4& -\left (1-\gamma ^{-2}\right )^{1/2}}, \end{align*} which is stable under the condition \(\gamma >1\) which we already saw above (it is the first order form of a second order system with positive coefficients).

    The \(Y\) Riccati equation is (using that \(Y\) is symmetric, that \(B_1D_{21}^*=0\) and \(D_{21}D_{21}^*=1\))

    \begin{multline*} \bbm {0&1\\-4&0}\bbm {Y_1&Y_0\\Y_0&Y_2} +\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0&-4\\1&0} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} \\ +\gamma ^{-2}\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0&0\\1&0}\bbm {0&1\\0&0}\bbm {Y_1&Y_0\\Y_0&Y_2} -\bbm {Y_1&Y_0\\Y_0&Y_2}\bbm {0\\1}\bbm {0&1}\bbm {Y_1&Y_0\\Y_0&Y_2} =\bbm {0&0\\0&0}, \end{multline*} which is

    \[ \bbm {2Y_0&Y_2-4Y_1\\Y_2-4Y_1&-8Y_0} +\bbm {0&0\\0&1} +\left (\gamma ^{-2}-1\right )\bbm {Y_0^2&Y_0Y_2\\Y_0Y_2&Y_2^2} =\bbm {0&0\\0&0}, \]

    which is

    \[ \bbm {2Y_0+\left (\gamma ^{-2}-1\right )Y_0^2 &Y_2-4Y_1+\left (\gamma ^{-2}-1\right )Y_0Y_2 \\Y_2-4Y_1+\left (\gamma ^{-2}-1\right )Y_0Y_2 &-8Y_0+1+\left (\gamma ^{-2}-1\right )Y_2^2 }=\bbm {0&0\\0&0}. \]

    From the top-left corner we obtain \(Y_0=0\) or \(Y_0=2\left (1-\gamma ^{-2}\right )^{-1}\).

    We first consider the case \(Y_0=2\left (1-\gamma ^{-2}\right )^{-1}\). The off-diagonal entry then gives

    \[ -4Y_1-Y_2=0, \]

    which, since \(Y_1,Y_2\geq 0\) (the diagonal entries of a symmetric positive semi-definite matrix are non-negative), gives \(Y_1=Y_2=0\). The determinant of \(Y\) then equals \(-Y_0^2<0\), which contradicts that \(Y\) is symmetric positive semi-definite. We can therefore ignore this case.

    We now consider the case \(Y_0=0\). The bottom-right corner then gives that we must have \(\gamma >1\) and that \(Y_2=\left (1-\gamma ^{-2}\right )^{-1/2}\) and the off-diagonal entry gives \(Y_1=\left (1-\gamma ^{-2}\right )^{-1/2}\frac {1}{4}\). Hence

    \[ Y=\left (1-\gamma ^{-2}\right )^{-1/2} \bbm {\frac {1}{4}&0\\0&1}. \]

    We then have (using again that \(B_1D_{21}^*=0\) and \(D_{21}D_{21}^*=1\)):

    \begin{gather*} A+\gamma ^{-2}YC_1^*C_1-(B_1D_{21}^*+YC_2^*)(D_{21}D_{21}^*)^{-1}C_2 =\bbm {0&1\\-4&0}+\left (\gamma ^{-2}-1\right )Y\bbm {0&0\\0&1} \\ =\bbm {0&1\\-4&-\left (1-\gamma ^{-2}\right )^{1/2}}, \end{gather*} which is stable.

    We have

    \[ XY=\left (1-\gamma ^{-2}\right )^{-1/2}\bbm {4&0\\0&1}\left (1-\gamma ^{-2}\right )^{-1/2} \bbm {\frac {1}{4}&0\\0&1}=\left (1-\gamma ^{-2}\right )^{-1}\bbm {1&0\\0&1}, \]

    so that the spectral radius condition is \(\left (1-\gamma ^{-2}\right )^{-1}<\gamma ^2\), which can be re-written as \(\gamma >\sqrt {2}\). Since \(\gamma >\sqrt {2}\) implies \(\gamma >1\), the set of \(\gamma >0\) for which all the required conditions hold is therefore \(\gamma >\sqrt {2}\).
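    As an optional numerical check of (e) (assuming NumPy, with a test value \(\gamma >\sqrt {2}\)): the displayed \(X\) and \(Y\) should solve the two Riccati equations and the spectral radius condition should hold.

```python
import numpy as np

gamma = 2.0                                     # any test value with gamma > sqrt(2)
A  = np.array([[0., 1.], [-4., 0.]])
B1 = np.array([[0., 0.], [1., 0.]]); B2 = np.array([[0.], [1.]])
C1 = np.array([[0., 1.], [0., 0.]]); D12 = np.array([[0.], [1.]])
C2 = np.array([[0., 1.]]);           D21 = np.array([[0., 1.]])

c = (1 - gamma**-2)**-0.5
X = c * np.diag([4., 1.]); Y = c * np.diag([0.25, 1.])

resX = (A.T @ X + X @ A + C1.T @ C1 + gamma**-2 * X @ B1 @ B1.T @ X
        - (X @ B2 + C1.T @ D12) @ np.linalg.inv(D12.T @ D12) @ (B2.T @ X + D12.T @ C1))
resY = (A @ Y + Y @ A.T + B1 @ B1.T + gamma**-2 * Y @ C1.T @ C1 @ Y
        - (Y @ C2.T + B1 @ D21.T) @ np.linalg.inv(D21 @ D21.T) @ (C2 @ Y + D21 @ B1.T))
print(np.max(np.abs(resX)), np.max(np.abs(resY)))            # both ~0
print(np.max(np.abs(np.linalg.eigvals(X @ Y))) < gamma**2)   # True exactly when gamma > sqrt(2)
```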

Exam guide

Material that is not examinable

Not examinable is:

  • • Chapters 17 and 18 (the starred chapters);

  • • The sections with “case study” in their title;

  • • Figures;

  • • The remarks on Lur’e equations (Remark 15.9 and Remarks 16.3 and 16.4);

The remainder of the lecture notes is examinable.

Correspondence between chapters and exam questions

As in the mock:

  • • Exam Question 1 will be on Chapters 1–7;

  • • Exam Question 2 will be on Chapters 9–10;

  • • Exam Question 3 will be on Chapters 11–13;

  • • Exam Question 4 will be on Chapters 14–16 and 19–20.

Formulas which you should know by heart and ones that will be given in the question

The objective of the exam is to test that you understand the concepts of the module by showing that you can apply them (calculate with them) in specific examples. It is not meant to be a memory test.

Results/definitions which you should know by heart:

  • • You should know when a second order system is underdamped, critically damped or overdamped;

  • • You should know how the input and output with zero initial condition relate to each other in terms of the transfer function and in terms of the impulse response;

  • • You should know how the step response, transfer function, frequency response and impulse response are related;

  • • You should know the various equivalent conditions for controllability, observability, stabilizability and detectability;

  • • You should be able to determine the reachable subspace and the unobservable subspace.

Formulas which you should know by heart:

  • • Routh–Hurwitz for first and second degree equations;

  • • The formula for the step response in Definition 4.2 (and/or other ways of calculating the step response);

  • • The formula for the transfer function in Definition 5.4 (and/or other ways of calculating the transfer function);

  • • The formula for the frequency response in Definition 6.6 (and/or other ways of calculating the frequency response);

  • • The formula for the impulse response in Remark 7.2 (and/or other ways of calculating the impulse response);

  • • The formulas for the Kalman controllability matrix and the Hautus controllability matrix in Definition 11.2;

  • • The formulas for the Kalman observability matrix and the Hautus observability matrix in Definition 12.2;

  • • The formula relating the output injection and an observer in Remark 13.9.

Formulas which will be given in the question if needed:

  • • Routh–Hurwitz for third degree and higher equations;

  • • The filter formulas in Section 6.3;

  • • Exosystems such as in Remark 9.1;

  • • The regulator equations in Definition 9.3;

  • • The relation between the feedback and the regulator equations in Theorem 9.6;

  • • The formulas for the controller in Theorem 10.4;

  • • The formulas for the controllability Gramian in Definition 11.2, for the infinite-time controllability Gramian in Definition 11.10 and the control Lyapunov equation in Proposition 11.11;

  • • Ackermann’s formula from Remark 11.6;

  • • The minimizing control and minimizing value in Remark 11.7;

  • • The formulas for the observability Gramian in Definition 12.2, for the infinite-time observability Gramian in Definition 12.7 and the observation Lyapunov equation in Proposition 12.8;

  • • The formula for the initial condition in terms of the output and observability Gramian in Remark 12.4;

  • • The formula relating the impulse response and the infinite-time observability Gramian in Remark 12.9;

  • • The algebraic Riccati equation and the corresponding feedback in Definition 14.1 and Theorem 14.2;

  • • The Rosenbrock matrix in Definition 14.3;

  • • The algebraic Riccati equation, the corresponding feedback, the Rosenbrock matrix and the minimum value in Theorem 15.6;

  • • The algebraic Riccati equations, the corresponding feedback and output injection, the Rosenbrock matrices, the controller formulas and the minimum value in Theorem 16.2;

  • • The algebraic Riccati equation, the corresponding feedback and the Rosenbrock matrix in Theorem 19.3;

  • • The algebraic Riccati equations, the corresponding feedback and output injection, the Rosenbrock matrices, the controller formulas and the coupling condition in Theorem 20.2.