Consider the input-state-output system
\[ \dot {x}(t)=Ax(t)+Bu(t),\qquad y(t)=Cx(t)+Du(t). \]
Here \(A\in \mR ^{n\times n}\), \(B\in \mR ^{n\times m}\), \(C\in \mR ^{p\times n}\) and \(D\in \mR ^{p\times m}\).
Our objective is to find, for a given initial state \(x^0\in \mR ^n\) (i.e. such that \(x(0)=x^0\)), a control \(u\) which minimizes
\(\seteqnumber{0}{14.}{0}\)\begin{equation} \label {eq:cost} \int _0^\infty \|y(t)\|^2\,dt. \end{equation}
We will also consider this problem with the additional requirement that \(\lim _{t\to \infty }x(t)=0\); this is called the zero endpoint case.
Definition 14.1. Assume that \(D\) is injective. The algebraic Riccati equation associated to the input-state-output system is
\(\seteqnumber{0}{14.}{1}\)\begin{equation} \label {eq:Riccati} A^*X+XA+C^*C-(XB+C^*D)(D^*D)^{-1}(D^*C+B^*X)=0, \end{equation}
where the unknown \(X\) is an \(n\times n\) matrix.
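For numerical experimentation it is convenient to be able to check whether a candidate \(X\) satisfies (14.2). The following Python sketch (assuming NumPy is available; the helper name \texttt{riccati\_residual} is ours) evaluates the left-hand side of (14.2), using that for a real symmetric \(X\) we have \(D^*C+B^*X=(XB+C^*D)^*\):
\begin{verbatim}
import numpy as np

def riccati_residual(A, B, C, D, X):
    """Left-hand side of the algebraic Riccati equation (14.2)."""
    S = X @ B + C.T @ D            # X B + C^* D
    R = D.T @ D                    # D^* D, invertible since D is injective
    return A.T @ X + X @ A + C.T @ C - S @ np.linalg.solve(R, S.T)
\end{verbatim}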
Theorem 14.2. Assume that \(D\) is injective. Then the following are equivalent:
1. For every \(x^0\in \mR ^n\) there exists a \(u\) such that \(\int _0^\infty \|y(t)\|^2\,dt<\infty \);
2. The algebraic Riccati equation (14.2) has a symmetric positive semidefinite solution.
Assume that the above conditions hold. Then the following hold:
• There exists a smallest symmetric positive semidefinite solution \(X_0\) of the algebraic Riccati equation (14.2)
• For every \(x^0\in \mR ^n\) there exists a unique optimal control and this optimal control is given by the state feedback matrix
\[ F_0=-(D^*D)^{-1}(B^*X_0+D^*C), \]
as \(u(t)=F_0x(t)\)
• The minimum of the cost (14.1) is given by \(\ipd {X_0x^0}{x^0}\).
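The feedback matrix and the optimal cost appearing in the theorem can be formed in the same way (again a Python sketch; the function names are ours):
\begin{verbatim}
def optimal_feedback(A, B, C, D, X):
    """State feedback F = -(D^*D)^{-1}(B^*X + D^*C)."""
    return -np.linalg.solve(D.T @ D, B.T @ X + D.T @ C)

def optimal_cost(X, x0):
    """Optimal cost <X x0, x0> for the initial state x0."""
    return float(x0 @ X @ x0)
\end{verbatim}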
The following theorem concerns the zero endpoint case.
Theorem 14.4. Assume that \(D\) is injective and that \((A,B)\) is stabilizable. The following hold:
• There exists a largest symmetric positive semidefinite solution \(X_+\) of the algebraic Riccati equation (14.2)
• The infimum of the cost (14.1) amongst all \(u\) such that \(\lim _{t\to \infty }x(t)=0\) is given by \(\ipd {X_+x^0}{x^0}\).
The following two statements are equivalent:
1. For every \(x^0\in \mR ^n\) there exists an optimal control;
2. The Rosenbrock matrix is injective for all \(s\) with \(\re (s)=0\).
Assume that the above conditions hold. Then the following hold:
• For every \(x^0\in \mR ^n\) there exists a unique optimal control; this optimal control is given by the state feedback matrix
\[ F_+=-(D^*D)^{-1}(B^*X_++D^*C), \]
as \(u(t)=F_+x(t)\), and the closed-loop matrix \(A+BF_+\) is asymptotically stable.
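In practice the algebraic Riccati equation is solved numerically. SciPy's \texttt{scipy.linalg.solve\_continuous\_are} solves an equation of exactly the form (14.2) when called with \(Q=C^*C\), \(R=D^*D\) and cross term \(S=C^*D\), and it returns the stabilizing solution; when the conditions of Theorem 14.4 and the Rosenbrock condition hold, this stabilizing solution is \(X_+\) (since \(A+BF_+\) is then asymptotically stable). A sketch (assuming SciPy is available):
\begin{verbatim}
from scipy.linalg import solve_continuous_are

def largest_solution(A, B, C, D):
    """Stabilizing solution of (14.2); equals X_+ when Theorem 14.4
    applies and the Rosenbrock condition holds."""
    return solve_continuous_are(A, B, C.T @ C, D.T @ D, s=C.T @ D)
\end{verbatim}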
Remark. Recall that a matrix \(X\in \mR ^{n\times n}\) is called symmetric positive semidefinite if \(X=X^*\) and \(\ipd {Xx}{x}\geq 0\) for all \(x\in \mR ^n\). A symmetric matrix \(X\) is positive semidefinite if and only if all of its eigenvalues (which are automatically real by symmetry) are nonnegative. In the case \(n=2\) with
\[ X=\bbm {X_1&X_0\\X_0&X_2}, \]
we have that \(X\) is symmetric positive semidefinite if and only if \(X_1,X_2\geq 0\) and \(\det (X)\geq 0\). For symmetric positive semidefinite matrices \(X^1,X^2\in \mR ^{n\times n}\) we define that \(X^2\geq X^1\) if \(\ipd {X^2x}{x}\geq \ipd {X^1x}{x}\) for all \(x\in \mR ^n\). The notions of smallest in Theorem 14.2 and largest in Theorem 14.4 should be understood in that sense. Note that being symmetric positive semidefinite precisely means that \(X\) (is symmetric and) satisfies \(X\geq 0\) where \(0\) denotes the \(n\)-by-\(n\) zero matrix.
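Numerically, the characterization via eigenvalues gives a simple test for positive semidefiniteness (a sketch reusing the NumPy import from above; the tolerance is an arbitrary choice):
\begin{verbatim}
def is_psd(X, tol=1e-10):
    """True if X is (numerically) symmetric positive semidefinite."""
    return np.allclose(X, X.T) and np.linalg.eigvalsh(X).min() >= -tol
\end{verbatim}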
Example 14.5. Consider the first order system
\[ \dot {x}(t)+x(t)=u(t), \]
and the performance output
\[ y=\bbm {x\\\varepsilon u}, \]
where \(\varepsilon >0\), i.e.
\[ A=-1,\qquad B=1,\qquad C=\bbm {1\\0},\qquad D=\bbm {0\\\varepsilon }, \]
and the objective is to minimize
\[ \int _0^\infty |x(t)|^2+\varepsilon ^2|u(t)|^2\,dt, \]
with or without a stability condition.
We have that \(D\) is injective. We also have that \((A,B)\) is stabilizable (even controllable by Example 11.12). The Rosenbrock matrix is
\[ \bbm {s+1&-1\\1&0\\0&\varepsilon }. \]
If we delete the first row, then the resulting matrix is diagonal with nonzero entries on its diagonal and is therefore invertible. It follows that the Rosenbrock matrix is injective (in fact for all \(s\in \mC \)). Hence the conditions of Theorems 14.2 and 14.4 are satisfied.
The Riccati equation is (using that \(C^*D=0\) and \(D^*D=\varepsilon ^2\))
\[ -2X+1-\varepsilon ^{-2}X^2=0. \]
This quadratic equation has the two solutions
\[ X=\frac {-1\pm \sqrt {1+\varepsilon ^{-2}}}{\varepsilon ^{-2}} =-\varepsilon ^2\pm \varepsilon \sqrt {\varepsilon ^2+1}. \]
To obtain a positive semi-definite solution (which here just means \(X\geq 0\)) we need the plus sign, so
\[ X=-\varepsilon ^2+\varepsilon \sqrt {\varepsilon ^2+1}. \]
The optimal feedback then is (again using that \(C^*D=0\) and \(D^*D=\varepsilon ^2\)):
\[ F=-\varepsilon ^{-2}\left (-\varepsilon ^2+\varepsilon \sqrt {\varepsilon ^2+1}\right ) =1-\sqrt {1+\varepsilon ^{-2}}. \]
The optimal control then is \(u=\left (1-\sqrt {1+\varepsilon ^{-2}}\right )~x\), which gives the closed-loop system
\[ \dot {x}+\sqrt {1+\varepsilon ^{-2}}~x=0. \]
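This can be checked numerically with the helpers sketched after Definition 14.1 and Theorem 14.2 (the value \(\varepsilon =1/2\) below is an arbitrary choice):
\begin{verbatim}
eps = 0.5
A = np.array([[-1.0]]);         B = np.array([[1.0]])
C = np.array([[1.0], [0.0]]);   D = np.array([[0.0], [eps]])
X = np.array([[-eps**2 + eps * np.sqrt(eps**2 + 1)]])
print(riccati_residual(A, B, C, D, X))   # approximately zero
print(optimal_feedback(A, B, C, D, X))   # approximately 1 - sqrt(1 + eps**(-2))
\end{verbatim}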
Example 14.6. Consider the undamped second order system
\[ \ddot {q}(t)+q(t)=u(t), \]
with the state \(x=\sbm {q\\\dot {q}}\) and the performance output
\[ y=\bbm {\dot {q}\\\varepsilon u}, \]
where \(\varepsilon >0\), i.e.
\[ A=\bbm {0&1\\-1&0},\qquad B=\bbm {0\\1},\qquad C=\bbm {0&1\\0&0},\qquad D=\bbm {0\\\varepsilon }, \]
and the objective is to minimize
\[ \int _0^\infty |\dot {q}(t)|^2+\varepsilon ^2|u(t)|^2\,dt, \]
with or without a stability condition.
We have that \(D\) is injective. We also have that \((A,B)\) is stabilizable (even controllable by Example 11.14). The Rosenbrock matrix is
\[ \bbm {s&-1&0\\1&s&-1\\0&1&0\\0&0&\varepsilon }. \]
If we delete the first row, then the resulting matrix is upper-triangular with nonzero entries on its diagonal and is therefore invertible. It follows that the Rosenbrock matrix is injective (in fact for all \(s\in \mC \)). Hence the conditions of Theorems 14.2 and 14.4 are satisfied.
The Riccati equation is (using that \(X\) is symmetric and that \(C^*D=0\) and \(D^*D=\varepsilon ^2\))
\(\seteqnumber{0}{14.}{2}\)\begin{multline*} \bbm {0&-1\\1&0}\bbm {X_1&X_0\\X_0&X_2} +\bbm {X_1&X_0\\X_0&X_2}\bbm {0&1\\-1&0} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} \\ -\varepsilon ^{-2}\bbm {X_1&X_0\\X_0&X_2}\bbm {0\\1}\bbm {0&1}\bbm {X_1&X_0\\X_0&X_2}=\bbm {0&0\\0&0}, \end{multline*} which is
\[ \bbm {-2X_0&X_1-X_2\\X_1-X_2&2X_0} +\bbm {0&0\\0&1} -\varepsilon ^{-2}\bbm {X_0^2&X_0X_2\\X_0X_2&X_2^2}=\bbm {0&0\\0&0}, \]
which is
\[ \bbm { -2X_0-\varepsilon ^{-2}X_0^2 &X_1-X_2-\varepsilon ^{-2}X_0X_2 \\X_1-X_2-\varepsilon ^{-2}X_0X_2 &2X_0+1-\varepsilon ^{-2}X_2^2 }=\bbm {0&0\\0&0}. \]
The top-left corner gives \(X_0=0\) or \(X_0=-2\varepsilon ^2\).
We first consider the case \(X_0=-2\varepsilon ^2\). The off-diagonal entry then gives
\[ X_1+X_2=0. \]
Since \(X_1,X_2\geq 0\) (because \(X\) is symmetric positive semi-definite), this implies \(X_1=X_2=0\). The determinant of \(X\) then equals \(-X_0^2=-4\varepsilon ^4<0\), which contradicts that \(X\) is symmetric positive semi-definite. Therefore this case must be excluded.
We now consider the case \(X_0=0\). Then the bottom-right corner gives \(X_2=\varepsilon \) and finally the off-diagonal entry gives \(X_1=\varepsilon \). It follows that
\[ X=\varepsilon \bbm {1&0\\0&1}. \]
The optimal feedback then is (again using that \(C^*D=0\) and \(D^*D=\varepsilon ^2\)):
\[ F=-\varepsilon ^{-2}\bbm {0&1}\varepsilon \bbm {1&0\\0&1} =\bbm {0&-\varepsilon ^{-1}}, \]
which gives the control \(u=-\varepsilon ^{-1}\dot {q}\). The closed-loop system then is (in second order form)
\[ \ddot {q}(t)+\varepsilon ^{-1}\dot {q}(t)+q(t)=0. \]
From this we see that increasing \(\varepsilon \) (i.e. putting more emphasis on the control cost) leads to a closed-loop system with a smaller damping ratio.
Since the Riccati equation has only one symmetric positive semi-definite solution and the conditions of Theorem 14.4 are satisfied, it follows that \(F\) is stabilizing.
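Again this can be checked numerically with the helpers sketched earlier (\(\varepsilon =1/2\) is an arbitrary choice):
\begin{verbatim}
eps = 0.5
A = np.array([[0.0, 1.0], [-1.0, 0.0]]);   B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0], [0.0, 0.0]]);    D = np.array([[0.0], [eps]])
X = eps * np.eye(2)
F = optimal_feedback(A, B, C, D, X)
print(riccati_residual(A, B, C, D, X))     # approximately the zero matrix
print(F)                                   # approximately [[0, -1/eps]]
print(np.linalg.eigvals(A + B @ F))        # eigenvalues with negative real part
\end{verbatim}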
Example 14.7. We consider a combination of two first order systems
\[ \dot {x}_1=x_1+u,\qquad \dot {x}_2=-x_2, \]
with the cost function
\[ \int _0^\infty |x_2(t)|^2+|u(t)|^2\,dt. \]
Since \(u\) does not influence \(x_2\) (not even indirectly), it is clear that \(u=0\) is the minimizer if asymptotic stability is not demanded. Since \(u=0\) does not make \(x_1\) converge to zero, \(u=0\) is not the minimizer if asymptotic stability is demanded. Therefore the \(X_0\) and \(F_0\) from Theorem 14.2 will be different from the \(X_+\) and \(F_+\) from Theorem 14.4.
We have
\[ A=\bbm {1&0\\0&-1},\qquad B=\bbm {1\\0},\qquad C=\bbm {0&1\\0&0},\qquad D=\bbm {0\\1}. \]
We have that \(D\) is injective and that \((A,B)\) is stabilizable (for example \(F=\bbm {-2&0}\) will do since then \(A+BF=-I\)). The Rosenbrock matrix is
\[ \bbm {s-1&0&-1\\0&s+1&0\\0&1&0\\0&0&1}. \]
If we delete the second row, then we obtain an upper-triangular matrix with \(s-1\), \(1\) and \(1\) on the diagonal, so that this matrix is invertible for all \(s\neq 1\) (in particular: for all \(s\) with real part zero). Hence the Rosenbrock matrix is injective for all \(s\) with \(\re (s)=0\).
The Riccati equation is (using that \(X\) is symmetric and that \(C^*D=0\) and \(D^*D=1\))
\[ \bbm {1&0\\0&-1}\bbm {X_1&X_0\\X_0&X_2} +\bbm {X_1&X_0\\X_0&X_2}\bbm {1&0\\0&-1} +\bbm {0&0\\1&0}\bbm {0&1\\0&0} -\bbm {X_1&X_0\\X_0&X_2}\bbm {1\\0}\bbm {1&0}\bbm {X_1&X_0\\X_0&X_2} =\bbm {0&0\\0&0}, \]
which is
\[ \bbm {2X_1&0\\0&-2X_2}+\bbm {0&0\\0&1}-\bbm {X_1^2&X_0X_1\\X_0X_1&X_0^2}=\bbm {0&0\\0&0}, \]
which is
\[ \bbm {2X_1-X_1^2&-X_0X_1\\-X_0X_1&-2X_2+1-X_0^2}=\bbm {0&0\\0&0}. \]
The off-diagonal entry gives that either \(X_0=0\) or \(X_1=0\).
We first consider the case \(X_1=0\). Then the determinant of \(X\) equals \(-X_0^2\) and for \(X\) to be symmetric positive semi-definite we therefore need \(X_0=0\). Hence the case \(X_1=0\) is contained in the case \(X_0=0\) and does not need to be considered separately.
We now consider the case \(X_0=0\). Then the bottom-right corner gives \(X_2=\frac {1}{2}\) and the top left entry gives that either \(X_1=0\) or \(X_1=2\). Therefore we obtain two solutions
\[ X=\bbm {0&0\\0&\frac {1}{2}},\qquad X=\bbm {2&0\\0&\frac {1}{2}}. \]
The first of these is smaller than the second and therefore
\[ X_0=\bbm {0&0\\0&\frac {1}{2}},\qquad X_+=\bbm {2&0\\0&\frac {1}{2}}. \]
The corresponding feedback matrices are
\[ F_0=-\bbm {1&0}\bbm {0&0\\0&\frac {1}{2}}=\bbm {0&0}, \qquad F_+=-\bbm {1&0}\bbm {2&0\\0&\frac {1}{2}}=\bbm {-2&0}. \]
The closed-loop system corresponding to \(F_0\) is
\[ \dot {x}_1=x_1,\qquad \dot {x}_2=-x_2, \]
which we note is not asymptotically stable. The closed-loop system corresponding to \(F_+\) is
\[ \dot {x}_1=-x_1,\qquad \dot {x}_2=-x_2, \]
which is asymptotically stable.
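Both solutions can be checked numerically with the helpers sketched earlier; the ARE solver returns the stabilizing solution, which here is \(X_+\):
\begin{verbatim}
A = np.array([[1.0, 0.0], [0.0, -1.0]]);   B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0], [0.0, 0.0]]);    D = np.array([[0.0], [1.0]])
X0 = np.array([[0.0, 0.0], [0.0, 0.5]])
Xp = np.array([[2.0, 0.0], [0.0, 0.5]])
print(riccati_residual(A, B, C, D, X0))    # approximately the zero matrix
print(riccati_residual(A, B, C, D, Xp))    # approximately the zero matrix
print(largest_solution(A, B, C, D))        # approximately Xp
\end{verbatim}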
Example 14.8. Consider \(A=0\), \(B=1\), \(C=0\), \(D=1\). Then \(D\) is injective and \((A,B)\) is stabilizable (for example with \(F=-1\) we have \(A+BF=-1\) which is stable). The Rosenbrock matrix is
\[ \bbm {s&-1\\0&1}. \]
For \(s=0\) this is
\[ \bbm {0&-1\\0&1}, \]
which is not injective since it is square and not invertible (its determinant equals zero). Therefore the Rosenbrock condition from Theorem 14.4 is not satisfied, so by that theorem there does not exist an optimal control for at least some initial condition \(x^0\) (the infimum is not a minimum). We check this directly. The differential equation and the cost are
\[ \dot {x}=u,\qquad \int _0^\infty |u(t)|^2\,dt. \]
Let \(\varepsilon >0\) and \(u=-\varepsilon x\). Then \(\dot {x}=-\varepsilon x\) so that \(x(t)=\e ^{-\varepsilon t}x^0\) (so that we have stability) and \(u(t)=-\varepsilon \e ^{-\varepsilon t}x^0\). Therefore the cost with this \(u\) is
\[ \int _0^\infty \varepsilon ^2\e ^{-2\varepsilon t}(x^0)^2\,dt =\frac {\varepsilon }{2}(x^0)^2. \]
For a given \(x^0\), the cost can therefore be made arbitrarily small by choosing \(\varepsilon \) small enough. Therefore the infimum of the cost equals zero (as the cost is clearly always greater than or equal to zero). If the cost is zero, then we need \(u=0\). However, for \(u=0\) we obtain \(\dot {x}=0\) which is not stable (solutions are \(x=x^0\) which do not converge to zero as \(t\to \infty \) for \(x^0\neq 0\)). Therefore, for the problem with stability, there does not exist an optimal control for \(x^0\neq 0\) (for the problem without stability \(u=0\) is the optimal control).
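The computation of the cost can also be illustrated numerically, e.g. by integrating \(|u(t)|^2\) for a few values of \(\varepsilon \) (a sketch assuming SciPy; the values of \(\varepsilon \) and \(x^0\) are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

x0 = 1.0
for eps in [1.0, 0.1, 0.01]:
    cost, _ = quad(lambda t: (eps * np.exp(-eps * t) * x0) ** 2, 0.0, np.inf)
    print(eps, cost, eps / 2 * x0 ** 2)   # numerical integral vs exact eps/2 (x0)^2
\end{verbatim}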
• Consider the undamped second order system
\[ \ddot {q}(t)+q(t)=u(t), \]
with the state \(x=\sbm {q\\\dot {q}}\) and the performance output
\[ y=\bbm {q\\\varepsilon u}, \]
where \(\varepsilon >0\), i.e.
\[ A=\bbm {0&1\\-1&0},\qquad B=\bbm {0\\1},\qquad C=\bbm {1&0\\0&0},\qquad D=\bbm {0\\\varepsilon }. \]
(a) Determine whether or not \(D\) is injective.
(b) Determine whether or not \((A,B)\) is stabilizable.
(c) Determine whether or not the Rosenbrock injectivity condition holds.
(d) Determine all symmetric positive semidefinite solutions of the algebraic Riccati equation (14.2) and the corresponding state feedback matrices.
(e) Write down the closed-loop system in second order form.
Solution. (a) Since \(\varepsilon >0\) we have that \(D\) is injective (the second row is an invertible 1-by-1 matrix).
(b) Since \(A\) and \(B\) are the same as in Example 14.6 we have as there that \((A,B)\) is stabilizable.
(c) The Rosenbrock matrix is
\[ \bbm {s&-1&0\\1&s&-1\\1&0&0\\0&0&\varepsilon }. \]
If we delete the second row, then we obtain the matrix
\[ \bbm {s&-1&0\\1&0&0\\0&0&\varepsilon }, \]
which is easily seen to have determinant \(\varepsilon \) and is therefore invertible. It follows that the Rosenbrock matrix is injective (in fact for all \(s\in \mC \)).
(d) The Riccati equation is (using that \(X\) is symmetric and that \(C^*D=0\) and \(D^*D=\varepsilon ^2\))
\(\seteqnumber{0}{14.}{2}\)\begin{multline*} \bbm {0&-1\\1&0}\bbm {X_1&X_0\\X_0&X_2} +\bbm {X_1&X_0\\X_0&X_2}\bbm {0&1\\-1&0} +\bbm {1&0\\0&0}\bbm {1&0\\0&0} \\ -\varepsilon ^{-2}\bbm {X_1&X_0\\X_0&X_2}\bbm {0\\1}\bbm {0&1}\bbm {X_1&X_0\\X_0&X_2}=\bbm {0&0\\0&0}, \end{multline*} which is
\[ \bbm {-2X_0&X_1-X_2\\X_1-X_2&2X_0} +\bbm {1&0\\0&0} -\varepsilon ^{-2}\bbm {X_0^2&X_0X_2\\X_0X_2&X_2^2}=\bbm {0&0\\0&0}, \]
which is
\[ \bbm { -2X_0+1-\varepsilon ^{-2}X_0^2 &X_1-X_2-\varepsilon ^{-2}X_0X_2 \\X_1-X_2-\varepsilon ^{-2}X_0X_2 &2X_0-\varepsilon ^{-2}X_2^2 }=\bbm {0&0\\0&0}. \]
We note that the bottom-right corner gives \(2X_0=\varepsilon ^{-2}X_2^2\geq 0\), so \(X_0\geq 0\). The top-left corner gives (using that \(X_0\geq 0\), so that we can exclude the negative solution):
\[ X_0=\frac {-1+\sqrt {1+\varepsilon ^{-2}}}{\varepsilon ^{-2}} =-\varepsilon ^2+\varepsilon \sqrt {\varepsilon ^2+1}. \]
The bottom-right corner then gives (using that \(X_2\geq 0\))
\[ X_2=\varepsilon \sqrt {-2\varepsilon ^2+2\varepsilon \sqrt {\varepsilon ^2+1}}. \]
The off-diagonal entry gives
\[ X_1=\sqrt {\varepsilon ^2+1}~\sqrt {-2\varepsilon ^2+2\varepsilon \sqrt {\varepsilon ^2+1}}. \]
It follows that
\[ X=\bbm { \sqrt {\varepsilon ^2+1}~\sqrt {-2\varepsilon ^2+2\varepsilon \sqrt {\varepsilon ^2+1}} & -\varepsilon ^2+\varepsilon \sqrt {\varepsilon ^2+1} \\ -\varepsilon ^2+\varepsilon \sqrt {\varepsilon ^2+1} &\varepsilon \sqrt {-2\varepsilon ^2+2\varepsilon \sqrt {\varepsilon ^2+1}} }. \]
This is the unique symmetric positive semidefinite solution. The optimal feedback then is (again using that \(C^*D=0\) and \(D^*D=\varepsilon ^2\)):
\[ F=-\varepsilon ^{-2}\bbm {0&1}X =\bbm { 1-\sqrt {1+\varepsilon ^{-2}} &-\sqrt {-2+2\sqrt {1+\varepsilon ^{-2}}} }. \]
(e) The closed-loop system then is (in second order form)
\[ \ddot {q}(t)+\sqrt {-2+2\sqrt {1+\varepsilon ^{-2}}}~\dot {q}(t)+\sqrt {1+\varepsilon ^{-2}}~q(t)=0. \]
□
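As with the earlier examples, the answer to (d) can be checked numerically with the helpers sketched earlier (\(\varepsilon =1/2\) is an arbitrary choice):
\begin{verbatim}
eps = 0.5
A = np.array([[0.0, 1.0], [-1.0, 0.0]]);   B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0], [0.0, 0.0]]);    D = np.array([[0.0], [eps]])
X0 = -eps**2 + eps * np.sqrt(eps**2 + 1)
X2 = eps * np.sqrt(2 * X0)
X1 = np.sqrt(eps**2 + 1) * np.sqrt(2 * X0)
X = np.array([[X1, X0], [X0, X2]])
print(riccati_residual(A, B, C, D, X))                # approximately the zero matrix
print(np.allclose(X, largest_solution(A, B, C, D)))   # expected: True
\end{verbatim}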