Notes on modern control theory – Just enough for robotics – Part 3

So far we have assumed that we can observe the full state, that is, we have full state feedback $y = C \cdot x$ where $y$ is the output, $x$ is the state and $C = I$. Assuming full state feedback, we have an optimal feedback matrix $K$ such that $u = -K \cdot x$ is the optimal control, as derived using the Linear Quadratic Regulator (LQR). In Matlab, `K = lqr(A,B,Q,R)`, where $Q$ is the penalty for errors in tracking and $R$ is the penalty on control effort.
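The Matlab call above can be reproduced in Python by solving the continuous-time algebraic Riccati equation directly. A minimal sketch for a double integrator (an illustrative plant, not from the notes), assuming scipy is available:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: state = [position, velocity], input = force (illustrative)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # penalty for errors in tracking
R = np.array([[1.0]])  # penalty on control effort

# LQR gain: K = R^{-1} B^T P, where P solves the CARE
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print(K)                              # ≈ [[1.0, 1.732]]
print(np.linalg.eigvals(A - B @ K))   # closed-loop poles in the left half-plane
```

For this plant the gain works out analytically to $K = [1, \sqrt{3}]$, and the closed-loop matrix $A - BK$ is stable.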

In real systems we don't have access to the full state. Sometimes we don't have the sensors, and sometimes measuring the actual state is too challenging. If we assume that $x \in \mathbb{R}^n, u \in \mathbb{R}^q, y \in \mathbb{R}^p$, then typically $p \ll n$.


Observability (`obsv(A,C)` in Matlab) answers the question of whether it is possible to estimate any state $x$ given a series of measurements $y(t)$. If the system is observable then we can design a full state estimator which uses the control signal $u$ and the measured output $y$ to produce an estimate $\tilde x$. The best estimated state $\tilde x$ is then fed into the LQR controller to generate the appropriate optimal control.

The observability matrix is defined as

\begin{equation} \mathcal{O} = \left[ \begin{array}{c} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{array} \right] \end{equation}

which looks (and behaves) like the controllability matrix $\mathcal{C}$. The system is observable if $\operatorname{rank}(\mathcal{O}) = n$. Also, if we take $[U, \Sigma, V] = svd(\mathcal{O})$, then the columns of $V$, ordered by singular value, give the directions in state space in which the system is most observable.
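The rank test can be sketched in a few lines of Python (numpy only; the `obsv` name mirrors the Matlab call in the text):

```python
import numpy as np

def obsv(A, C):
    """Stack C, CA, CA^2, ..., CA^(n-1) into the observability matrix O."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Example pair: we measure only the position of a double integrator
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

O = obsv(A, C)
print(np.linalg.matrix_rank(O))  # 2 == n, so the system is observable

# SVD of O: columns of V, ordered by singular value, rank the
# state directions from most to least observable
U, S, Vt = np.linalg.svd(O)
```

Here position alone is enough to reconstruct velocity, so the rank test passes; measuring only velocity ($C = [0 \; 1]$) would fail it.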

Full State Estimation and Kalman Filter

The estimator itself is a dynamical system which can be represented as

\begin{align} \dot{\tilde x} &= A \tilde x + B u + \overbrace{K_f(y - \tilde y)}^\text{update based on new data 'y'} \nonumber \\ \tilde y &= C \tilde x \end{align}

The two equations can be combined into one (by plugging in $\tilde y = C \tilde x$) to get

\begin{equation} \dot{\tilde x} = (A - K_f C) \cdot \tilde x + [B \hspace{0.1cm} K_f] \cdot \left[ \begin{array}{c}u \\ y \end{array} \right] \end{equation}

If the pair $(A, C)$ is observable, then we can place the eigenvalues of $(A - K_f C)$ anywhere we like with a suitable gain matrix $K_f$, making the estimation error decay as fast as desired; the Kalman filter below gives the optimal choice of $K_f$.
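To see the estimator converge, we can simulate the error dynamics $\dot{\epsilon} = (A - K_f C)\epsilon$ for a hand-picked stabilizing gain (an illustrative sketch with forward-Euler integration, numpy only):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Gain hand-picked so that eig(A - Kf C) = {-1, -2} (illustrative, not optimal)
Kf = np.array([[3.0],
               [2.0]])
Aerr = A - Kf @ C
print(np.linalg.eigvals(Aerr))  # {-1, -2}: stable error dynamics

# Forward-Euler simulation of the estimation error
eps = np.array([1.0, -1.0])  # initial estimation error
dt = 0.001
for _ in range(5000):        # simulate 5 seconds
    eps = eps + dt * (Aerr @ eps)

print(np.linalg.norm(eps))   # error has decayed close to zero
```

Since the slowest eigenvalue is $-1$, the error shrinks roughly like $e^{-t}$, so after 5 seconds it is down by about two orders of magnitude.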

Kalman Filter

The Kalman filter is the analog of LQR for estimation, that is, an optimal estimator. A real system has noise and uncertainty. If we denote the disturbance by $w_d$ and the sensor measurement noise by $w_n$, then we can express the dynamical system as

\begin{align} \dot x &= A \cdot x + B \cdot u + w_d \nonumber \\ y &= C \cdot x + w_n \end{align}

where $w_d$ and $w_n$ are both white Gaussian noise with covariances $V_d$ and $V_n$ respectively. If we believe the disturbance is much larger than the sensor noise, then we should trust the measurements more, and the other way around. In some sense the ratio of the covariances $V_d / V_n$ is a measure of how much we should weight new measurements against the model.
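This trade-off shows up directly in the Kalman gain. For a scalar, discrete-time random walk $x_{k+1} = x_k + w_d$, $y_k = x_k + w_n$ (a minimal sketch, pure Python), iterating the covariance recursion to steady state gives a gain near 1 when $V_d \gg V_n$ (trust the sensor) and near 0 when $V_n \gg V_d$ (trust the model):

```python
def steady_state_gain(Vd, Vn, iters=200):
    """Iterate the scalar Riccati recursion to the steady-state Kalman gain."""
    P = 1.0                         # initial error covariance (arbitrary positive)
    for _ in range(iters):
        P_pred = P + Vd             # predict: disturbance grows the uncertainty
        K = P_pred / (P_pred + Vn)  # gain: how much to weight the measurement
        P = (1.0 - K) * P_pred      # update: measurement shrinks the uncertainty
    return K

print(steady_state_gain(Vd=10.0, Vn=1.0))  # close to 1: trust the sensor
print(steady_state_gain(Vd=1.0, Vn=10.0))  # closer to 0: trust the model
```

Swapping $V_d$ and $V_n$ flips the gain from sensor-trusting to model-trusting, which is exactly the ratio argument above.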

Linear Quadratic Gaussian (LQG)

LQG combines LQR and LQE to generate an optimal controller for a linear system that is controllable and observable. It is interesting, and not immediately obvious, that an optimal estimator and an optimal controller combined are also jointly optimal.

Sometimes LQG does not work well and has robustness issues; that leads to robust control, which we will study later. We have (as derived previously)

\begin{align} \epsilon &= x - \tilde x \nonumber \\ \dot x &= Ax - B K_r \tilde x + w_d \nonumber \\ \dot \epsilon &= (A - K_f C)\epsilon + w_d - K_f w_n \end{align}

Then combining the equations we get

\begin{equation} \frac{d}{dt} \left[ \begin{array}{c} x \\ \epsilon \end{array} \right] = \left[ \begin{array} {cc} (A - BK_r) & BK_r \\ 0 & (A - K_fC) \end{array} \right] \left[ \begin{array} {c} x \\ \epsilon \end{array} \right] + \left[ \begin{array} {cc} I & 0 \\ I & -K_f \end{array} \right] \left[ \begin{array} {c} w_d \\ w_n \end{array} \right] \end{equation}

It cleanly combines the dynamics of LQR and LQE: because the matrix is block triangular, the closed-loop eigenvalues are those of the controller block $(A - BK_r)$ together with those of the estimator block $(A - K_f C)$, so the two can be designed independently. This is an example of the Separation Principle.
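The separation of poles can be checked numerically: the eigenvalues of the combined closed-loop matrix are exactly the union of the eigenvalues of $(A - BK_r)$ and $(A - K_f C)$. A small sketch with hand-picked gains for an illustrative double-integrator plant (numpy only):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

Kr = np.array([[1.0, np.sqrt(3.0)]])  # controller gain (hand-picked, illustrative)
Kf = np.array([[3.0],
               [2.0]])                # estimator gain, eig(A - Kf C) = {-1, -2}

# Block-triangular closed-loop matrix for the combined [x; eps] dynamics
top = np.hstack([A - B @ Kr, B @ Kr])
bot = np.hstack([np.zeros((2, 2)), A - Kf @ C])
Acl = np.vstack([top, bot])

eigs = np.sort_complex(np.linalg.eigvals(Acl))
expected = np.sort_complex(np.concatenate([
    np.linalg.eigvals(A - B @ Kr),
    np.linalg.eigvals(A - Kf @ C),
]))
print(np.allclose(eigs, expected))  # controller and estimator poles separate
```

The controller poles (a complex pair from $A - BK_r$) and the estimator poles ($-1$, $-2$) appear unchanged in the 4x4 combined system, which is the separation principle in action.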