Notes on modern control theory – Just enough for robotics – Part 1

In this post we study modern optimal control theory: what is easy about it and what is challenging.

We already have dynamical systems, represented as systems of Ordinary Differential Equations (ODEs), that model physical systems in nature. These models are already successful at predicting the behavior of natural systems.

In control theory, we want to manipulate and control the systems studied above in a desired manner.

Passive control

Passive control requires no energy expenditure, and when it works it is the best option. It is often not enough, however, and we need to do more for actual control.

Active Control

Energy is actively pumped into the system to control it and keep it stable.

Open loop control

We have a known system, which is also called a plant. A system has inputs ($u$) and outputs ($y$). Open-loop control understands the plant well enough that we can give the exact control input needed to produce the required output, for example a predetermined sinusoidal motion to balance a vertical pole on a finger. Open-loop control always needs energy.

Feedback closed loop

We take sensor measurements of the output and pass them through a controller, which feeds back into the input, to stabilize the system and track a reference.

Why Feedback?

Feedback lets us compensate for model uncertainty and external disturbances, and it can stabilize systems that are unstable on their own, none of which open-loop control can do. The state-space view below makes this precise.

State space representation of ODEs

\begin{align} \dot{x} &= A \cdot x + B \cdot u \\ y &= C \cdot x \end{align}

where $A$ is the state matrix, $B$ is the control matrix ($u$ is the control input), and $C$ is the measurement matrix, capturing the fact that we can typically measure only some combination of the states as the output $y$.
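To make the notation concrete, here is a minimal simulation sketch in Python. The plant matrices and the sinusoidal input are made up for illustration; any numerical integrator works, `scipy.integrate.solve_ivp` is used here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical plant: a damped oscillator with a force input (values made up).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])    # state matrix
B = np.array([[0.0],
              [1.0]])           # control matrix
C = np.array([[1.0, 0.0]])      # we can only measure the first state

def u(t):
    return np.array([np.sin(t)])   # an arbitrary open-loop input

def xdot(t, x):
    return A @ x + B @ u(t)        # \dot{x} = A x + B u

sol = solve_ivp(xdot, (0.0, 10.0), [1.0, 0.0])
y = C @ sol.y                      # y = C x at the solver's time points
```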

If we posit $u = -K \cdot x$ then

\begin{align} \dot{x} &= A \cdot x - B \cdot K \cdot x \nonumber \\ &= (A-B \cdot K)\cdot x \end{align}

which means we have a new dynamical system whose stability is governed by the eigenvalues of $A - B \cdot K$ rather than those of $A$. Since we choose $K$, we can place those eigenvalues (within limits set by the system) wherever we like and make the system stable.
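As a quick sanity check of this idea, here is a minimal sketch. The unstable plant below is made up for illustration; `scipy.signal.place_poles` picks a $K$ that puts the closed-loop eigenvalues where we ask.

```python
import numpy as np
from scipy.signal import place_poles

# Made-up unstable plant: one eigenvalue of A sits in the right half plane.
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
print(np.linalg.eigvals(A))             # [ 1.414..., -1.414...]: unstable

# Choose K so that A - B K has the eigenvalues we ask for.
K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(np.linalg.eigvals(A - B @ K))     # [-1., -2.]: stable by construction
```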

Linear Systems

For a linear system $\dot{x} = A \cdot x$, the solution is $x(t) = e^{At} \cdot x(0)$. Here $e^{At}$ can be expanded with the Taylor series for the exponential, like so

\begin{equation} e^{At} = I + At + \frac{A^2t^2}{2!} + \frac{A^3t^3}{3!} + \ldots \end{equation}
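As a small numerical check (the matrix is an arbitrary example), truncating this series converges to what `scipy.linalg.expm` computes directly:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])             # arbitrary example matrix
t = 1.0

def expm_taylor(A, t, n_terms=20):
    """Partial sum I + At + (At)^2/2! + ... with n_terms terms."""
    acc = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, n_terms):
        term = term @ (A * t) / k        # builds (At)^k / k! incrementally
        acc = acc + term
    return acc

print(np.max(np.abs(expm_taylor(A, t) - expm(A * t))))  # tiny: the series has converged
```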

Computing this expansion directly in terms of $A$ is cumbersome, so we will transform $A$ into its eigenvalue and eigenvector form, where the expansion becomes easy.

The definition of an eigenvector, of course, is $A \xi = \lambda \xi$, where $\xi$ is the eigenvector and $\lambda$ is the corresponding eigenvalue. Stacking these relations gives $AT = TD$, where $T$ is the matrix of eigenvectors (in columns) and $D$ is the diagonal matrix with the eigenvalues on the diagonal.

If we express the dynamics $\dot{x} = A \cdot x$ in some $z$ coordinates instead (the basis of eigenvectors), then the dynamics in each direction become uncoupled.

\begin{align} x &= Tz \nonumber \\ \dot{x} &= T \dot{z} = Ax \nonumber \\ T \dot{z} &= ATz \nonumber \\ \dot{z} &= T^{-1} A T z \nonumber \\ \dot{z} &= Dz \end{align}

$e^{Dt}$ is trivial to compute, so we can solve the whole system easily, without the cumbersome Taylor expansion, because $e^{At} = T \cdot e^{Dt} \cdot T^{-1}$. The final solution to the dynamical system is then given as

\begin{equation} \label{eq:eigensol} x(t) = T \cdot e^{Dt} \cdot T^{-1} \cdot x(0) \end{equation}

The way to understand the solution in \ref{eq:eigensol} is that $T^{-1} \cdot x(0)$ transforms the initial condition into the eigenvector coordinates $z$, where the solution $e^{Dt} \cdot z$ is simple because all the dynamics are uncoupled; multiplying by $T$ then transforms the solution back to the coordinates of the system under consideration.
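Here is \ref{eq:eigensol} verified numerically (same arbitrary matrix as before; `numpy.linalg.eig` returns the eigenvectors as the columns of $T$):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
t, x0 = 1.0, np.array([1.0, 0.0])

lam, T = np.linalg.eig(A)                  # A T = T D, with D = diag(lam)
D = np.diag(lam)
assert np.allclose(A @ T, T @ D)

# x(t) = T e^{Dt} T^{-1} x(0): into eigen-coordinates, evolve each mode, back out.
x_t = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T) @ x0

assert np.allclose(x_t, expm(A * t) @ x0)  # matches the direct matrix exponential
```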

From basic complex analysis we know that eigenvalues in the left half of the complex plane (negative real parts) imply a decaying exponential solution, and hence stability.

Discrete physical systems

A dynamical system in discrete time is represented as

\begin{equation} x_{k+1} = \tilde{A} \cdot x_k, \hspace{0.5cm} x_k = x(k \Delta t) \end{equation}

If we write an eigenvalue as $\lambda = R \cdot e^{i \theta}$, then the system is stable if all eigenvalues lie strictly inside the unit circle ($R < 1$). This is the discrete-time equivalent of the eigenvalues lying in the LHP for continuous systems.
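For a sampled linear system the discrete map is $\tilde{A} = e^{A \Delta t}$ (a standard fact, since $x((k+1)\Delta t) = e^{A \Delta t} \cdot x(k \Delta t)$), so the unit-circle criterion is easy to check numerically. A minimal sketch, reusing the example matrix from above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])               # continuous-time A from earlier
dt = 0.1

A_d = expm(A * dt)                         # discrete map: x_{k+1} = e^{A dt} x_k
R = np.abs(np.linalg.eigvals(A_d))         # R = |lambda| for each eigenvalue
print(R, np.all(R < 1.0))                  # all inside the unit circle: stable
```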

Linearizing nonlinear systems around a fixed point

If $\dot{x} = f(x)$, where $f$ is nonlinear and $x \in \mathbb{R}^n$, then we linearize around a fixed point. A fixed point $\bar{x}$ is where $f(\bar{x}) = 0$. For example, the perfectly vertical position of an inverted pendulum is a fixed point (however unstable), since the forces are fully balanced there. To linearize, we calculate the Jacobian (the matrix of partial derivatives of $f(x)$ w.r.t. $x \in \mathbb{R}^n$) of the dynamics at $\bar{x}$. Near the fixed point the nonlinear dynamics reduce to linear ones: in the Taylor expansion of $f$, the higher-order terms in $x - \bar{x}$ become smaller and smaller.
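When differentiating $f$ by hand is inconvenient, the Jacobian can also be approximated with central differences. A minimal sketch (the helper name `jacobian_fd` is my own, not a library function); the pendulum example below can be used to cross-check it against the analytic Jacobian:

```python
import numpy as np

def jacobian_fd(f, x_bar, eps=1e-6):
    """Central-difference approximation of the Jacobian of f at x_bar."""
    n = len(x_bar)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        # column j holds the partial derivatives of f w.r.t. x_j
        J[:, j] = (f(x_bar + dx) - f(x_bar - dx)) / (2 * eps)
    return J
```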

Linearizing a pendulum

The dynamics of a pendulum are nonlinear and are given in \ref{eq:pendulum}

\begin{equation} \label{eq:pendulum} \ddot\theta = -\frac{g}{L} \sin(\theta) - \delta \dot\theta \end{equation}

If we write a state-space system where the state variables $[x_1, x_2]$ are $[\theta, \dot\theta]$, then

\begin{equation} \frac{d}{dt} \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] = \left[ \begin{array}{c} \dot x_1 \\ \dot x_2 \end{array} \right] = \left[ \begin{array}{c} x_2 \\ -\frac{g}{L} \sin(x_1) - \delta x_2 \end{array} \right] \end{equation}

The fixed points $\bar x$ are $[0, 0]$ and $[\pi, 0]$ ($\dot \theta$ at a fixed point obviously has to be zero).

The Jacobian of $f(x)$ w.r.t. $[x_1, x_2]$ is

\begin{equation} J = \left[ \begin{array}{cc} 0 & 1 \\ -\frac{g}{L} \cos(x_1) & -\delta \end{array} \right] \end{equation}

Substitute $[x_1, x_2] = [0, 0]$ for the pendulum-down (normal) position and $[\pi, 0]$ for the pendulum-up (inverted) position. If we calculate the eigenvalues of $J$ at these points, we know whether the system is stable or not (the real parts should be in the LHP).

Near a fixed point, the nonlinear system of \ref{eq:pendulum} is thus reduced to

\begin{equation} \frac{d}{dt} \Delta x = J \cdot \Delta x, \hspace{0.5cm} \Delta x = x - \bar x \end{equation}
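Putting it together, here is a minimal check of both fixed points (the parameter values for $g$, $L$, and $\delta$ are assumed for illustration):

```python
import numpy as np

g, L, delta = 9.8, 1.0, 0.1         # assumed pendulum parameters

def J(x1):
    # Jacobian of the pendulum dynamics, evaluated at a fixed point [x1, 0]
    return np.array([[0.0, 1.0],
                     [-(g / L) * np.cos(x1), -delta]])

print(np.linalg.eigvals(J(0.0)))    # down: complex pair, negative real parts -> stable
print(np.linalg.eigvals(J(np.pi)))  # up: one eigenvalue > 0 -> unstable
```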
