The stability analysis of solutions of differential equations is one of the most important topics in the theory of ODEs. Here we give a concise course on the stability of equilibrium (critical) points of differential equations.

**The stability analysis of solutions of differential equations**

The existence of solutions is guaranteed by two fundamental theorems: Peano's theorem, which gives the existence of solutions, and the Cauchy-Lipschitz theorem, which gives both the existence and the uniqueness of solutions. We also mention that several types of solutions are introduced for nonlinear Cauchy problems: the local solution, the maximal solution, and the global solution. For stability, the solution must be global.

**Critical points of functions**

A **critical** (or an **equilibrium**) point of a continuous function $F:\mathbb{R}^d\to \mathbb{R}^d$ is an element $x^\ast\in \mathbb{R}^d$ such that $F(x^\ast)=0$.

Let us consider the differential equation \begin{align*}\tag{Eq}\dot{u}(t)=F(u(t)),\quad t\ge t_0,\quad u(t_0)=x_0.\end{align*} As $F$ is continuous, by Peano's theorem we know that there exists a maximal solution $u:[t_0,T)\to \mathbb{R}^d$.
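As a concrete illustration (an example chosen here, not taken from the text above) of a maximal solution that is not global, the Cauchy problem $\dot u=u^2$, $u(0)=1$ has the exact solution $u(t)=1/(1-t)$, defined only on $[0,1)$:

```python
# Illustrative example: the Cauchy problem u'(t) = u(t)^2, u(0) = 1
# has the exact maximal solution u(t) = 1/(1 - t), defined only on [0, 1).

def u(t):
    """Exact maximal solution of u' = u^2, u(0) = 1, valid for t < 1."""
    return 1.0 / (1.0 - t)

# The solution leaves every bounded set as t approaches T = 1,
# so it cannot be extended to a global solution on [0, infinity).
print(u(0.0), u(0.5), u(0.99))
```

This is why the maximal existence time $T$ in Peano's theorem may be finite, even for a smooth right-hand side.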

If $x^\ast\in\mathbb{R}^d$ is a critical point of $F,$ then the constant function $r(t)=x^\ast$ for all $t\ge t_0$ satisfies $\dot{r}(t)=F(r(t))$. The function $t\mapsto r(t)$ is called the **reference (or stationary) solution** of the differential equation.
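For instance (an illustrative choice of $F$), the logistic field $F(x)=x(1-x)$ has the two critical points $x^\ast=0$ and $x^\ast=1$, each of which gives a stationary solution:

```python
# Illustrative sketch: critical points of F(x) = x(1 - x), and the fact that
# the constant function r(t) = x* solves u' = F(u) exactly when F(x*) = 0.

def F(x):
    return x * (1.0 - x)

critical_points = [0.0, 1.0]

# r(t) = x* is constant, so r'(t) = 0; r is a solution iff F(x*) = 0.
for x_star in critical_points:
    assert F(x_star) == 0.0
```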

Having the stationary solution, one wonders whether the other solutions of the differential equation are somehow close to this reference solution. This is the starting point of the theory of stability, which we study below.

**Stability analysis of solutions to differential equations**

The critical point $x^\ast$ is called **exponentially stable** if there exist constants $\delta,\omega,M\in (0,\infty),$ with $\delta$ small enough, such that if the initial state $u(t_0)=x_0$ is close to the critical point $x^\ast$, i.e. $\|x_0-x^\ast\| \le \delta,$ then

- the solution $u(t)$ is well defined for all $t\ge t_0$, i.e. $u$ is a global solution, and
- $\|u(t)-x^\ast\|\le M e^{-\omega (t-t_0)}\|x_0-x^\ast\|$ for all $t\ge t_0$.
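As a numerical sanity check of this definition (with the illustrative choice $F(u)=-2u$ and $t_0=0$), the solution $u(t)=e^{-2t}x_0$ satisfies the estimate with $M=1$ and $\omega=2$:

```python
import math

# Scalar sketch (assumed example): for F(u) = -2u the solution through x0 is
# u(t) = exp(-2t) * x0, so x* = 0 is exponentially stable with M = 1, omega = 2.

def u(t, x0):
    return math.exp(-2.0 * t) * x0

x0, x_star = 0.5, 0.0
M, omega = 1.0, 2.0
for t in [0.0, 0.5, 1.0, 5.0]:
    gap = abs(u(t, x0) - x_star)
    bound = M * math.exp(-omega * t) * abs(x0 - x_star)
    assert gap <= bound + 1e-12  # the exponential-stability estimate holds
```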

**The linear case:** We assume that $F(x)=Ax,$ for $x\in\mathbb{R}^p,$ where $A$ is a square matrix of order $p$. As $F$ is linear, then $F(0)=0,$ so that $0$ is an equilibrium point for $F$. On the other hand, the solution of the differential equation $\dot{u}(t)=A u(t)$ is given by the following exponential of the matrix $A,$ \begin{align*}u(t)=e^{t A}u(0):=\sum_{n=0}^\infty \frac{t^n}{n!} A^n u(0),\qquad \forall t\ge 0.\end{align*}
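The power series defining $e^{tA}$ can be evaluated directly for small matrices; a minimal pure-Python sketch (truncated series, illustrative rather than a production algorithm), checked against the diagonal case where $e^{tA}$ has $e^{t\lambda_i}$ on the diagonal:

```python
import math

def mat_mul(X, Y):
    """Product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, t, terms=60):
    """Truncated power series e^{tA} = sum_n t^n A^n / n! (fine for small ||tA||)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term_k = term_{k-1} * A * (t / k)  ==>  term_k = t^k A^k / k!
        term = [[(t / k) * x for x in row] for row in mat_mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# For a diagonal A the series reduces to e^{t*lambda_i} on the diagonal:
A = [[-1.0, 0.0], [0.0, -3.0]]
E = expm(A, 1.0)
assert abs(E[0][0] - math.exp(-1.0)) < 1e-10
assert abs(E[1][1] - math.exp(-3.0)) < 1e-10
```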

If $A$ is a diagonal matrix with complex numbers $\lambda_1,\lambda_2,\cdots,\lambda_p$ on the diagonal, then it is clear that the matrix $e^{t A}$ is diagonal with $e^{t\lambda_i}$ for $i=1,\cdots,p$ on the diagonal. Thus \begin{align*}\|u(t)\|\le e^{(\sup_{1\le i\le p} {\rm Re}\lambda_i)t}\|u(0)\|.\end{align*}

In this case, the equilibrium point $0$ is exponentially stable if and only if the spectral bound $s(A):=\sup_{1\le i\le p} {\rm Re}\lambda_i < 0$. More generally, we have the following result.
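This stability test can be carried out numerically. A minimal sketch for the $2\times 2$ case (the helper name and the example matrices are illustrative), using the quadratic formula for the eigenvalues:

```python
import cmath

# Sketch (2x2 case only): the equilibrium 0 of u' = Au is exponentially
# stable iff every eigenvalue of A has negative real part, i.e. s(A) < 0.

def spectral_bound_2x2(A):
    """s(A) = max real part of the eigenvalues of a 2x2 matrix A."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)  # eigenvalues: (tr +/- disc) / 2
    return max(((tr + disc) / 2).real, ((tr - disc) / 2).real)

# Damped rotation: eigenvalues -1 +/- i, so s(A) = -1 < 0  ->  stable.
assert spectral_bound_2x2([[-1.0, 1.0], [-1.0, -1.0]]) < 0
# A saddle: eigenvalues +/- 1, so s(A) = 1 >= 0  ->  not exponentially stable.
assert spectral_bound_2x2([[0.0, 1.0], [1.0, 0.0]]) >= 0
```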

### Lyapunov stability theorem: linear differential equation

Let $A$ be a square matrix with spectrum $\sigma(A)$, the set of eigenvalues of $A$. Then there exist constants $\omega >0$ and $M>0$ such that \begin{align*}\|e^{tA}\|\le M e^{-\omega t},\qquad \forall t\ge 0,\end{align*} if and only if the spectral bound satisfies \begin{align*}s(A)=\sup\{ {\rm Re}\lambda: \lambda\in \sigma(A)\} < 0.\end{align*}

**Sketch of the proof:** If $A$ is diagonalizable, then there exists an invertible matrix $P$ such that $A=P^{-1}D P,$ where $D$ is a diagonal matrix. Remark that for all $n\in \mathbb{N},$ $A^n=P^{-1}D^n P$. This implies that \begin{align*}e^{tA}=P^{-1} e^{tD} P,\quad \forall t\ge 0.\end{align*} Now, assume that $s(A) < 0$. Using the diagonal case above, for any $\omega\in (0,-s(A))$ we have \begin{align*}\left\| e^{t A}\right\|\le M e^{-\omega t},\quad\forall t\ge 0,\end{align*} where $M=\|P^{-1}\|\|P\|$. Conversely, by contraposition, assume that $s(A)\ge 0$; then there exists $\lambda\in\sigma(A)$ such that ${\rm Re}\lambda\ge 0$. Let $x\in \mathbb{C}^p$ be an eigenvector associated with $\lambda,$ so that $Ax=\lambda x$. Then $A^n x=\lambda^n x$ for all $n$, which implies that $e^{t A}x=e^{\lambda t}x$. Thus $\|e^{t A}x\|=e^{({\rm Re}\,\lambda) t}\|x\|$ does not converge to zero as $t\to +\infty$. This ends the proof.
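The key estimate in the diagonalizable case can be spelled out in one chain: combining $e^{tA}=P^{-1}e^{tD}P$ with the diagonal bound $\|e^{tD}\|\le e^{s(A)t}$ gives, for any $\omega\in(0,-s(A))$,

```latex
\|e^{tA}\| = \|P^{-1} e^{tD} P\|
\le \|P^{-1}\|\,\|e^{tD}\|\,\|P\|
\le \|P^{-1}\|\,\|P\|\, e^{s(A)t}
\le M e^{-\omega t},
\qquad M=\|P^{-1}\|\,\|P\|,
```

where the last inequality uses $s(A)\le -\omega$.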

If $A$ is not diagonalizable, write $\sigma(A)=\{\lambda_1,\cdots,\lambda_r\}$ with $r < p$, where each $\lambda_i$ has algebraic multiplicity $n_i$ and $n_1+\cdots+n_r=p$. In this case, there exists an invertible square matrix $S$ such that $A=S^{-1}JS$, where $J$ is a Jordan matrix: $J$ is block diagonal, $J={\rm diag}(J_{\lambda_1},\cdots,J_{\lambda_r})$, and each block $J_{\lambda_i}$ is an upper triangular matrix with $\lambda_i$ repeated on the diagonal. We also have \begin{align*}e^{tA}=S^{-1}\, {\rm diag}\left(e^{t J_{\lambda_1}},\cdots,e^{t J_{\lambda_r}}\right)S.\end{align*}
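To complete the picture (a standard computation, sketched here): writing each Jordan block as $J_{\lambda_i}=\lambda_i I+N_i$ with $N_i$ nilpotent, $N_i^{n_i}=0$, the exponential of a block is a finite sum,

```latex
e^{tJ_{\lambda_i}} = e^{\lambda_i t}\, e^{tN_i}
= e^{\lambda_i t}\sum_{k=0}^{n_i-1} \frac{t^k}{k!}\, N_i^k ,
```

so every entry of $e^{tA}$ is a polynomial in $t$ times some $e^{\lambda_i t}$. Since polynomials grow more slowly than any exponential, when $s(A)<0$ one again obtains $\|e^{tA}\|\le M e^{-\omega t}$ for every $\omega\in(0,-s(A))$ and a suitable $M>0$.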