
Exercises on differential calculus


We offer exercises on differential calculus with detailed answers. In fact, our objectives are as follows: knowing how to calculate the derivative of a function according to a vector; calculating the differential of a function at a point; and showing that a function is continuously differentiable.

We recall that differential calculus is very important in the theory of differential equations.

Exercises on differential calculus

Exercise: Let $f:\mathbb{R}^2\to \mathbb{R}$ be the function defined by \begin{align*} f(x,y)=\begin{cases} \frac{x^2}{y},& y\neq 0,\cr 0,& y=0.\end{cases} \end{align*} Prove that $f$ admits a directional derivative at $(0,0)$ along every vector $h\in\mathbb{R}^2\setminus\{(0,0)\}$, and compute it.

Solution: Let $h=(h_1,h_2)\in\mathbb{R}^2\setminus\{(0,0)\}$. We consider the partial function \begin{align*} \varphi(t)&=f(th)=f(th_1,th_2)\cr &=\begin{cases} t f(h),& t\neq 0,\cr 0,&t=0.\end{cases} \end{align*} We remark that \begin{align*} \varphi(t)=t f(h),\qquad \forall t\in \mathbb{R}. \end{align*} This function is differentiable at $0$ and $\dot{\varphi}(t)=f(h)$. Hence the directional derivative at $(0,0)$ along $h$ is \begin{align*} D_hf(0,0)=f(h). \end{align*}
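As a quick numerical sanity check (outside the proof), a few lines of Python verify that the difference quotient $\big(f(th)-f(0,0)\big)/t$ agrees with $f(h)$; the sample directions below are arbitrary choices:

```python
# Numerical check: the difference quotient (f(t*h) - f(0,0))/t equals f(h).
def f(x, y):
    return x**2 / y if y != 0 else 0.0

def directional_quotient(h, t):
    h1, h2 = h
    return (f(t * h1, t * h2) - f(0.0, 0.0)) / t

h = (3.0, 2.0)              # sample direction with h2 != 0
for t in (1e-2, 1e-4, 1e-6):
    assert abs(directional_quotient(h, t) - f(*h)) < 1e-9

h = (3.0, 0.0)              # direction lying on the line y = 0
assert directional_quotient(h, 1e-4) == 0.0 == f(*h)
```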

Exercise: Prove that the function $\mathbb{R}^2\to \mathbb{R}$ defined by \begin{align*} f(x,y)= \begin{cases} \frac{x^4+y^4}{x^2+y^2},& (x,y)\neq (0,0),\cr 0,& \text{otherwise,} \end{cases} \end{align*} is of class $\mathcal{C}^1$ on $\mathbb{R}^2$.

Solution: As we have a rational fraction involving $x^2+y^2$ we shall use polar coordinates to prove continuity at $(0,0)$.

Let us first prove that the partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ exist and are equal to $0$ at $(0,0)$. In fact, the functions $\varphi:t\mapsto f\left((0,0)+t(1,0)\right)=t^2$ and $\psi:t\mapsto f\left((0,0)+t(0,1)\right)=t^2$ are differentiable at $0$ and $\varphi'(0)=\psi'(0)=0$. This proves the first claim.

Second, we prove continuity of the partial derivatives of $f$ at $(0,0)$. Let $(x,y)\in\mathbb{R}^2\backslash\{(0,0)\}$. Then \begin{align*} \frac{\partial f}{\partial x}(x,y)=\frac{2x^5+4x^3y^2-2xy^4}{(x^2+y^2)^2}. \end{align*} For reals $r>0$ and $\theta\in\mathbb{R},$ we have \begin{align*} &\left|\frac{\partial f}{\partial x}(r\cos(\theta),r\sin(\theta))\right|\cr & = \left|\frac{r^5(2\cos^5(\theta)+4\cos^3(\theta)\sin^2(\theta)-2\cos(\theta)\sin^4(\theta))}{r^4}\right|\cr & \le 8r, \end{align*} which means that \begin{align*} \left|\frac{\partial f}{\partial x}(x,y)\right|\le 8\sqrt{x^2+y^2}. \end{align*} This implies that \begin{align*} \lim_{(x,y)\to (0,0)} \frac{\partial f}{\partial x}(x,y)=0. \end{align*} Hence $\frac{\partial f}{\partial x}$ is continuous at $(0,0)$. Remark that $x$ and $y$ play the same role, so we also have \begin{align*} \left|\frac{\partial f}{\partial y}(x,y)\right|\le 8\sqrt{x^2+y^2}. \end{align*} This implies that \begin{align*} \lim_{(x,y)\to (0,0)} \frac{\partial f}{\partial y}(x,y)=0. \end{align*} Hence $\frac{\partial f}{\partial y}$ is continuous at $(0,0)$. All these imply that $f$ is of class $\mathcal{C}^1$ on $\mathbb{R}^2$.
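The bound $\left|\frac{\partial f}{\partial x}(x,y)\right|\le 8\sqrt{x^2+y^2}$ can also be tested numerically; here is a small Python sketch (the test points are random samples, and the constant $8$ is the one from the proof):

```python
import math, random

def df_dx(x, y):
    # partial derivative of f with respect to x, away from the origin
    return (2*x**5 + 4*x**3*y**2 - 2*x*y**4) / (x**2 + y**2)**2

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if (x, y) != (0.0, 0.0):
        # the proven inequality, with a tiny floating-point slack
        assert abs(df_dx(x, y)) <= 8 * math.hypot(x, y) + 1e-12
```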

One of the classical examples in the exercises on differential calculus is the following:

Exercise: Let $E$ be a Euclidean space endowed with the norm $\|x\|:=\sqrt{\langle x,x\rangle}$. Study the differentiability of the function $f(x)=\|x\|$.

Solution: We know that the square root function is not differentiable at $0$, so we first prove that $f$ is not differentiable at $0_E$. In fact, the function $t\mapsto f(th)=\|th\|=|t|\,\|h\|$ is not differentiable at $0$. This implies that $f$ does not admit a directional derivative at $0_E$ along any $h\in E\backslash\{0\}$.

Assume that $x\in E\backslash\{0\}$. By using the fact that $\sqrt{1+u}=1+\frac{1}{2}u+o(u),$ as $u\to 0$, we obtain \begin{align*} \|x+h\|&=\sqrt{\langle x+h,x+h\rangle}\cr &=\sqrt{ \|x\|^2+2\langle x,h\rangle+\|h\|^2}\cr &= \|x\|\sqrt{1+\frac{2\langle x,h\rangle}{\|x\|^2}+\underset{\|h\|\to 0}{o}(\|h\|)}\cr &= \|x\|\left( 1+\frac{\langle x,h\rangle}{\|x\|^2}+\underset{\|h\|\to 0}{o}(\|h\|) \right)\cr &= \|x\|+\frac{\langle x,h\rangle}{\|x\|}+\underset{\|h\|\to 0}{o}(\|h\|) \end{align*} We define the map \begin{align*} Df(x):E\to \mathbb{R},\quad h\mapsto Df(x)h=\frac{\langle x,h\rangle}{\|x\|}. \end{align*} This map is linear and continuous since, by the Cauchy-Schwarz inequality, \begin{align*} |Df(x)h|\le \frac{|\langle x,h\rangle|}{\|x\|}\le \|h\|. \end{align*} We also have \begin{align*} f(x+h)=f(x)+Df(x)h+\underset{\|h\|\to 0}{o}(\|h\|). \end{align*} Thus $f$ is differentiable on $E\backslash\{0\}$ with differential \begin{align*}Df(x):E\to \mathbb{R},\quad h\mapsto Df(x)h=\frac{\langle x,h\rangle}{\|x\|},\qquad x\neq 0. \end{align*}
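A short numerical experiment illustrates the formula $Df(x)h=\frac{\langle x,h\rangle}{\|x\|}$ in $E=\mathbb{R}^3$; the point $x$ and direction $h$ below are arbitrary sample values:

```python
import math

def norm(v):
    return math.sqrt(sum(c*c for c in v))

x = [1.0, 2.0, 2.0]     # a point of E = R^3 away from the origin, ||x|| = 3
h = [0.5, -1.0, 0.25]   # an arbitrary direction

Df_x_h = sum(a*b for a, b in zip(x, h)) / norm(x)   # <x,h> / ||x||

t = 1e-6                # difference quotient of t -> ||x + t*h|| at t = 0
quotient = (norm([a + t*b for a, b in zip(x, h)]) - norm(x)) / t
assert abs(quotient - Df_x_h) < 1e-4
```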

For more details, see our post on derivative functions.

Rings and fields


In this post, we learn about rings and fields. These are among the usual algebraic structures. We mention that fields are important in the study of vector spaces, the concept of dimension, and other properties.

An introduction to rings and fields

Definition: A ring is a set $\mathscr{R}$ endowed with two internal composition laws $+$ and $\times$ such that

  • $(\mathscr{R},+)$ is a commutative group, where we denote $0_{\mathscr{R}}$ its neutral element,
  • the law $\times$ is associative, i.e. for any $a,b,c\in \mathscr{R}$, $a\times (b\times c)=(a\times b)\times c,$
  • the law $\times$ has a neutral element denoted by $1_{\mathscr{R}}$,
  • for any $a,b,c\in\mathscr{R},$ we have \begin{align*} a\times (b+c)=a\times b+a\times c\quad\text{and}\quad (b+c)\times a=b\times a+c\times a.\end{align*}

Let $(\mathscr{R},+,\times)$ be a ring. If in addition, the law $\times$ is commutative, then we say that the ring is commutative.

Remark: Let us mention that an element $a$ of a ring $\mathscr{R}$ does not necessarily have an inverse with respect to the law $\times$. When it does, we say that $a$ is invertible, or that it is a unit of $\mathscr{R},$ and its inverse is denoted by $a^{-1}$.

The product of rings: Let $R$ and $W$ be two rings. On $R\times W$ we define the following structure \begin{align*}(a,b)+(c,d)=(a+c,b+d),\quad (a,b)\times(c,d)=(a\times c,b\times d).\end{align*}Then $(R\times W,+,\times)$ is a ring.

Subrings: Let $(\mathscr{R},+,\times)$ be a ring. A subset $S$ of $\mathscr{R}$ is called a subring of $\mathscr{R}$ if $(S,+,\times)$ is a ring. We have the following characterization: $S$ is a subring of $\mathscr{R}$ if and only if the following assertions hold:

  • $1_{\mathscr{R}}\in S,$
  • for any $x,y\in S,$ $x+(-y)\in S,$
  • for any $x,y\in S$, $x\times y\in S$.

Binomial expansion formula: If $\mathscr{R}$ is a commutative ring, then for any $a,b\in \mathscr{R},$ and $n\in\mathbb{N},$ we have \begin{align*}(a+b)^n=\sum_{k=0}^n \binom{n}{k} a^{k}\times b^{n-k}.\end{align*} Since the ring of matrices is not commutative, the binomial formula does not apply to matrices unless they commute.
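To illustrate, a small Python sketch checks the formula in the commutative ring $\mathbb{Z}$ and exhibits its failure for two non-commuting $2\times 2$ matrices (the sample matrices are an arbitrary choice):

```python
import math

# Commutative case: check the binomial formula in the ring Z.
a, b, n = 7, -3, 6
assert (a + b)**n == sum(math.comb(n, k) * a**k * b**(n - k) for k in range(n + 1))

# Non-commutative case: 2x2 matrices where (A+B)^2 != A^2 + 2AB + B^2.
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def scal(c, A):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
S = matadd(A, B)
lhs = matmul(S, S)
rhs = matadd(matadd(matmul(A, A), scal(2, matmul(A, B))), matmul(B, B))
assert lhs != rhs   # the formula fails here because AB != BA
```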

Definition: A field is a commutative ring in which every nonzero element is invertible.

A selection of exercises on rings 

In what follows in this section, we propose some exercises on rings and fields with detailed solutions.

Exercise: Prove that the set of dyadic numbers $H=\{n2^{-p}:(n,p)\in\mathbb{Z}\times \mathbb{N}\}$ endowed with the usual laws is a ring. Can $H$ be a field?

Solution: It suffices to show that $H$ is a subring of $\mathbb{R}$. Observe that $1=1\times 2^{-0}\in H$. Let $(x,y)\in H^2$. There exist $(n,m)\in\mathbb{Z}^2$ and $(p,q)\in\mathbb{N}^2$ such that \begin{align*} x=n2^{-p}\quad\text{and}\quad y=m2^{-q}. \end{align*} We select $r=\max\{p,q\}$. Then \begin{align*} x-y&=\frac{n2^{r-p}}{2^r}-\frac{m2^{r-q}}{2^r}\cr &= \frac{n2^{r-p}-m2^{r-q}}{2^r}. \end{align*} As $n2^{r-p}-m2^{r-q} \in \mathbb{Z}$ and $r\in \mathbb{N}$ then we have $x-y\in H$. On the other hand, we have $xy=nm 2^{-(p+q)}\in H$ because $nm\in\mathbb{Z}$ and $p+q\in\mathbb{N}$. Hence $H$ is a subring of $\mathbb{R},$ then it is a ring for the usual laws.

Let us show that $H$ is not a field. In fact, we have $3=6\times 2^{-1}\in H$, but $\frac{1}{3}\notin H$. Indeed, if there existed $(n,p)\in\mathbb{Z}\times \mathbb{N}$ such that $\frac{1}{3}=n2^{-p},$ then $2^p=3n$, which means that $3|2^p$, absurd.
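The membership test used above (a reduced fraction is dyadic exactly when its denominator is a power of $2$) is easy to check in Python with the standard `fractions` module; the sample values are arbitrary:

```python
from fractions import Fraction

def is_dyadic(x: Fraction) -> bool:
    # x = n / 2^p iff the reduced denominator is a power of two
    d = x.denominator
    while d % 2 == 0:
        d //= 2
    return d == 1

x = Fraction(5, 8)      # 5 * 2^{-3}
y = Fraction(-3, 4)     # -3 * 2^{-2}
assert is_dyadic(x - y) and is_dyadic(x * y)    # subring operations stay in H
assert is_dyadic(Fraction(3)) and not is_dyadic(Fraction(1, 3))   # 3 in H, 1/3 not
```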

Exercise: Prove that the set \begin{align*} K=\{x+y\sqrt{2}:(x,y)\in\mathbb{Q}^2\} \end{align*} is a field with respect to the usual laws.

Solution: It suffices to show that $K$ is a subfield of $\mathbb{R}$. Observe that $1=1+0\sqrt{2}\in K$. Let $(x,y)\in K^2$. There exist $(p,q,r,s)\in\mathbb{Q}^4$ such that \begin{align*} x=p+q\sqrt{2}\quad\text{and}\quad y=r+s\sqrt{2}. \end{align*} Then \begin{align*} x-y&=(p-r)+(q-s)\sqrt{2}\in K. \end{align*} On the other hand, \begin{align*} xy&=(p+q\sqrt{2})(r+s\sqrt{2})\cr & =(pr+2qs)+(qr+ps)\sqrt{2}\in K. \end{align*} Now if $x\neq 0,$ we have $p-q\sqrt{2}\neq 0$ because $\sqrt{2}\notin \mathbb{Q}$. We can write \begin{align*} x^{-1}&=\frac{p-q\sqrt{2}}{x(p-q\sqrt{2})}\cr &= \frac{p-q\sqrt{2}}{(p+q\sqrt{2})(p-q\sqrt{2})}\cr & = \frac{p-q\sqrt{2}}{p^2-2q^2}\cr & =\frac{p}{p^2-2q^2}-\frac{q}{p^2-2q^2}\sqrt{2}\in K. \end{align*} This shows that $K$ is a subfield of $\mathbb{R}$, hence it is a field.
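The inversion formula above can be tested in Python by representing $p+q\sqrt{2}$ as the pair $(p,q)$ of rationals (a sketch; the sample element is an arbitrary choice):

```python
from fractions import Fraction

# Represent p + q*sqrt(2) as the pair (p, q) of rationals.
def mul(u, v):
    p, q = u
    r, s = v
    return (p*r + 2*q*s, q*r + p*s)

def inv(u):
    p, q = u
    d = p*p - 2*q*q     # nonzero for u != 0 since sqrt(2) is irrational
    return (p / d, -q / d)

x = (Fraction(3), Fraction(-5))     # the element 3 - 5*sqrt(2)
assert mul(x, inv(x)) == (Fraction(1), Fraction(0))   # x * x^{-1} = 1
```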

Here come other exercises of rings and fields, in particular, ideals.

Exercise: Let $(\mathcal{R},+,\times)$ be a commutative ring. An ideal $I$ of $\mathcal{R}$ is called a prime ideal if $I$ is not the whole ring $\mathcal{R}$ and for any $x,y\in \mathcal{R}$ such that $xy\in I$ we have $x\in I$ or $y\in I$.

  • Let $d\in \mathbb{Z}$. Prove that $d$ is prime if and only if for all $(x,y)\in\mathbb{Z}^2$, $d|(xy)$ implies that $d|x$ or $d|y$.
  • Determine the prime ideals of $\mathbb{Z}$.
  • Determine the prime ideal of $\mathbb{K}[X],$ where $\mathbb{K}$ is a subfield of $\mathbb{C}$.
  • Let $J$ and $K$ be two ideals of $\mathcal{R}$, and $I$ is a prime ideal such that $J\cap K=I$. Prove that $J=I$ or $K=I$.
  • Assume that any ideal of $\mathcal{R}$ is prime. Prove that $\mathcal{R}$ is an integral domain, then prove that $\mathcal{R}$ is a field.

Solution: 1) Assume that $d$ is prime; then for all $(x,y)\in\mathbb{Z}^2,$ $d|(xy)$ implies that $d|x$ or $d|y$ (this is Euclid's lemma). Conversely, assume that for all $(x,y)\in\mathbb{Z}^2,$ $d|(xy)$ implies that $d|x$ or $d|y$, and suppose that $\ell$ is a strict divisor of $d$. We take $x=\ell$ and $y=\frac{d}{\ell}$. Then $d|(xy)$ but $d$ divides neither $x$ nor $y$, absurd. So $d$ does not have a strict divisor. Hence $d$ is prime.

2) It is well known that the ideals of $\mathbb{Z}$ have the form $d\mathbb{Z}$ with $d\in\mathbb{N}$. Moreover, $x\in d\mathbb{Z}$ if and only if $d|x$. So $d\mathbb{Z}$ is prime if and only if for all $(x,y)\in\mathbb{Z}^2$, $d|(xy)$ implies that $d|x$ or $d|y$. By the first question, this is equivalent to $d$ being a prime number (the zero ideal $\{0\}$ is also prime, since $\mathbb{Z}$ is an integral domain).
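One can confirm this equivalence on small values by brute force in Python (the search window `bound` is an arbitrary choice):

```python
def ideal_is_prime(d, bound=30):
    # dZ is prime iff xy in dZ implies x in dZ or y in dZ,
    # checked here on a finite window of integers.
    return all(x % d == 0 or y % d == 0
               for x in range(-bound, bound) for y in range(-bound, bound)
               if (x * y) % d == 0)

def is_prime_number(d):
    return d >= 2 and all(d % k for k in range(2, d))

for d in range(2, 20):
    assert ideal_is_prime(d) == is_prime_number(d)
```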

3) We know that the ideals of $\mathbb{K}[X]$ are of the form $P\mathbb{K}[X]$ with $P\in \mathbb{K}[X]$. By analogy with the previous question, it seems that $P\mathbb{K}[X]$ is prime if and only if $P$ is irreducible. In fact, if $P$ is irreducible, then for $(Q,R)\in \mathbb{K}[X]^2$ such that $QR\in P\mathbb{K}[X]$, we have $P|QR$. So $P$ appears in the irreducible factor decomposition of $QR$, which is obtained from the product of the decomposition of $Q$ and that of $R$. Hence $P$ appears in the irreducible factor decomposition of $Q$ or in that of $R$. This implies that $P|Q$ or $P|R$. This means that $Q\in P\mathbb{K}[X]$ or $R\in P\mathbb{K}[X]$, so $P\mathbb{K}[X]$ is prime. Conversely, let $P\in \mathbb{K}[X]$ such that $P\mathbb{K}[X]$ is prime. Assume that $P$ is not irreducible. Thus $P$ has a strict divisor $Q\in \mathbb{K}[X]$. Now let $R\in \mathbb{K}[X]$ such that $P=QR$; we then have $QR\in P\mathbb{K}[X]$. But $P$ divides neither $Q$ nor $R$, so $Q\notin P\mathbb{K}[X]$ and $R\notin P\mathbb{K}[X]$, absurd. Finally, the prime ideals of $\mathbb{K}[X]$ are the $P\mathbb{K}[X]$ with $P$ irreducible.

4) As $J\cap K=I$ we have $I\subset J$ and $I\subset K$. Assume that $J\neq I$ and let $x\in J\setminus I$. For $y\in K$ we have $xy\in J\cap K$, because $J$ and $K$ are absorbent, so $xy\in I$. As $x\notin I,$ and $I$ is prime, we have $y\in I$. This implies that $K\subset I,$ and then $I=K$.

5) Let $(a,b)\in\mathcal{R}^2$ such that $ab=0$. Then $ab\in\{0\},$ which is an ideal, and hence prime by hypothesis. This implies that $a\in\{0\}$ or $b\in\{0\}$. This means that $a=0$ or $b=0$. So $\mathcal{R}$ is an integral domain.

Let $a\in \mathcal{R}\setminus\{0\}$ and $I=a^2\mathcal{R}$. As $I$ is prime and $aa\in I,$ then $a\in I$. Then there exists $y\in \mathcal{R}$ such that $a=a^2 y$, which implies $a(1-ay)=0$. As $\mathcal{R}$ is an integral domain, we have $1-ay=0$. Then $ay=1$. Now as $\mathcal{R}$ is commutative, $a$ is invertible. We have proved that every nonzero element of $\mathcal{R}$ is invertible. Then $\mathcal{R}$ is a field.

Maximal solution to Cauchy problems


The maximal solution to a Cauchy problem is a solution that gives full information on the physical model. In fact, we obtain this solution by extending local solutions. In this post, we give theorems that establish the existence and uniqueness of the maximal solution.

What is the maximal solution to a Cauchy problem?

Let $f: I\times \Omega\to \mathbb{R}^d$ be a continuous function, where $I$ is an interval of $\mathbb{R}$ and $\Omega$ an open set of $\mathbb{R}^d$. Moreover, let $(t_0,x_0)\in I\times \Omega$ and consider the Cauchy problem\begin{align*} (CP)\quad\begin{cases}\dot{u}(t)=f(t,u(t)),& t\in I,\cr u(t_0)=x_0.\end{cases}\end{align*}

A maximal solution of the Cauchy problem (CP) is a solution that cannot be extended to another solution.

Peano’s theorem gives the existence of the maximal solution under the continuity assumption of the vector field $f$ on $I\times \Omega$. On the other hand, if in addition, we assume that $f$ is locally Lipschitz with respect to its second variable, then the Cauchy problem admits a unique maximal solution. This result is called the Cauchy-Lipschitz theorem.

We recall that a function $f$ is locally Lipschitz with respect to its second variable, if for any $(t_0,x_0)\in I\times \Omega$ there exists a neighborhood $V_{t_0,x_0}$ of $(t_0,x_0) $ and a constant $C>0$ such that for any $(t,x)$ and $(s,y)$ in $V_{t_0,x_0}$ we have \begin{align*}\|f(t,x)-f(t,y)\|\le C \|x-y\|.\end{align*}

We mention that the maximal solution is necessarily defined on an open interval of the form $(\alpha,\beta)$.

The global solutions to Cauchy problems

A solution $u: J\to \Omega$ of the Cauchy problem (CP) is called global if $J=I$. It is then important to look for conditions that guarantee the existence of global solutions. In fact, there exists a nice theorem, called the explosion theorem, that gives these conditions.

Assume that $I=(a,b)$, where $a$ may be $-\infty$ and $b$ may be $+\infty$. In addition, we suppose that $\Omega=\mathbb{R}^d$. Let $u:(\alpha,\beta)\to\mathbb{R}^d$ be a maximal solution. Then we have the following cases:

  • $\beta=b$; if not, then $|u(t)|\to+\infty$ as $t\to \beta$. This means that if the solution is not global to the right, then it is unbounded in a neighborhood of $\beta$.
  • $\alpha=a$; if not, then $|u(t)|\to+\infty$ as $t\to \alpha$. This means that if the solution is not global to the left, then it is unbounded in a neighborhood of $\alpha$.

Algebra questions with answers


In this post, we give selected algebra questions with answers. The exercises focus on group theory and arithmetic. In fact, these types of algebra exercises are classic and students need to learn them.

A selection of algebra questions with answers

In this section, we shall give various algebra questions with answers.

An algebraic equation in a quotient set

Let $p$ be an odd prime number and let $(a,b,c)\in\mathbb{Z}^3$ such that $a\notin p\mathbb{Z}$. This means that the class of $a$ denoted by $\overline{a}$ satisfies  $\overline{a}\neq \overline{0}$.

  • Show that the equation $\overline{a}x^2+\overline{b}x+\overline{c}=\overline{0}$ has solutions in $\mathbb{Z}/p\mathbb{Z}$ if and only if $\Delta=\overline{b^2-4ac}$ is a square in $\mathbb{Z}/p\mathbb{Z}$. In fact, let $x\in \mathbb{Z}/p\mathbb{Z}$. As $\mathbb{Z}/p\mathbb{Z}$ is a field, $\overline{a}\neq \overline{0}$, and $\overline{2}\neq \overline{0}$ because $p$ is an odd prime, we deduce that $\overline{2}$ and $\overline{a}$ are invertible. We can then write \begin{align*} \overline{a}x^2+\overline{b}x+\overline{c}&= \overline{a}\left( x^2+ \overline{2} (\overline{2}^{-1}\overline{a}^{-1}\overline{b})x\right)+\overline{c}. \end{align*} We select \begin{align*} \overline{d}:=\overline{2}^{-1}\overline{a}^{-1}\overline{b}. \end{align*} Then \begin{align*} \overline{a}x^2+\overline{b}x+\overline{c}&= \overline{a}\left( x^2+ \overline{2} \overline{d} x\right)+\overline{c}\cr &= \overline{a}\left( x+ \overline{d}\right)^2+ \overline{c}- \overline{a}\times \overline{d}^2. \end{align*} Then $x$ is a solution of the equation if and only if \begin{align*} \overline{a}\left( x+ \overline{d}\right)^2&=\overline{a}\, \overline{d}^2-\overline{c}\cr &= \overline{4a}^{-1}\times \overline{b}^2-\overline{c}\cr &= \overline{4a}^{-1}\,\Delta, \end{align*} that is, if and only if \begin{align*} \Delta= \left(\overline{2a}(x+\overline{d})\right)^2. \end{align*} Finally, the equation has solutions if and only if $\Delta$ is a square in $\mathbb{Z}/p\mathbb{Z}$. Remark that whether $\Delta$ is a square in $\mathbb{Z}/p\mathbb{Z}$ or not has nothing to do with the sign of an integer representing $\Delta$. For example, $-1\equiv 2^2\,[5]$ is a square in $\mathbb{Z}/5\mathbb{Z},$ but $3$ is not.
  • How many solutions are there? Determine their expressions in terms of $\Delta$. In fact, from the previous question, solutions exist if and only if $\Delta=\delta^2$ with $\delta\in \mathbb{Z}/p\mathbb{Z}$. We discuss two cases. First, if $\delta=\overline{0},$ then \begin{align*} \left(\overline{2a}(x+\overline{d})\right)^2=0, \end{align*} so that \begin{align*} x=-\overline{d}=- \overline{2a}^{-1}\times \overline{b}. \end{align*} Second, if $\delta\neq \overline{0},$ it follows that $\overline{2a}(x+\overline{d})=\pm \delta$ and hence \begin{align*} x= \overline{2a}^{-1}(-\overline{b}\pm \delta). \end{align*}
  • Solve in $\mathbb{Z}/7\mathbb{Z}$ the following equations \begin{align*} x^2+\overline{5}x+\overline{1}=\overline{0},\quad x^2+\overline{2}x+\overline{4}=\overline{0}. \end{align*} In fact, let us consider the equation $x^2+\overline{5}x+\overline{1}=\overline{0}$. In this case we have $\Delta=\overline{25}-\overline{4}=\overline{3\times 7}=\overline{0}$ in $\mathbb{Z}/7\mathbb{Z}$. According to question 2, the unique solution is $x=-\overline{2}^{-1}\times \overline{5}$. But if we write $5=7-2,$ then $\overline{5}=-\overline{2}$ in $\mathbb{Z}/7\mathbb{Z}$. Thus the solution is $x=-\overline{2}^{-1}\times (-\overline{2})=\overline{1}$. For the second equation $x^2+\overline{2}x+\overline{4}=\overline{0},$ we have $\Delta=\overline{4}-\overline{16}=-\overline{12}=\overline{2}$ in $\mathbb{Z}/7\mathbb{Z}$. Observe that $\overline{9}=\overline{2}$ in $\mathbb{Z}/7\mathbb{Z}$. Then $\Delta=\overline{3}^2$. Thus from question 2, the solutions are $x=\overline{2}^{-1}\times (-\overline{2}\pm \overline{3})$; this means that $x=\overline{2}^{-1}$ or $x=\overline{2}^{-1} \times (-\overline{5})$. As $\overline{2}^{-1}=\overline{4}$ and $-\overline{5}=\overline{2}$ in $\mathbb{Z}/7\mathbb{Z}$, we obtain \begin{align*} x=\overline{4}\quad\text{or}\quad x=\overline{1}. \end{align*}
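A brute-force check in Python confirms both solution sets in $\mathbb{Z}/7\mathbb{Z}$:

```python
p = 7

def solve(a, b, c):
    # all residues x with a*x^2 + b*x + c = 0 in Z/pZ
    return sorted(x for x in range(p) if (a*x*x + b*x + c) % p == 0)

assert solve(1, 5, 1) == [1]       # x^2 + 5x + 1: Delta = 0, double root x = 1
assert solve(1, 2, 4) == [1, 4]    # x^2 + 2x + 4: Delta = 2 = 3^2, roots 1 and 4
```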

Selected algebra exercises on finite group

Let $G$ be a finite abelian group.

  • Let $x$ and $y$ be elements of $G$ of orders $p$ and $q$, respectively. Show that if ${\rm gcd}(p,q)=1$ then the order of the element $xy$ is $pq$. In fact, as $G$ is abelian, we have \begin{align*} (xy)^{pq}=x^{pq} y^{pq}= (x^p)^q(y^q)^p=e. \end{align*} Now if we denote by $d$ the order of $xy,$ then $d|pq$. Let us now prove that $pq|d$. In fact, we have $(xy)^d=e$. Using the fact that $G$ is abelian, we obtain $(xy)^{dq}=e$ and then $x^{dq}y^{dq}=e$. As $y^{dq}=e$, then $x^{dq}=e$. As $p$ is the order of $x,$ $p|dq$. But ${\rm gcd}(p,q)=1$, so that $p|d$. Using a similar argument and the fact that $(xy)^{dp}=e$, we obtain $q|d$. This implies that $pq|d$ because ${\rm gcd}(p,q)=1$. Hence $d=pq$.
  • Deduce that there exists $x\in G$ whose order equals the lowest common multiple $m$ of the orders of the elements of $G$. In fact, write $m=p_1^{\alpha_1}\cdots p_s^{\alpha_s}$ as a product of prime powers, and let us prove that for any $i=1,\cdots,s,$ there exists an element $x_i$ of $G$ of order $p_{i}^{\alpha_i}$. Let $i\in \{1,\cdots,s\}$ and denote by $A$ the set of all orders of elements of $G$. As $m$ is the lowest common multiple of the elements of $A$, we have $\alpha_i=\nu_{p_i}(m)=\max_{d\in A}\nu_{p_i}(d),$ where $\nu_{p_i}$ is the $p_i$-adic valuation. Hence there exists $y\in G$ whose order $d$ satisfies $\nu_{p_i}(d)=\alpha_i$. So $p_{i}^{\alpha_i}|d$ and there exists $\ell\in\mathbb{N}$ such that $d=p_{i}^{\alpha_i}\ell$. This implies that the element $x_i=y^{\ell}$ is of order $p_{i}^{\alpha_i}$. We now select \begin{align*} x=x_1\cdots x_s. \end{align*} By the previous question and an induction argument, one can see that $x$ is of order $m$.
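The first claim can be illustrated in the additive group $\mathbb{Z}/105\mathbb{Z}$, where the order of an element $a$ is $105/\gcd(a,105)$ (the sample elements below are arbitrary choices):

```python
from math import gcd

N = 105                      # the abelian group (Z/105Z, +)

def order(a):
    # order of a in (Z/NZ, +) is N / gcd(a, N)
    return N // gcd(a, N)

x, y = 21, 15                # orders 5 and 7, and gcd(5, 7) = 1
assert order(x) == 5 and order(y) == 7
assert order((x + y) % N) == 35     # the "product" x+y has order 5 * 7
```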

This subject goes beyond the scope of a first course in group theory.

Delay equations in Banach spaces


The delay equations are an important class of differential equations. In fact, these equations are studied by both mathematicians and engineers because most systems are affected by a delay.

Imagine a basketball game transmission from Italy to the United States. Sometimes a lag between the voice and the television images occurs. Indeed, the voice arrives before the image, so there is a delay for the image to coincide with the voice.

In the following, we will see that the translation operator plays a key role in the existence of solutions to delay equations.

Functions of bounded variations

Let $X$ be a Banach space, and denote by $\mathcal{L}(X)$ the Banach algebra of all linear bounded operators from $X$ to $X,$ endowed with uniform topology.

A function $\mu:[-r,0]\to \mathcal{L}(X)$ is called a function of bounded variation if \begin{align*}|\mu|([-r,0])=\sup\left\{\sum_{i=1}^N \|\mu(\tau_i)-\mu(\tau_{i-1})\|, 0=\tau_0>\tau_1>\cdots>\tau_N=-r\right\}\end{align*} is finite. We note that $|\mu|$ induces a Borel measure on $[-r,0]$.

For a continuous function $f:[-r,0]\to X,$ we define the following Riemann-Stieltjes integral \begin{align*}L f=\int^0_{-r}d\mu(\theta)f(\theta).\end{align*} We note that \begin{align*}\|Lf\|&\le \int^0_{-r}\|f(\theta)\|d|\mu|( \theta )\cr & \le \gamma \|f\|_\infty, \end{align*} where $\gamma:= |\mu|([-r,0]) $ and $\|f\|_\infty:=\sup_{\theta\in [-r,0]}\|f(\theta)\|$.

The heat delay equations

Here we consider a classical example of delay equations. We select $X=L^2([0,L])$ and define the operator \begin{align*} Au=u'',\quad D(A)=\{u\in W^{2,2}([0,L]):u(0)=u(L)=0\}.\end{align*} We recall from semigroup theory (the Lumer-Phillips theorem) that $(A,D(A))$ generates a strongly continuous semigroup $(T(t))_{t\ge 0}$ on $X$. That is \begin{align*} & T(t)\in\mathcal{L}(X),\quad T(0)=Id,\cr & T(t+s)=T(t)T(s),\qquad \forall t,s\ge 0,\cr & \lim_{t\to 0}\|T(t)f-f\|=0,\qquad \forall f\in X.\end{align*} Moreover, $f\in D(A)$ if and only if the limit \begin{align*}Af=\lim_{t\to 0} \frac{T(t)f-f}{t}\end{align*}exists.

Consider the delayed heat equation \begin{align*}\tag{Eq} \begin{cases}\dot{u}(t)=Au(t)+\displaystyle\int^0_{-r}d\mu(\theta)u(t+\theta),& t\ge 0,\cr u(0)=h,\cr u(t)=f(t),& -r\le t\le 0.\end{cases}\end{align*} Here $h\in X$ and $f\in L^2([-r,0],X)$.

Exercises on polynomials


We offer exercises on polynomials with detailed proofs. In fact, our goal is to show the student how to calculate the greatest common divisor of two polynomials, and how to use the Bezout relation between polynomials.

Definition and properties of a polynomial 

In the sequel,  $\mathbb{K}$ is a commutative ring.

A polynomial $P$ is an infinite sequence $(a_n)_{n\ge 0}$ such that there exists an integer $p\ge 0$ with $a_k=0$ for any $k\ge p+1$. For $P$ nonzero, the largest integer $p$ such that $a_p\neq 0$ is called the degree of $P$ and will be denoted by ${\rm deg}(P)$.

Let $p$ be the degree of $P$; we can write \begin{align*} P=a_0(1,0,\cdots)+a_1(0,1,0,\cdots)+\cdots+a_p (0,\cdots,0,1,0,\cdots).\end{align*} If we select \begin{align*} 1=X^0=(1,0,\cdots),\;X=(0,1,0,\cdots),\; \cdots,X^p=(0,\cdots,0,1,0,\cdots),\end{align*} then the polynomial $P$ takes the form \begin{align*}P=a_0+a_1 X+\cdots+a_p X^p.\end{align*} The set of all polynomials with coefficients in $\mathbb{K}$ is denoted by $\mathbb{K}[X]$ and the set of all polynomials of degree less than or equal to $p$ is denoted by $\mathbb{K}_p[X]$.

Addition of polynomials: Let $P=(a_n)_n$ and $Q=(b_n)_n$ be two polynomials. Then $P+Q$ is a polynomial with coefficients $(a_n+b_n)_n$.

Scalar multiplication of a polynomial: Let $P=(a_n)_n$ be a polynomial with coefficients in $\mathbb{K},$ and $\lambda\in\mathbb{K}$ be a scalar. Then $\lambda P$ is a polynomial with coefficients $(\lambda a_n)_n$.

Product of polynomials: Take two polynomials $P=(a_n)_n$ and $Q=(b_n)_n$ then $PQ=(c_n)_n$ is a polynomial with \begin{align*} c_n=\sum_{k=0}^n a_k b_{n-k}.\end{align*}
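These three operations are straightforward to implement with coefficient lists; here is a minimal Python sketch (coefficients are listed from the constant term up, an arbitrary convention):

```python
def poly_add(P, Q):
    n = max(len(P), len(Q))
    P, Q = P + [0]*(n - len(P)), Q + [0]*(n - len(Q))
    return [a + b for a, b in zip(P, Q)]

def poly_mul(P, Q):
    C = [0] * (len(P) + len(Q) - 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            C[i + j] += a * b     # c_n = sum_k a_k * b_{n-k}
    return C

# (1 + X) * (1 - X) = 1 - X^2
assert poly_mul([1, 1], [1, -1]) == [1, 0, -1]
assert poly_add([1, 2], [0, 0, 3]) == [1, 2, 3]
```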

Proposition: $(\mathbb{K}[X],+,\times)$ is a commutative ring.

A selection of exercises on polynomials

Exercise: Determine the greatest common divisor (gcd) and a Bezout relation between the polynomials $P=X^3+X^2+X-3$ and $Q=X^2-3X+2$.

Solution: Here we shall apply Euclid’s algorithm: we select $P_0=P$ and $P_1=Q$. Moreover, we put $U_0=1,\,U_1=0$ and $V_0=0,\,V_1=1$ in order to have $PU_0+QV_0=P_0$ and $PU_1+QV_1=P_1$. The Euclidean division of $P_0$ by $P_1$ is \begin{align*} P_0=P_1(X+4)+(11 X-11). \end{align*} We set $P_2=11 X-11$. Then \begin{align*} P_2&=P_0-P_1(X+4)\cr &= PU_0+Q V_0-(PU_1+QV_1)(X+4)\cr &= P(U_0-(X+4)U_1)+Q(V_0-(X+4)V_1)\cr &= P U_2+Q V_2, \end{align*} where \begin{align*} U_2=U_0-(X+4)U_1=1,\quad V_2=V_0-(X+4)V_1=-X-4. \end{align*} The Euclidean division of $P_1$ by $P_2$ is \begin{align*} P_1=P_2 \left(\frac{1}{11}X-\frac{2}{11}\right)+0 \end{align*} The GCD of $P$ and $Q$ is \begin{align*} \frac{1}{11}P_2=X-1. \end{align*} The Bezout relation is then \begin{align*} \frac{1}{11} P-\frac{1}{11}(X+4) Q=X-1. \end{align*}
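Euclid's algorithm above can be replayed in Python with exact rational arithmetic; this is a sketch using a hypothetical helper `polydiv`, with coefficients listed from the constant term up:

```python
from fractions import Fraction

def trim(P):
    while len(P) > 1 and P[-1] == 0:
        P = P[:-1]
    return P

def polydiv(A, B):
    # Euclidean division A = B*Q + R with deg R < deg B
    A = [Fraction(c) for c in trim(A)]
    B = [Fraction(c) for c in trim(B)]
    Q = [Fraction(0)] * max(len(A) - len(B) + 1, 1)
    while len(trim(A)) >= len(B) and trim(A) != [Fraction(0)]:
        A = trim(A)
        d = len(A) - len(B)
        c = A[-1] / B[-1]
        Q[d] = c
        for i, b in enumerate(B):
            A[i + d] -= c * b
    return trim(Q), trim(A)

P = [-3, 1, 1, 1]     # X^3 + X^2 + X - 3
Q = [2, -3, 1]        # X^2 - 3X + 2

q1, r1 = polydiv(P, Q)
assert q1 == [4, 1] and r1 == [-11, 11]    # P = Q*(X+4) + (11X - 11)
q2, r2 = polydiv(Q, r1)
assert r2 == [0]                           # exact division: gcd is X - 1 up to scaling
```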

Exercise: Is the polynomial $P=X^4+1$ irreducible in $\mathbb{C}[X]$? In $\mathbb{R}[X]$? In $\mathbb{Q}[X]$?

Solution: It is irreducible neither in $\mathbb{C}[X]$ nor in $\mathbb{R}[X]$: the irreducible polynomials of $\mathbb{C}[X]$ have degree $1$, those of $\mathbb{R}[X]$ have degree $1$ or $2$, while ${\rm deg}(P)=4$.

Assume that $P$ is reducible in $\mathbb{Q}[X]$. Since $P$ has no rational roots (indeed, no real roots at all), it cannot admit a divisor of degree $1$. Then $P$ is the product of two polynomials of degree $2$, and we can assume that \begin{align*} P=(X^2+aX+b)(X^2+\alpha X+\beta)\end{align*} with $(a,b,\alpha,\beta)\in\mathbb{Q}^4$. By identification of coefficients, we obtain \begin{align*} a+\alpha=0,\quad a\alpha+b+\beta=0,\quad a\beta+\alpha b=0,\quad b\beta=1. \end{align*}

From this, we deduce that $b$ and $\beta$ are nonzero and have the same sign, so that $b+\beta\neq 0$. The second equation then gives $a^2=b+\beta\neq 0$, so $\alpha=-a\neq 0$, and the third equation gives $b=\beta$. Finally, $b=\beta=\pm 1,$ and hence $a^2=\pm 2$, which is not possible for $a\in \mathbb{Q}$. Thus $P$ is irreducible in $\mathbb{Q}[X]$.

How to calculate integrals?


We show you how to calculate integrals using elementary methods. In addition, we teach you to study the properties of functions defined by an integral.

It is very important to be able to calculate an integral easily, because integrals appear in the study of differential equations, another important subject of mathematical analysis.

Exercises on how to calculate integrals

We propose several exercises with detailed solutions to teach you how to calculate integrals.

Exercise: Determine the value of the following integrals \begin{align*} I=\int^{\frac{\pi}{2}}_0 e^{2x}\cos(x)dx,\qquad J=\int^{\frac{\pi}{2}}_0 \frac{\sin(\theta)}{2+\cos(\theta)}d\theta. \end{align*}

Solution: To compute $I$ we shall use the integration by parts method. In fact, we can write \begin{align*} I&= \int^{\frac{\pi}{2}}_0 \left(\frac{e^{2x}}{2}\right)'\cos(x)dx\cr &= \left[ \frac{e^{2x}}{2} \cos(x)\right]^{\frac{\pi}{2}}_0- \int^{\frac{\pi}{2}}_0 \frac{e^{2x}}{2}\cos'(x)dx\cr & = -\frac{1}{2}+ \frac{1}{2} \int^{\frac{\pi}{2}}_0 e^{2x} \sin(x)dx\cr & = -\frac{1}{2}+ \frac{1}{2}\left( \left[ \frac{e^{2x}}{2} \sin(x)\right]^{\frac{\pi}{2}}_0-\int^{\frac{\pi}{2}}_0 \frac{e^{2x}}{2} \sin'(x)dx\right)\cr &= -\frac{1}{2}+\frac{e^{\pi}}{4}- \frac{1}{4} \int^{\frac{\pi}{2}}_0 e^{2x}\cos(x)dx\cr &= -\frac{1}{2}+\frac{e^{\pi}}{4}- \frac{1}{4} I. \end{align*} We deduce that \begin{align*} I+ \frac{1}{4} I= \frac{e^{\pi}}{4}-\frac{1}{2}. \end{align*} Finally, \begin{align*} I=\frac{e^\pi-2}{5}. \end{align*}

To calculate $J$ we will use the change of variables technique. We put $t=\cos(\theta)$. We then have $dt=-\sin(\theta)d\theta$. Then \begin{align*} J&=\int^{\cos(\frac{\pi}{2})}_{\cos(0)} \frac{-dt}{2+t}\cr &= -\int^0_1 \frac{dt}{2+t}\cr &= \int^1_0 \frac{dt}{2+t}\cr &= \left[\ln(2+t)\right]^1_0\cr & = \ln(3)-\ln(2)=\ln\left(\frac{3}{2}\right). \end{align*}
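Both closed forms can be cross-checked numerically, for instance with a composite Simpson rule (the step count `n` is an arbitrary choice):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] with an even number n of subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s * h / 3

I = simpson(lambda x: math.exp(2*x) * math.cos(x), 0.0, math.pi/2)
J = simpson(lambda t: math.sin(t) / (2 + math.cos(t)), 0.0, math.pi/2)

assert abs(I - (math.exp(math.pi) - 2) / 5) < 1e-8
assert abs(J - math.log(3/2)) < 1e-8
```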

Exercise: Let us consider the function \begin{align*} g(x)=\int^x_{\frac{1}{x}} \frac{\ln(t)}{t}dt. \end{align*}

  • Determine the domain of definition $D_g$ of $g$.
  • Prove that $g$ is differentiable on $D_g$ and compute $g'(x)$ for any $x\in D_g$.
  • Deduce that $g$ is the null function.

Solution: 1) We define the function \begin{align*} f(t)=\frac{\ln(t)}{t}. \end{align*} Clearly, the function $f$ is only defined on $(0,+\infty)$. From the expression of $g$ we then conclude that $g(x)$ is well defined if and only if $x\in (0,+\infty)$. Hence the domain of definition of $g$ is $D_g=(0,+\infty)$.

2) Denote by $F$ a primitive of $f$; such $F$ exists because the function $f$ is continuous on $(0,+\infty)$. We select \begin{align*} F(x)=\int^x_c f(t)dt \end{align*} for some constant $c>0$. The function $F$ is differentiable on $(0,+\infty)$ and $F'(x)=f(x)$ for all $x\in (0,+\infty)$. On the other hand, we can write \begin{align*} g(x)&=\int^x_c f(t)dt+\int^c_{\frac{1}{x}}f(t)dt\cr &= F(x)-F\left(\frac{1}{x}\right) \end{align*} for all $x\in (0,+\infty)$. Hence $g$ is differentiable on $(0,+\infty)$ as a composition and sum of differentiable functions. Moreover, for all $x>0,$ \begin{align*} g'(x)&=F'(x)-\left(F\left(\frac{1}{x}\right)\right)'\cr &= f(x)- \left(\frac{1}{x}\right)' F'\left(\frac{1}{x}\right)\cr &= \frac{\ln(x)}{x}+\frac{1}{x^2} f\left(\frac{1}{x}\right)\cr &= \frac{\ln(x)}{x}-\frac{\ln(x)}{x}=0. \end{align*}

3) As the derivative of $g$ on $(0,+\infty)$ is zero, $g$ is constant on $(0,+\infty)$. But $g(1)=0$. Then $g(x)=0$ for any $x\in (0,+\infty)$.
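A quick numerical check of $g\equiv 0$, again with a composite Simpson rule (the sample points are arbitrary):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule on [a, b] with an even number n of subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s * h / 3

f = lambda t: math.log(t) / t
for x in (0.5, 2.0, 3.0):
    # g(x) = integral of f from 1/x to x should vanish for every x > 0
    assert abs(simpson(f, 1/x, x)) < 1e-7
```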

First-order differential equations


We discuss some facts about first-order differential equations for beginners. Such equations are important because many problems in our real life can be modeled as a differential equation.

We assume that the reader is familiar with the concept of continuous function primitives.

First-order differential equations with constant coefficients

In algebra, we already studied algebraic equations where a variable is a number. Here we study equations where the variable is a function.

Let $a$ be a real number and $f:\mathbb{R}\to \mathbb{R}$ be a continuous function. We look for differentiable functions $u:\mathbb{R}\to \mathbb{R}$ such that\begin{align*}\dot{u}(x)=a u(x)+f(x).\end{align*} Here we denote $\dot{u}(x)=\frac{d}{dx}u(x)$, the derivative of the function $u$.

We recall that $\frac{d}{dx}e^{-a x}=-ae^{-ax}.$ By multiplying both sides of the above differential equation by $e^{-ax}$, we obtain $ e^{-ax} \dot{u}(x)-a e^{-ax} u(x)= e^{-ax} f(x) $. Also, we write\begin{align*}\frac{d}{dx}\left( e^{-ax} u(x) \right)= e^{-ax} f(x) .\end{align*}By taking the integral between $0$ and $x$ on both sides of this equation, we get \begin{align*} e^{-ax} u(x) =u(0)+\int^x_0 e^{-as}f(s)ds.\end{align*} Now by multiplying both sides of the above equality by $e^{ax},$ we obtain \begin{align*} u(x)=e^{ax}u(0)+\int^x_0 e^{a(x-s)}f(s)ds.\end{align*}
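The variation-of-constants formula just obtained can be verified numerically: evaluate $u$ via quadrature and compare a centered difference quotient of $u$ with $au+f$ (the values of $a$, $f$, and $u(0)$ below are arbitrary choices):

```python
import math

a = -1.5
f = lambda s: math.cos(s)      # an arbitrary continuous forcing term
u0 = 2.0                       # an arbitrary initial value u(0)

def simpson(g, lo, hi, n=2000):
    h = (hi - lo) / n
    s = g(lo) + g(hi) + sum((4 if k % 2 else 2) * g(lo + k*h) for k in range(1, n))
    return s * h / 3

def u(x):
    # u(x) = e^{ax} u(0) + int_0^x e^{a(x-s)} f(s) ds
    return math.exp(a*x) * u0 + simpson(lambda s: math.exp(a*(x - s)) * f(s), 0.0, x)

x, eps = 1.3, 1e-5
du = (u(x + eps) - u(x - eps)) / (2 * eps)   # centered difference for u'(x)
assert abs(du - (a * u(x) + f(x))) < 1e-6    # u solves u' = a u + f
```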

Cauchy problem: A first-order differential equation is called a Cauchy problem if it takes the following form \begin{align*} \begin{cases} \dot{u}(t)=a u(t),& t\in\mathbb{R},\cr u(t_0)=x.\end{cases}\end{align*} Here $t_0$ is the initial time and $x$ is the initial state. The solution to this Cauchy problem is $u(t)=e^{(t-t_0)a}x$.

Equations with variable coefficients

In most cases, the coefficients of differential equations are functions. These equations take the following form \begin{align*}a(x)\dot{u}(x)+b(x)u(x)=0,\end{align*}where $a(\cdot)$ and $b(\cdot)$ are continuous functions such that $a(x)\neq 0$ for all $x$. For a nonvanishing solution $u$, this equation can be rewritten as\begin{align*} \frac{\dot{u}(x)}{u(x)}=-\frac{b(x)}{a(x)}.\end{align*} On the other hand, we recall that \begin{align*} \frac{d}{dx}\ln(|u(x)|)= \frac{\dot{u}(x)}{u(x)} .\end{align*} We then obtain \begin{align*} \ln(|u(x)|) =-\int \frac{b(x)}{a(x)} dx.\end{align*} Hence, up to a multiplicative constant, we have\begin{align*} u(x)=e^{- \displaystyle \int \frac{b(x)}{a(x)} dx}.\end{align*}

Let us consider the first example $(x-1)u'-2u=0$; here we have $a(x)=x-1$ and $b(x)=-2$. Then the solution is given by \begin{align*} u(x)= e^{\displaystyle \int \frac{2}{x-1} dx} =e^{ 2\ln(|x-1|)+C}= A e^{\ln((x-1)^2)}.\end{align*} Thus the solution of the differential equation is $u(x)=A(x-1)^2$, where $A$ is a real constant.
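One can check the result directly: with an arbitrarily chosen constant $A$, the function $u(x)=A(x-1)^2$ annihilates $(x-1)u'-2u$. A minimal Python verification (the sample points and the value of $A$ are arbitrary choices):

```python
# Check that u(x) = A (x - 1)^2 solves (x - 1) u'(x) - 2 u(x) = 0,
# using u'(x) = 2 A (x - 1); the constant A is chosen arbitrarily.
A = 3.5
for x in [-2.0, 0.5, 1.0, 4.0]:
    u = A * (x - 1) ** 2
    du = 2 * A * (x - 1)
    assert abs((x - 1) * du - 2 * u) < 1e-12
```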

Cauchy Lipschitz theorem for differential equations


In this article, we state and prove the Cauchy-Lipschitz theorem for the existence and uniqueness of solutions to nonlinear ordinary differential equations. The key proof of this theorem is the Banach-Picard fixed point theorem. We give some applications of this theorem.

Local and maximal solutions to nonlinear Cauchy problems

Throughout this section, $I$ is an interval of $\mathbb{R}$, and $\Omega$ is an open set of $\mathbb{R}^d$. In addition, let $(t_0,x_0)\in I\times \Omega$ and $f:I\times \Omega\to\mathbb{R}^d$ be a continuous function.

We look for additional conditions on $f$ under which the following Cauchy problem \begin{align*}\tag{Eq} u(t_0)=x_0,\quad \dot{u}(t)=f(t,u(t)),\quad t\in I,\end{align*}admits a solution, in a sense to be made precise.

By a solution to $({\rm Eq})$ we mean a couple $(J,u),$ where $J\subset I$ is an interval such that $t_0\in J,$ and $u:J\to \Omega$ is a $C^1$ function that satisfies $({\rm Eq})$.

On the other hand, we define an order on the set of all solutions to the Cauchy problem $({\rm Eq})$. In fact, we say that a solution $(J_2,u_2)$ extends another solution $(J_1,u_1)$ of $({\rm Eq})$ if $J_1\subset J_2$ and $u_2(t)=u_1(t)$ for any $t\in J_1$.

A maximal solution of $({\rm Eq})$ is a solution that cannot be extended by another solution defined on a strictly larger interval.

Remark: Note that $(J,u)$ is a solution of $({\rm Eq})$ if and only if it satisfies the following integral equation \begin{align*}\tag{IE} u(t)=x_0+\int^t_{t_0}f(s,u(s))ds,\quad\forall t\in J.\end{align*}

A particular version of the Cauchy Lipschitz theorem

In this section, let $\alpha>0,\;r>0$ and $(t_0,x_0)\in I\times \Omega$ such that \begin{align*} \tag{H1}Q:=[t_0-\alpha,t_0+\alpha]\times \overline{B}(x_0,r)\subset I\times \Omega,\end{align*}\begin{align*} \tag{H2} f:Q\to \mathbb{R}^d\; \text{is continuous, and }\;M=\sup_Q\|f\|<\infty,\end{align*}\begin{align*} \tag{H3} \exists C>0, \forall (t,x),(t,y)\in Q, \quad \|f(t,x)-f(t,y)\|\le C\|x-y\|.\end{align*}

Theorem: Under conditions $(H1)$ to $(H3)$, the Cauchy problem $({\rm Eq})$ admits a unique solution $(J,u)$ such that \begin{align*} J=[t_0-T,t_0+T]\quad\text{with}\quad T:=\min\left\{\alpha,\frac{r}{M}\right\},\end{align*}\begin{align*} (s,u(s))\in Q,\quad \forall s\in J.\end{align*}

Proof: We shall use the Banach-Picard fixed point theorem. The latter states that if $E$ is a Banach space and $\Phi: E\to E$ is a contraction, that is, there exists $\gamma\in (0,1)$ such that $\|\Phi(x)-\Phi(y)\|\le \gamma \|x-y\|$ for all $x,y\in E,$ then there exists a unique $u\in E$ such that $\Phi(u)=u$. In this case, we also have $\Phi^n(u)=u$ for any $n\in \mathbb{N}$, where $\Phi^n=\Phi\circ\Phi\circ\cdots\circ\Phi$ ($n$ times). Conversely, if there exist $m\in \mathbb{N}$ and a unique $u\in E$ such that $\Phi^m(u)=u$, then $u$ is a fixed point of $\Phi,$ that is, $\Phi(u)=u$.

Now we come back to the proof of the theorem. For $u\in E:=\mathcal{C}(J, \overline{B}(x_0,r) )$, we define, for any $t\in J,$ \begin{align*}\left(\Phi(u)\right)(t)=x_0+\int^t_{t_0}f(s,u(s))ds.\end{align*}Then $\Phi:E\to E$; indeed, for $t\in J,$ $\|\Phi(u)(t)-x_0\|\le M|t-t_0|\le MT\le r$. By induction on $n$, one shows that for any $t\in J,$ $n\in \mathbb{N},$ and $v,w\in E,$ \begin{align*} \|\Phi^n(v)(t)- \Phi^n(w)(t) \|\le \frac{C^n}{n!}|t-t_0|^n \|v-w\|_\infty.\end{align*} For $n$ large enough, we have \begin{align*}\gamma:= \frac{C^nT^n}{n!} < 1,\end{align*} so $\Phi^n$ is a contraction. Thus there exists a unique $u\in E= \mathcal{C}(J, \overline{B}(x_0,r) )$ such that $\Phi(u)=u$. This means that $u:J\to \overline{B}(x_0,r)$ satisfies \begin{align*} u(t)=x_0+\int^t_{t_0}f(s,u(s))ds,\qquad \forall t\in J.\end{align*}This ends the proof.
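The fixed point in the proof is the limit of the Picard iterates $u_{k+1}=\Phi(u_k)$. As an illustration (not part of the proof), the following Python sketch runs this iteration on a grid for $\dot u=u$, $u(0)=1$, whose exact solution is $e^t$; the function name `picard`, the grid size, and the iteration count are arbitrary choices:

```python
import math

def picard(f, t0, x0, T, n_steps=1000, n_iters=20):
    """Run Picard iterates u_{k+1}(t) = x0 + int_{t0}^t f(s, u_k(s)) ds
    on a uniform grid over [t0, t0 + T], using the trapezoid rule."""
    ts = [t0 + T * i / n_steps for i in range(n_steps + 1)]
    u = [x0] * (n_steps + 1)          # u_0 is the constant function x0
    for _ in range(n_iters):
        new = [x0]
        acc = 0.0
        for i in range(1, n_steps + 1):
            h = ts[i] - ts[i - 1]
            acc += 0.5 * h * (f(ts[i - 1], u[i - 1]) + f(ts[i], u[i]))
            new.append(x0 + acc)
        u = new
    return ts, u

# u' = u, u(0) = 1 on [0, 1]; the iterates converge to exp(t)
ts, u = picard(lambda t, x: x, 0.0, 1.0, 1.0)
print(abs(u[-1] - math.e))  # small error after 20 iterations
```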

You may also consult the concept of maximal solutions in detail.

Binomial coefficients


We show how binomial coefficients help simplify expressions. These coefficients are heavily used in probability calculus. On the other hand, the binomial theorem serves in computing powers of numbers.

The expression of Binomial Coefficients

For natural numbers $n,k\in\mathbb{N}$ such that $n\ge k$, we define the binomial coefficient by \begin{align*} \binom{n}{k}=\frac{n!}{k!(n-k)!}.\end{align*}

The binomial coefficients usually appear in elementary probability.

Exercise: Determine the integers $p\in\{1,2,\cdots,n-1\}$ for which the following inequality between binomial coefficients holds: \begin{align*} \binom{n}{p} < \binom{n}{p+1}.\end{align*}

Proof: To prove this, we first compute \begin{align*}\frac{\binom{n}{p}}{\binom{n}{p+1}}&= \frac{n!}{p!(n-p)!}\times \frac{(p+1)!(n-p-1)!}{n!}\cr &= (p+1)\frac{(n-p-1)!}{(n-p)!}\cr &= \frac{p+1}{n-p}.\end{align*}

We have \begin{align*} \binom{n}{p} < \binom{n}{p+1}&\;\Longleftrightarrow\; \frac{p+1}{n-p} < 1\cr & \;\Longleftrightarrow\; p+1 < n-p\cr & \;\Longleftrightarrow\; p < \frac{n-1}{2}.\end{align*}
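A quick numerical confirmation of this characterization, using Python's `math.comb` (the value $n=10$ is an arbitrary choice):

```python
from math import comb

# C(n, p) < C(n, p+1) holds exactly when p < (n - 1)/2
n = 10
for p in range(1, n):
    assert (comb(n, p) < comb(n, p + 1)) == (p < (n - 1) / 2)
```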

The Binomial theorem

The binomial theorem has many applications in algebra, calculus, and probability. For example, it enters into the definition of the Binomial distribution in probability theory. Here we show a nice application of this Theorem.

Binomial Theorem: For $a$ and $b$ real numbers, and a natural number $n,$ the binomial formula is given by \begin{align*}(a+b)^n=\sum_{k=0}^{n}\binom{n}{k}a^k b^{n-k}.\end{align*}

The proof of the above sum formula is based on the fact that the multiplication operation in the set of real numbers is commutative, that is $ab=ba$. Otherwise, this formula is not true. For example, we cannot apply the binomial theorem to the sum of two matrices unless these matrices commute with each other.

Exercise: Compute the sum\begin{align*} A_n=\binom{n}{0}+\frac{1}{2}\binom{n}{1}+\cdots+\frac{1}{n+1}\binom{n}{n}.\end{align*} Proof: Using the binomial formula, we obtain for any $x\in\mathbb{R}$ and $n\in \mathbb{N},$ \begin{align*}(1+x)^n&= \sum_{k=0}^{n}\binom{n}{k} x^k1^{n-k} \cr &= \binom{n}{0}+\binom{n}{1} x+\cdots+\binom{n}{n} x^n.\end{align*} By integrating this formula between $0$ and $1$, we obtain \begin{align*} A_n=\frac{2^{n+1}-1}{n+1}.\end{align*}
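This closed form is easy to confirm with exact rational arithmetic; a short Python check (the helper `A` and the range of $n$ are illustrative choices):

```python
from fractions import Fraction
from math import comb

# A_n = sum_{k=0}^{n} C(n,k)/(k+1) should equal (2^{n+1} - 1)/(n+1)
def A(n):
    return sum(Fraction(comb(n, k), k + 1) for k in range(n + 1))

for n in range(1, 15):
    assert A(n) == Fraction(2 ** (n + 1) - 1, n + 1)
```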

Exercise: Prove that \begin{align*} &\sum_{k=0}^{n}\binom{n}{k}=2^n,\cr & 4^n\ge \sum_{k=0}^{n} \frac{3^k}{k!}. \end{align*} Proof: For the equality, it suffices to apply the binomial theorem in the case $a=1$ and $b=1$. Then we obtain \begin{align*} 2^n=(1+1)^n= \sum_{k=0}^{n}\binom{n}{k}  1^k 1^{n-k}=\sum_{k=0}^{n}\binom{n}{k}.\end{align*} On the other hand, to prove the inequality, we use the binomial theorem in the case of $a=3$ and $b=1$. We have \begin{align*} 4^n=(3+1)^n&=\sum_{k=0}^{n}\binom{n}{k}  3^k 1^{n-k}\cr &=\sum_{k=0}^{n} \frac{3^k}{k!}\frac{n!}{(n-k)!}.\end{align*} Now the result immediately follows from the fact that $n!\ge (n-k)!$ for any $k=0,1,\cdots,n$.
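Both results can be verified numerically; a minimal Python check with exact arithmetic (the range of $n$ is an arbitrary choice):

```python
from fractions import Fraction
from math import comb, factorial

for n in range(12):
    # sum of binomial coefficients is 2^n
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
    # 4^n dominates the partial sum of 3^k / k!
    assert 4 ** n >= sum(Fraction(3 ** k, factorial(k)) for k in range(n + 1))
```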

Complex Numbers: Introduction


Complex numbers are a fundamental concept in mathematics that extends the realm of real numbers. They are composed of a real part and an imaginary part, where the imaginary part is a multiple of the imaginary unit, denoted by “i”.

What is a complex number?

Complex numbers are represented in the form $a + bi$ where “$a$” represents the real part and “$b$” represents the imaginary part. The study of complex numbers has significant applications in various branches of mathematics, physics, and engineering. This introductory article aims to provide a comprehensive overview of complex numbers, their properties, and their applications in different fields.

Modulus of complex numbers

The modulus of a complex number is defined as the distance between the origin and the point representing the complex number in the complex plane. This modulus is denoted by $|z|$, where $z$ is the complex number.

The modulus of a complex number can be expressed in terms of its real and imaginary parts. Specifically, if $z = a + bi$, where a and b are real numbers and i is the imaginary unit, then $$|z| = \sqrt{a^2 + b^2}. $$This formula is derived from the Pythagorean theorem, which states that the square of the hypotenuse of a right triangle is equal to the sum of the squares of its legs.

The modulus of a complex number has several important properties that make it a useful tool in mathematical analysis. For example, it is invariant under rotation, meaning that if a complex number is rotated by an angle $\theta$, its modulus remains unchanged. Additionally, the modulus of a product of complex numbers is equal to the product of their moduli, and the modulus of a quotient of complex numbers is equal to the quotient of their moduli.

Argument of Complex Number

The argument of a complex number is a fundamental concept in mathematics that plays a crucial role in understanding the geometric interpretation of complex numbers. It is defined as the angle between the positive real axis and the line connecting the origin to the complex number in the complex plane.

The argument of a complex number is denoted by the symbol $\arg(z)$, where $z$ represents the complex number. It is measured in radians, and its principal value lies in the interval $(-\pi, \pi]$. The argument of a complex number is unique up to an integer multiple of $2\pi$.

The argument of a complex number can be calculated using the arctangent function applied to the imaginary part of the complex number divided by its real part. In fact, if $z=a+ib$ with $a>0$, then $$ \arg(z)=\arctan\left(\frac{b}{a}\right).$$ When $a\le 0$, the formula must be adjusted by $\pm\pi$, or the case treated separately. The ratio $b/a$ compares the lengths of the legs of the right triangle formed by the complex number and the positive real axis; the arctangent then returns the angle between the positive real axis and the line connecting the origin to the complex number.

The argument of a complex number has several important properties.

  1. Firstly, the argument of the product of two complex numbers is equal to the sum of their arguments.
  2. Secondly, the argument of the quotient of two complex numbers is equal to the difference of their arguments.
These properties are analogous to the properties of exponents in real numbers.
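These modulus and argument properties can be illustrated with Python's `cmath` module (the sample numbers $z_1$ and $z_2$ are arbitrary choices):

```python
import cmath
import math

z1 = 1 + 1j          # arg = pi/4, |z1| = sqrt(2)
z2 = 2 - 1j

# |z| = sqrt(a^2 + b^2)
assert math.isclose(abs(z1), math.sqrt(1 ** 2 + 1 ** 2))

# modulus of a product is the product of the moduli
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))

# arg(z1 * z2) = arg(z1) + arg(z2) up to a multiple of 2*pi
diff = cmath.phase(z1 * z2) - (cmath.phase(z1) + cmath.phase(z2))
assert math.isclose(math.cos(diff), 1.0)
```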

Different Forms of Complex Numbers

Complex numbers are numbers that consist of a real part and an imaginary part, and they are represented in the form a + bi, where a and b are real numbers and i is the imaginary unit. There are various forms of complex numbers that are used in different mathematical contexts.

Rectangular form

One form of complex numbers is the rectangular form, which is the standard form of representing complex numbers. In this form, the real part and the imaginary part of a complex number are written as separate terms, with the real part being written first.

Polar form

Another form of complex numbers is the polar form, which represents a complex number in terms of its magnitude and argument. The magnitude is the distance from the origin to the complex number in the complex plane, while the argument is the angle between the positive real axis and the line connecting the origin to the complex number.

Exponential form

In addition to these forms, there are also other forms of complex numbers that are used in specific mathematical contexts. For example, the exponential form of a complex number is used in complex analysis and is written as $re^{i\theta}$, where $r$ is the magnitude and $\theta$ is the argument.

Trigonometric form

The trigonometric form of a complex number is another form that is used in trigonometry and is written as $r(\cos\theta + i \sin\theta)$, where $r$ is the magnitude and $\theta$ is the argument.

The Geometrical Representation of Complex Numbers

In the geometrical representation of complex numbers, the real part is represented on the horizontal axis, often referred to as the real axis, while the imaginary part is represented on the vertical axis, known as the imaginary axis. This two-dimensional coordinate system is commonly referred to as the complex plane.

The complex plane allows for the visualization of complex numbers as points in this plane. Each complex number corresponds to a unique point in the complex plane, where the real part determines the position along the horizontal axis and the imaginary part determines the position along the vertical axis.

Furthermore, the distance from the origin of the complex plane to a specific point represents the magnitude or modulus of the complex number. This magnitude can be calculated using the Pythagorean theorem, where the real and imaginary parts of the complex number form the two sides of a right triangle.

Additionally, the angle formed between the positive real axis and the line connecting the origin and the point representing the complex number is known as the argument or phase of the complex number. This argument can be determined using trigonometric functions such as sine and cosine.

The geometrical representation of complex numbers provides a powerful tool for understanding and analyzing these mathematical entities. It allows for the visualization of complex operations such as addition, subtraction, multiplication, and division, as well as the interpretation of complex numbers in terms of magnitude and phase. This representation is widely used in various branches of mathematics, physics, engineering, and other scientific disciplines.

How to determine the Square root of a complex number

In order to determine the square root of a complex number, let us consider the example of finding the square root of the complex number $\lambda=4+3i$. To do this, we need to find a complex number $z$ such that $z^2=\lambda$. We can represent $z$ as $z=a+ib$, where $a$ and $b$ are real numbers.

Expanding $z^2$, we have $z^2=a^2+2iab+(ib)^2$. Since $i^2=-1$, we can simplify this expression to $z^2=a^2-b^2+i(2ab)$. Now, let us consider the equation $a^2-b^2+i(2ab)=4+i3$. By comparing the real and imaginary parts of both sides of the equation, we obtain the following system of equations: $a^2-b^2=4$ and $2ab=3$.

To solve this system, we introduce a third equation involving $a^2$ and $b^2$. Taking the modulus of $z$, we have $|z|^2=a^2+b^2=|4+3i|=\sqrt{16+9}=5$. This equation allows us to eliminate one of the squares, either $a^2$ or $b^2$. By adding the two equations containing $a^2$ and $b^2$, we find that $2a^2=9$, which implies $a=\pm \frac{3\sqrt{2}}{2}$. Subtracting them instead gives $2b^2=5-4=1$, leading to $b=\pm \frac{\sqrt{2}}{2}$. Since $2ab=3>0$, the numbers $a$ and $b$ have the same sign. Therefore, the complex number $\lambda$ has two square roots, given by $\frac{3\sqrt{2}}{2} +i \frac{\sqrt{2}}{2}$ and $-\frac{3\sqrt{2}}{2} -i \frac{\sqrt{2}}{2}$.
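The hand computation can be cross-checked with Python's `cmath.sqrt`, which returns the principal square root (the one with positive real part); a quick check:

```python
import cmath
import math

lam = 4 + 3j
z = cmath.sqrt(lam)            # principal square root of 4 + 3i

# z is indeed a square root of lam
assert cmath.isclose(z * z, lam)
# and it matches the hand computation a = 3*sqrt(2)/2, b = sqrt(2)/2
assert math.isclose(z.real, 3 * math.sqrt(2) / 2)
assert math.isclose(z.imag, math.sqrt(2) / 2)
```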

How to solve a system of complex numbers

In this section, we will discuss the process of solving a system of complex equations. While systems of equations with real numbers are commonly solved using determinants or elimination techniques, we will focus on solving systems with complex numbers. To illustrate this, we will consider a specific system of equations: \begin{align*}z_1z_2=i,\qquad z_1-z_2=1+i,\end{align*} where $z_1$ and $z_2$ are complex numbers. Our goal is to determine the expressions for $z_1$ and $z_2$.

It is important to approach this problem with caution and avoid complex calculations. Instead of solving for $z_1$ in terms of $z_2$ and substituting the expression into the second equation, we will introduce a concise method to solve the complex system. We can rewrite the system as: \begin{align*} z_1 (-z_2)=-i,\quad z_1+(-z_2)=1+i.\end{align*} From this, we can deduce that $z_1$ and $(-z_2)$ are solutions of the equation: \begin{align*}\tag{E}t^2-(1+i)t-i=0.\end{align*}

The discriminant associated with this equation is: \begin{align*} \Delta&=(1+i)^2+4i= 2i+4i=6i\cr &= \left(\sqrt{6}\left(\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2} i\right)\right)^2.\end{align*} Hence, $\Delta$ has square roots: \begin{align*}\Delta_1=\sqrt{3}+\sqrt{3}i\quad \text{and}\quad \Delta_2=-\sqrt{3}-\sqrt{3}i.\end{align*} The roots of equation $(E)$ are: \begin{align*}z'&=\frac{1+i+\Delta_1}{2}\cr &= \frac{1+\sqrt{3}}{2}(1+i)\end{align*} and \begin{align*}z''&=\frac{1+i+\Delta_2}{2}\cr &= \frac{1-\sqrt{3}}{2}(1+i).\end{align*} Therefore, we have $z_1=z'$ and $-z_2=z''$, that is, $z_2=\frac{\sqrt{3}-1}{2}(1+i)$ (or, symmetrically, $z_1=z''$ and $z_2=-z'$).
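The whole resolution can be replayed numerically with Python's `cmath` (a sketch; the names `zp` and `zpp` mirror the roots $z'$ and $z''$ above):

```python
import cmath

# z1 and -z2 are the roots of t^2 - (1+i) t - i = 0
b, c = -(1 + 1j), -1j
d = cmath.sqrt(b * b - 4 * c)           # a square root of the discriminant 6i
zp, zpp = (-b + d) / 2, (-b - d) / 2    # the two roots z' and z''

z1, z2 = zp, -zpp
assert cmath.isclose(z1 * z2, 1j)       # z1 z2 = i
assert cmath.isclose(z1 - z2, 1 + 1j)   # z1 - z2 = 1 + i
```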

How to use mathematical induction?


We teach you how to use mathematical induction to prove algebraic properties. This technique is very useful and simple to use. We offer examples and exercises to help you understand proofs by induction.

Induction reasoning is often used to prove sequence properties.

Learn about how to use mathematical induction

In many mathematical situations, we need to prove a property $P(n)$ for every natural number $n$. In most cases, a direct approach is very difficult. To overcome this difficulty, we use induction reasoning. In fact, we first verify that the property $P(0)$, for $n=0$, is true (the base case). Then we assume that $P(n)$ is true for some $n$ (the induction hypothesis), and verify that $P(n+1)$ is also satisfied; that is all! We note that sometimes we verify $P(1)$ instead, mainly when the property $P(n)$ is not defined at $n=0$.

Example: let us show that for any $n\in\mathbb{N},$ \begin{align*}\tag{P(n)} (n+1)!\ge \sum_{k=1}^n k!. \end{align*}

For $n=1,$ we have $(1+1)!=2!\ge 1!,$ hence the property $P(1)$ is satisfied. Assume now, by induction, that $P(n)$ holds. As $n+2>2,$ then \begin{align*}\tag{1} (n+2)!=(n+2)(n+1)!\ge 2(n+1)!.\end{align*}
On the other hand, by adding $(n+1)!$ to both sides of the inequality $P(n),$ we obtain \begin{align*}\tag{2}2(n+1)!\ge \sum_{k=1}^nk!+(n+1)!=\sum_{k=1}^{n+1}k!.\end{align*}
By combining (1) and (2), we obtain \begin{align*} (n+2)!\ge \sum_{k=1}^{n+1}k!.\end{align*}
Thus $P(n+1)$ holds.
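For the first few values of $n$, the inequality $P(n)$ can also be confirmed by direct computation; a one-loop Python check (the bound 20 is an arbitrary choice):

```python
from math import factorial

# P(n): (n+1)! >= 1! + 2! + ... + n!
for n in range(1, 20):
    assert factorial(n + 1) >= sum(factorial(k) for k in range(1, n + 1))
```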

Exercises on induction reasoning

In the following exercises, we show you how to use mathematical induction to prove some known formulas and inequalities.

Exercise: Prove by induction that \begin{align*} A_n&=1+2+\cdots+n=\frac{n(n+1)}{2},\cr B_n&=1^2+2^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}.\end{align*}

Proof: We have $A_1=1=\frac{1(1+1)}{2},$ so the formula is true for $n=1$. Assume, by induction, that the expression of $A_n$ above holds, and let us determine that of $A_{n+1}$. We have \begin{align*}A_{n+1}&=1+2+\cdots+n+(n+1)=A_n+(n+1)\cr &=\frac{n(n+1)}{2}+(n+1)=\frac{n(n+1)+2(n+1)}{2}\cr &= \frac{(n+1)(n+2)}{2}=\frac{(n+1)((n+1)+1)}{2}.\end{align*} Thus the induction hypothesis is also true for $A_{n+1}$. This ends the proof.

Similarly, we have $B_1=1=\frac{1(1+1)(2\times 1+1)}{6}$. Assume, by induction, that the formula for $B_n$ holds, and let us prove it for $B_{n+1}$. We have \begin{align*} B_{n+1}&=B_n+ (n+1)^2=\frac{n(n+1)(2n+1)}{6}+(n+1)^2\cr & =\frac{n(n+1)(2n+1)+6(n+1)^2}{6}\cr &=\frac{(n+1)(n(2n+1)+6(n+1))}{6}\cr &=\frac{(n+1)(2n^2+7n+6)}{6}.\end{align*} But $(n+2)(2n+3)=2n^2+7n+6$. Thus \begin{align*} B_{n+1}&=\frac{(n+1)(n+2)(2n+3)}{6}\cr & =\frac{(n+1)((n+1)+1)(2(n+1)+1)}{6}.\end{align*} This ends the proof.
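Both closed forms can be confirmed by direct summation; a short Python check (the range of $n$ is an arbitrary choice):

```python
# direct checks of the two closed forms proved above
for n in range(1, 100):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
```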