
How to teach math to children: tips for parents


In this post, we propose some tips on how to teach math to children. It pays to teach kids math early and at home. As we will see, educational games are the most common technique, although, depending on the child's age, other methods can be used as well.

Tips on how to teach math to children: For parents 

At what age is a child able to learn math

Education experts believe that children who are not yet of school age can learn math, mainly visual mathematics like geometry and symmetry. In primary school, it is helpful to use interactive games to help children progress in mathematics. At this stage, educational games, memory games, online coloring on gaming sites, board games, card games, building games, plasticine, and mental math are the most beneficial for the child.

Teach math to children in a simple way

It is true that parents are naturally inclined to trust primary and secondary education, and they therefore assume that a qualified teacher is the only person who can contribute to their children's training. But why not help your kids with math yourself?

First of all, from time to time, we have to talk about math at home. By doing so, you will transform mathematics into a less abstract discipline. For this purpose, you can simply use educational games, logic games, and memory games. These tools allow you to review many essential topics in elementary mathematics such as algebra, geometry, multiplication, whole numbers, calculation, and addition.

An ideal approach to teaching math to your child is to use educational games. To these, you can also add online math lessons, coloring sites, and online games. Playing with them is the best way to learn while having fun. The objective is to create an environment conducive to mathematics!

Tips for teaching math to a dyslexic

What should you do if your child has dyslexia? Should you teach them math the same way? As you may know, dyslexia affects reading and writing. It can also have an impact on other areas, such as memory capacity, concentration, or organizational skills. How, then, should you teach mathematics to a dyslexic child?

With this type of child, it is very important to be patient and not to rush. It is important to repeat the same idea often so that it is finally assimilated. Alternating activities is also very important to avoid getting stuck for too long on the same thing.

There are other tips to apply when learning math, such as remembering what was done in the last lesson, using colors, graphics, drawings, diagrams, staggering homework, etc. We mention that teaching math to children may also depend on other factors.

Are boys better at math than girls?

Finally, let’s finish with a fairly sensitive subject: is there a mathematical gene in boys or girls? Some statistics show that boys choose to study math more often than girls. This in no way diminishes the abilities of girls; in fact, girls have shown themselves to be highly gifted in disciplines like biology and medicine.

The explanation comes from education and the family. Very early on, boys are often introduced to educational games related to construction, which develop geometry, algebra, and spatial understanding, while girls are more often steered toward playing house or shopkeeper.

Matrix trace worksheet


We will explore the matrix trace, its properties, and its significance in understanding the behavior of matrices. Matrices are powerful mathematical tools that allow us to organize and manipulate data efficiently, with applications ranging from physics and engineering to computer science and data analysis. One important concept associated with matrices is the trace.

We assume that the reader is familiar with matrix operations and vector spaces.

What is a matrix trace?

Throughout the following, the field $\mathbb{K}$ is either the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$. We also denote by $\mathscr{M}_n(\mathbb{K})$ the space of all square matrices of order $n$ with coefficients in $\mathbb{K}$. If $A$ is a matrix in $\mathscr{M}_n(\mathbb{K})$ with coefficients $\{a_{ij}: 1\le i,j\le n\},$ then we write $A=(a_{ij})_{1\le i,j\le n}$, or sometimes just $A=(a_{ij})$ if there is no notational ambiguity.

Definition of the trace of a matrix

The trace of a matrix $A=(a_{ij})\in\mathscr{M}_n(\mathbb{K})$ is the following scalar: \begin{align*}{\rm Tr}(A)=\sum_{i=1}^n a_{ii}.\end{align*}

Examples: 1- The trace of the identity matrix $I_n$ of order $n$ is ${\rm Tr}(I_n)=1+\cdots+1=n$.

2- Consider the matrix $$ A=\begin{pmatrix} 1&4&0\\ 2&5&1\\ 0&1&2\end{pmatrix}.$$ Then ${\rm Tr}(A)=1+5+2=8$.

Properties of the Matrix Trace

The trace possesses several interesting properties that make it a useful measure of matrices. Firstly, it is invariant under cyclic permutations of a product: for instance, ${\rm Tr}(ABC)={\rm Tr}(BCA)={\rm Tr}(CAB)$. Secondly, the trace is linear, meaning that $${\rm Tr}(\alpha A) = \alpha {\rm Tr}(A), \quad {\rm Tr}(A + B) = {\rm Tr}(A) + {\rm Tr}(B)$$ for any scalar $\alpha$ and matrices $A$ and $B$ of the same order. Moreover, the following theorem is one of the fundamental properties of the trace.

Theorem on the trace of product of two matrices

Let $A\in\mathscr{M}_{n,p}(\mathbb{K})$ and $B\in\mathscr{M}_{p,n}(\mathbb{K})$ be two matrices. Then \begin{align*} {\rm Tr}(AB)={\rm Tr}(BA). \end{align*}

Let $a_{ij},$ $b_{ij}$, $c_{ij}$ and $d_{ij}$ be the entries of the matrices $A,B,AB$ and $BA$, respectively. Observe that $AB$ is a square matrix of order $n$ and $BA$ is a square matrix of order $p$. We have \begin{align*} c_{ij}=\sum_{k=1}^p a_{ik}b_{kj},\quad d_{ij}=\sum_{k=1}^n b_{ik}a_{kj}. \end{align*} Then, by definition of the trace, \begin{align*} {\rm Tr}(AB)=\sum_{i=1}^n c_{ii}= \sum_{i=1}^n \left(\sum_{k=1}^p a_{ik}b_{ki}\right). \end{align*} On the other hand, exchanging the order of summation and renaming the indices, \begin{align*} {\rm Tr}(BA)&=\sum_{i=1}^p d_{ii}= \sum_{i=1}^p \left(\sum_{k=1}^n b_{ik}a_{ki}\right)\cr&= \sum_{i=1}^n \left(\sum_{k=1}^p a_{ik}b_{ki}\right)\cr & ={\rm Tr}(AB). \end{align*}

Using this theorem we can easily prove that when two matrices $A$ and $B$ are similar, then they have the same trace. In fact, by similarity of $A$ and $B,$ there exists an invertible matrix $P$ such that $A=P^{-1}BP$. Thus \begin{align*} {\rm Tr}(A)={\rm Tr}(P^{-1}(BP))={\rm Tr}((BP)P^{-1})={\rm Tr}(B).\end{align*}
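
As a quick sanity check of the theorem and of the remark on similar matrices, here is a small numerical experiment; the use of NumPy, the random seed, and the matrix sizes are our own illustrative choices, not part of the original worksheet.

```python
import numpy as np

# Numerical illustration (not a proof) of the two facts above.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True: Tr(AB) = Tr(BA)

M = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))      # invertible with probability one
N = np.linalg.inv(P) @ M @ P         # N is similar to M
print(np.isclose(np.trace(N), np.trace(M)))           # True: similar matrices share the trace
```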

Worksheet

Exercise on matrices commutator

Do there exist matrices $A,B\in\mathscr{M}_n(\mathbb{C})$ such that $AB-BA=I_n$?

Assume that $AB-BA=I_n$. As the trace is a linear form and ${\rm Tr}(I_n)=n,$ we get \begin{align*} {\rm Tr}(AB)-{\rm Tr}(BA)=n. \end{align*} According to the theorem above, we have ${\rm Tr}(AB)={\rm Tr}(BA)$, so that $n=0$, which is absurd. Hence no such matrices exist.

Using trace to prove that two matrices commute

Assume that matrices $A,B\in\mathscr{M}_n(\mathbb{C})$ satisfy \begin{align*} (AB-BA)^2=AB-BA. \end{align*} Show that $AB=BA$.

We put $N=AB-BA,$ so that $N^2=N$. This means that $N$ is the matrix of a projector $p$ of $\mathbb{C}^n$. By the theorem above, ${\rm Tr}(p)={\rm Tr}(N)={\rm Tr}(AB)-{\rm Tr}(BA)=0$. But it is known that ${\rm Tr}(p)={\rm Rank}(p),$ the rank of $p$, which is the dimension of the range ${\rm Im}(p)$. Hence ${\rm Rank}(p)=0$. This implies that $\ker(p)=\mathbb{C}^n$ “we recall that for a projector we have the direct sum $\mathbb{C}^n=\ker(p)\oplus {\rm Im}(p)$”. This means that $N=0_n$ is the null matrix. Finally, $AB=BA$.

Solve a matrix equation

Let $A,B\in\mathscr{M}_n(\mathbb{R})$. Solve, in $\mathscr{M}_n(\mathbb{R}),$ the following matrix equation \begin{align*} X={\rm Tr}(X)A+B. \end{align*}

To solve the equation it suffices to determine ${\rm Tr}(X)$. Taking the trace of both sides of the equation, we obtain \begin{align*} {\rm Tr}(X)={\rm Tr}(X){\rm Tr}(A)+{\rm Tr}(B). \end{align*} This implies that \begin{align*}\tag{H} (1-{\rm Tr}(A)){\rm Tr}(X)={\rm Tr}(B). \end{align*} We distinguish two cases. If ${\rm Tr}(A)\neq 1$, then \begin{align*} {\rm Tr}(X)=\frac{{\rm Tr}(B)}{1-{\rm Tr}(A)}, \end{align*} so the unique solution of the matrix equation is \begin{align*} X=\frac{{\rm Tr}(B)}{1-{\rm Tr}(A)}\; A+B. \end{align*} Assume now that ${\rm Tr}(A)=1$. If ${\rm Tr}(B)\neq 0$, then (H) is impossible and the matrix equation has no solution. If ${\rm Tr}(B)= 0,$ then condition (H) holds for every $X$; in this case, for any $\lambda\in\mathbb{R},$ the matrix $X=\lambda A+B$ is a solution, since ${\rm Tr}(\lambda A+B)=\lambda$.
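
The case ${\rm Tr}(A)\neq 1$ can be checked numerically as follows; the random matrices and the use of NumPy are illustrative assumptions on our part.

```python
import numpy as np

# Check that X = Tr(B)/(1 - Tr(A)) * A + B indeed satisfies X = Tr(X) A + B
# in the generic case Tr(A) != 1 (random matrices used purely for illustration).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert not np.isclose(np.trace(A), 1.0)
X = np.trace(B) / (1 - np.trace(A)) * A + B
print(np.allclose(X, np.trace(X) * A + B))   # True
```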

Relationship of the trace with Eigenvalues

An intriguing connection exists between the trace and the eigenvalues of a matrix. The trace is equal to the sum of the eigenvalues of a matrix. This relationship offers a quick way to compute the trace when the eigenvalues are known. Conversely, the trace can provide valuable information about the eigenvalues, such as their sum or average value.
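
A short numerical illustration of this relationship “NumPy and the random matrix are our own choices here”:

```python
import numpy as np

# The trace equals the sum of the eigenvalues (counted with multiplicity).
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
print(np.isclose(np.trace(M), np.linalg.eigvals(M).sum().real))   # True
```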

Applications of the Matrix Trace

The matrix trace finds diverse applications in various areas of mathematics and beyond. In linear algebra, it is a similarity invariant: two similar matrices always have the same trace, as shown above. The trace also appears, alongside the determinant, among the coefficients of the characteristic polynomial, and therefore carries information about the eigenvalues. In physics, the trace is often used to study the behavior of quantum systems, where it appears in the computation of expectation values of operators.

Conclusion on matrix trace

The matrix trace is a valuable tool in linear algebra that provides insights into the behavior and properties of matrices. It serves as a compact measure that encapsulates information about a matrix, such as its diagonal elements and eigenvalues. The trace finds applications in a wide range of fields, including mathematics, physics, and engineering. By understanding the matrix trace and its properties, we can enhance our ability to analyze and manipulate matrices effectively.

Improper integral exercises


We offer a selection of improper integral exercises with detailed answers. Our main objective is to show you how to prove that an improper integral is convergent: how to employ integration by parts; how to make a change of variables; how to apply the dominated convergence theorem; and how to integrate term by term.

Integrals on unbounded intervals: what is an improper integral?

Proper Riemann integrals are defined for bounded functions on bounded intervals. Now we ask the question: can we define integrals for functions that are unbounded near an endpoint of an interval, say $(a,b]$, or for functions defined on unbounded intervals of the form $(-\infty,+\infty),$ $[a,+\infty),$ $(-\infty,a]$?

Let $f:[a,+\infty)\to\mathbb{R}$ be a continuous function. If the limit $$ \lim_{x\to+\infty}\int_a^x f(t)dt$$ exists, then we say that the integral \begin{align*} \int^{+\infty}_a f(t)dt\end{align*} is convergent, and the value of this integral is exactly the value of the limit.

Similarly we define the improper integral of a continuous function on $(-\infty,a]$.

Let $f:[a,b)\to\mathbb{R}$ be a continuous function. If the limit $$ \lim_{x\to b^-}\int_a^x f(t)dt$$ exists, then we say that the integral \begin{align*} \int^{b}_a f(t)dt\end{align*} is convergent.

Selected improper integral exercises 

Exercise: Show that the following integrals are convergent \begin{align*} \begin{array}{cc} 1.\; \displaystyle\int^{+\infty}_0 e^{-t^2}dt & \quad 2.\; \displaystyle\int^{\frac{\pi}{2}}_0 \sqrt{\tan(\theta)}\,d\theta\\ 3.\;\displaystyle\int^{+\infty}_0 \frac{\ln(t)}{1+t^2}dt & \quad 4.\; \displaystyle\int^1_0 \frac{dt}{1-\sqrt{t}}. \end{array} \end{align*}

Solution: 1) It suffices to show the convergence on the interval $[1,+\infty)$. For any $t\ge 1,$ we have $t^2\ge t,$ so that \begin{align*} 0< e^{-t^2}\le e^{-t},\qquad \forall t\ge 1. \end{align*} Observe that \begin{align*} \int^{+\infty}_1 e^{-t}dt&=\lim_{x\to +\infty} \int^x_1 (-e^{-t})'dt\cr &= \lim_{x\to+\infty} (e^{-1}-e^{-x})\cr &= \frac{1}{e}. \end{align*} This implies that the integral \begin{align*} \int^{+\infty}_1 e^{-t}dt \end{align*} is convergent. Hence, by comparison, the integral \begin{align*} \int^{+\infty}_0 e^{-t^2}dt \end{align*} is convergent as well.

Another proof: Remark that $t^2 e^{-t^2}\to 0$ as $t\to +\infty$. This means that there exists $\gamma>0,$ sufficiently large, such that $t^2 e^{-t^2}<1$, that is $e^{-t^2}<\frac{1}{t^2}$, for any $t\ge \gamma$. We know that the integral \begin{align*} \int_\gamma^{+\infty} \frac{dt}{t^2} \end{align*} converges. Then, by comparison, the integral \begin{align*} \int_\gamma^{+\infty} e^{-t^2}dt \end{align*} converges. This ends the proof.

2) Here we shall use a change of variables. The integrand is unbounded near $\frac{\pi}{2},$ as $\cos(\frac{\pi}{2})=0$. We need a function of class $C^1$ which will play the role of the change of variable. We define \begin{align*} \psi(\theta)=\sqrt{\tan(\theta)},\quad \theta\in \left(0,\frac{\pi}{2}\right). \end{align*} Clearly $\psi$ is a $C^1$ function on $\left(0,\frac{\pi}{2}\right)$. In addition, for any $0< \theta < \frac{\pi}{2}$, we have \begin{align*} \psi'(\theta)&=\frac{\tan'(\theta)}{2 \sqrt{\tan(\theta)}}\cr &= \frac{1+\tan^2(\theta)}{2 \sqrt{\tan(\theta)}}. \end{align*} We deduce that $\psi'(\theta)>0$ for any $0< \theta < \frac{\pi}{2}$, so that $\psi$ defines a bijection from $\left(0,\frac{\pi}{2}\right)$ onto \begin{align*}\left(\lim_{\sigma\to 0^+}\psi(\sigma),\lim_{\sigma\to \frac{\pi}{2}^-}\psi(\sigma)\right)=(0,+\infty).\end{align*} Let $x\in (0,+\infty)$ and $0< \theta < \frac{\pi}{2}$ be such that $x=\psi(\theta)$, which means that $x^2=\tan(\theta)$. Thus $\theta=\arctan(x^2)$. It follows that \begin{align*} d\theta=\frac{2x}{1+x^4} dx. \end{align*} Now we can write \begin{align*} \int^{\frac{\pi}{2}}_0 \sqrt{\tan(\theta)}\;d\theta=\int^{+\infty}_0 x\,\frac{2x}{1+x^4}\,dx=2\int^{+\infty}_0 \frac{x^2}{1+x^4}dx. \end{align*} The function $f(x)=\frac{x^2}{1+x^4}$ is defined and continuous on $[0,+\infty)$. Observe that \begin{align*} 0 < f(x)\le \frac{1}{x^2},\quad\forall x\ge 1. \end{align*} As the function $x\mapsto \frac{1}{x^2}$ is integrable on $[1,+\infty),$ the function $f$ is integrable on $[0,+\infty)$. This ends the proof.

3) The function $g(t)=\frac{\ln(t)}{1+t^2}$ is continuous on $(0,+\infty)$. Moreover, \begin{align*} t^{\frac{3}{2}} g(t)\quad\underset{t\to+\infty}{\sim}\quad \frac{\ln(t)}{\sqrt{t}}. \end{align*} As $t^{-\frac{1}{2}}\ln(t)\to 0$ as $t\to +\infty$, then \begin{align*} \lim_{t\to +\infty} t^{\frac{3}{2}} g(t)=0. \end{align*} Since the function $t\mapsto \frac{1}{ t^{\frac{3}{2}}}$ is integrable on $[1,+\infty)$, the function $g$ is integrable on $[1,+\infty)$.

Clearly we have \begin{align*} \sqrt{t}\, g(t)\quad\underset{t\to 0^+}{\sim}\quad \sqrt{t}\, \ln(t). \end{align*} Thus \begin{align*} \lim_{t\to 0^+}\sqrt{t}\, g(t)=0. \end{align*} We know that the function $t\mapsto \frac{1}{\sqrt{t}}$ is integrable on $(0,1]$; it follows that the function $g$ is integrable on $(0,1]$. Conclusion: $g$ is integrable on $(0,+\infty)$, so the improper integral is convergent.
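
For readers who like to double-check such convergence claims numerically, here is a small script; SciPy and the chosen integrals are our own illustrative additions (the second integral is written in its transformed form obtained above).

```python
from math import pi, sqrt
import numpy as np
from scipy.integrate import quad

# Numerical evidence (not a proof) that the first two integrals are finite.
val1, _ = quad(lambda t: np.exp(-t**2), 0, np.inf)
print(val1, sqrt(pi) / 2)     # both are about 0.8862

val2, _ = quad(lambda x: 2 * x**2 / (1 + x**4), 0, np.inf)
print(val2)                   # finite, about 2.2214
```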

The following problem is classical in improper integral exercises.

Exercise “Gamma function”: For $x\in\mathbb{R},$ we consider the integral \begin{align*} \Gamma(x):=\int^{+\infty}_0 t^{x-1}e^{-t}dt. \end{align*} We also denote $I:=\{x\in \mathbb{R}: \Gamma(x) < \infty\}$.

  1. Determine $I$.
  2. Show that for all $x\in I,$ $\Gamma(x+1)=x\Gamma(x)$. Deduce the value of $\Gamma(n)$ for any $n\in\mathbb{N}^\ast$.
  3. Justify that \begin{align*} \Gamma\left(\frac{1}{2} \right)=2\int^{+\infty}_0 e^{-t^2}dt. \end{align*}

Solution: 1) Let $x\in\mathbb{R}$. The function $f:t\mapsto t^{x-1}e^{-t}$ is continuous on $(0,+\infty),$ as a product of continuous functions. On the other hand, we clearly have $t^2f(t)\to 0$ as $t\to +\infty$, so that $f(t)=o\left(\frac{1}{t^2}\right)$ when $t$ is near $+\infty$. Thus the function $f$ is integrable on $[1,+\infty)$. In addition $f(t)\sim t^{x-1}$ as $t\to 0$, so that $f$ is integrable on $(0,1]$ if and only if $x-1>-1,$ i.e. $x>0$. Hence $f$ is integrable on $(0,+\infty)$ if and only if $x>0$. Finally $I=(0,+\infty)$.

2) Let $x\in I$. The functions $u:t\mapsto t^x$ and $v:t\mapsto e^{-t}$ are of class $C^1$ on $(0,+\infty)$ and $u(t)v(t)\to 0$ as $t\to 0$ and as $t\to +\infty$. Now, by integration by parts, we have \begin{align*} \Gamma(x+1)=\left[t^x (-e^{-t})\right]^{+\infty}_0 -\int^{+\infty}_0 xt^{x-1}(-e^{-t})dt=x\Gamma(x). \end{align*} Let now $n\in \mathbb{N}^\ast$. First, observe that \begin{align*} \Gamma(1)=\int^{+\infty}_0 e^{-t}dt= \left[-e^{-t}\right]^{+\infty}_0=1. \end{align*} We then have \begin{align*} \Gamma(n)=(n-1)\Gamma(n-1)=(n-1)(n-2)\Gamma(n-2)=\cdots=(n-1)(n-2)\cdots 1\cdot\Gamma(1)=(n-1)!. \end{align*}

3) In the integral defining $\Gamma\left(\frac{1}{2} \right),$ we make the change of variables $t=u^2$, and we obtain \begin{align*} \Gamma\left(\frac{1}{2} \right)=\int^{+\infty}_0 t^{-\frac{1}{2}}e^{-t}dt=\int^{+\infty}_0 \frac{e^{-u^2}}{u}2u\,du=2\int^{+\infty}_0 e^{-u^2}du. \end{align*}
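
As a cross-check “SciPy's Gamma function is our own addition here”, these identities can be verified numerically:

```python
import numpy as np
from math import factorial, sqrt, pi
from scipy.special import gamma

# Gamma(n) = (n-1)! for positive integers, and Gamma(1/2) = sqrt(pi),
# which equals 2 * int_0^inf exp(-u^2) du.
print(all(np.isclose(gamma(n), factorial(n - 1)) for n in range(1, 8)))   # True
print(np.isclose(gamma(0.5), sqrt(pi)))                                    # True
```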

Functions defined by integrals


We propose exercises on functions defined by integrals. We mainly prove the regularity of such a function, like continuity and differentiability. These functions can also be considered parameter-dependent integrals.

In many cases, we deal with a function $f(x,t)$ of two variables and we take the integral of this function with respect to one of its variables, for example $t$. We then obtain another function $F(x)$. A natural question appears: which properties of $f$ are transferred to $F$? In this article, we give some answers through examples.

A selection of exercises on functions defined by integrals

Problem: The object of this exercise is to calculate the expression of the function defined by the integral \begin{align*} \Phi(x)=\int^x_{\frac{1}{x}} f(t)dt\quad\text{with}\quad f(t)=\frac{1}{(t+1)^2(t^2+1)}. \end{align*}

  • Determine the domain of definition $D_\Phi$ of $\Phi$.
  • Prove that $\Phi$ is differentiable on $(0,+\infty)$ and that \begin{align*} \Phi'(x)=f(x)+\frac{1}{x^2}f\left(\frac{1}{x}\right)=\frac{1}{(x+1)^2},\qquad \forall x\in (0,+\infty). \end{align*}
  • Show that for any $x>0,$ \begin{align*} f(x)=\frac{1}{2}\left(\frac{1}{x+1}+\frac{1}{(x+1)^2}-\frac{x}{x^2+1}\right). \end{align*}
  • Deduce the expression of $\Phi(x)$ for any $x\in D_\Phi$ “Hint: observe that $\Phi(1)=0$”.
  • Application: Let a real $\alpha\in (0,\frac{\pi}{2})$ and consider the integral \begin{align*} I(\alpha)=\int_\alpha^{\frac{\pi}{2}-\alpha}\frac{\cos^2(\theta)}{1+\sin(2\theta)}d\theta. \end{align*} By using the change of variables $t=\tan(\theta)$, show that \begin{align*} I(\alpha)=-\Phi(\tan(\alpha)). \end{align*} Deduce the expression of $I(\alpha)$.

Solution: 1) Denote by $D_\Phi$ the domain of definition of $\Phi$, i.e. the set of real numbers $x$ for which $\Phi(x)$ is well defined. Clearly, $0\notin D_\Phi$. On the other hand, $f$ is not defined at $t=-1$, so $\Phi(x)$ is well defined only if $-1$ does not belong to the interval with endpoints $\frac{1}{x}$ and $x$; in particular $-1\notin D_\Phi$, hence $\{-1,0\}\cap D_\Phi=\emptyset$. If $x>0,$ then $\frac{1}{x}>0$ as well, so $-1$ is not between $x$ and $\frac{1}{x}$; thus $(0,+\infty)\subset D_\Phi$. Let now $x\in (-\infty,0)\backslash\{-1\}$. We distinguish two cases: if $x\in (-1,0)$ then $\frac{1}{x} < -1 < x,$ and thus $-1\in (\frac{1}{x},x)$. This shows $(-1,0)\cap D_\Phi=\emptyset$. If $x\in (-\infty,-1)$ then $x < -1 < \frac{1}{x}$. Hence $(-\infty,-1)\cap D_\Phi=\emptyset$. Finally, \begin{align*} D_\Phi=(0,+\infty). \end{align*}

2) Observe that for any $x>0$, we have $1\in [x,\frac{1}{x}]$ or $1\in [\frac{1}{x},x]$. Thus Chasles' relation implies that for any $x>0,$ \begin{align*} \Phi(x)&=\int^x_1 f(t)dt+\int^1_{\frac{1}{x}}f(t)dt\cr &= \int^x_1 f(t)dt-\int_1^{\frac{1}{x}}f(t)dt. \end{align*} Put \begin{align*} F(x)=\int^x_1 f(t)dt,\quad\forall x>0. \end{align*} The function $F$ is a primitive of the continuous function $f$, so $F$ is of class $C^1$ on $(0,+\infty)$ and $F'(x)=f(x)$ for any $x\in (0,+\infty)$. Using this function, we can write \begin{align*} \Phi(x)=F(x)-F\left(\frac{1}{x}\right),\qquad \forall x>0. \end{align*} Hence $\Phi$ is differentiable on $(0,+\infty)$ as a sum and composition of differentiable functions. Moreover, for any $x>0,$ \begin{align*} \Phi'(x)&=F'(x)-\left(\frac{1}{x}\right)' F'\left(\frac{1}{x}\right)\cr &= f(x)+\frac{1}{x^2}f\left(\frac{1}{x}\right)\cr &= \frac{1}{(x+1)^2(x^2+1)}+ \frac{1}{x^2}\,\frac{x^4}{(1+x)^2(1+x^2)}\cr &= \frac{1+x^2}{(x+1)^2(x^2+1)}=\frac{1}{(x+1)^2}. \end{align*}

3) It suffices to make a simple calculation \begin{align*} \frac{1}{x+1}+\frac{1}{(x+1)^2}-\frac{x}{x^2+1}&=\frac{(x+1)(x^2+1)+x^2+1-x(x+1)^2}{(x+1)^2(x^2+1)}\cr &=\frac{2}{(x+1)^2(x^2+1)}\cr &=2f(x). \end{align*}

4) Using the fact that $\Phi(1)=0$ and question 2, we obtain, for any $x>0,$ \begin{align*} \Phi(x)&=\Phi(1)+\int^x_1 \Phi'(t)dt\cr &=\int^x_1\frac{dt}{(t+1)^2} \cr &= \left[\frac{-1}{t+1}\right]^x_1\cr &= \frac{1}{2}-\frac{1}{1+x}. \end{align*} One can check that integrating the partial fraction decomposition of question 3 between $\frac{1}{x}$ and $x$ leads to the same expression, since the logarithmic terms cancel.

5) Consider the change of variable $t=\tan(\theta)$. Then \begin{align*} dt= \frac{d\theta}{\cos^2(\theta)}= (1+\tan^2(\theta))\, d\theta, \end{align*} so that \begin{align*} d\theta=\frac{dt}{1+t^2}. \end{align*} As $\tan(\frac{\pi}{2}-\alpha)=\frac{1}{\tan(\alpha)}$, we get \begin{align*} I(\alpha)=\int^{\frac{1}{\tan(\alpha)}}_{\tan(\alpha)} \frac{\cos^2(\theta)}{1+\sin(2\theta)}\, \frac{dt}{1+t^2}. \end{align*} Since $\sin(2\theta)=2\sin(\theta) \cos(\theta)$ and $\frac{1}{\cos^2(\theta)}=1+\tan^2(\theta)$, we have \begin{align*} \frac{\cos^2(\theta)}{1+\sin(2\theta)}&= \frac{1}{1+\tan^2(\theta)+2\tan(\theta)}\cr &= \frac{1}{(1+\tan(\theta))^2}\cr &= \frac{1}{(1+t)^2}. \end{align*} This implies that \begin{align*} I(\alpha)= \int^{\frac{1}{\tan(\alpha)}}_{\tan(\alpha)} \frac{dt}{(1+t)^2(1+t^2)}=-\Phi(\tan(\alpha)). \end{align*} Finally, by question 4, \begin{align*} I(\alpha)= \frac{1}{1+\tan(\alpha)}-\frac{1}{2}= \frac{\cos(\alpha)-\sin(\alpha)}{2\bigl(\cos(\alpha)+\sin(\alpha)\bigr)}. \end{align*}
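
A short numerical check of the closed form obtained above “the quadrature and the sample points are our own choices”:

```python
import numpy as np
from scipy.integrate import quad

# Compare Phi(x) = 1/2 - 1/(1+x) with a direct quadrature of the defining integral.
f = lambda t: 1.0 / ((t + 1) ** 2 * (t**2 + 1))
for x in [0.3, 1.0, 2.0, 10.0]:
    val, _ = quad(f, 1 / x, x)
    print(x, val, 0.5 - 1 / (1 + x))   # the last two columns agree
```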

The reader can also consult a study of some simple examples of such integral functions.

Subgroup of a group


We study the properties of a subgroup of a group. We mention that group theory is an important part of algebra that studies sets endowed with an algebraic structure. A group is a set equipped with a composition law which makes it possible to carry out calculations. In this article, we make this concept clear with definitions, properties, and exercises. Groups also enter into the study of vector spaces and rings.

What is a group in algebra?

In this section, we give a concise background on group theory. The most important properties of groups will be discussed.

Definition: Let $G$ be a non-empty set. A composition law on $G$ is a map $\ast: G\times G\to G$ such that for any $(x,y)\in G\times G$ we have $x\ast y\in G$. This means that the composition of two elements of $G$ is still an element of $G$; this is a stability property of $\ast$.

Definition: A set $(G,\ast)$ is called a group if $\ast$ is a composition law with the following properties:

  • associativity of $\ast$: for any $x,y$ and $z$ in $G$, $x\ast (y\ast z)=(x\ast y)\ast z$.
  • there exists an element $e\in G$ such that $x\ast e=e\ast x=x$ for any $x\in G$. This element $e$ is called the neutral element,
  • For any $x\in G$ there exists $y\in G$ such that $x\ast y=y\ast x=e$. The element $y$ is called the inverse of $x$ and will be denoted by $x^{-1}$.

A group $(G,\ast)$ is called a commutative group if for any $x,y\in G$, $x\ast y=y\ast x$.

$(\mathbb{R},+)$, here $\ast=+$ the usual addition operation, is a commutative group. Note that $(\mathbb{N},+)$ is not a group, because no nonzero element of $\mathbb{N}$ has an inverse.

If $(G,\ast)$ and $(H,\star)$ are two groups, then we can define another group $(G\times H, \diamond )$ by introducing the following composition law \begin{align*} (x,y)\diamond (x',y')=(x\ast x',y\star y').\end{align*}

What is a subgroup of a group?

Notation: In what follows, we will denote any composition law on any set by “$\cdot$”, and we will also write $x\cdot y=xy$.

A set $H$ is called a subgroup of a group $(G,\cdot)$ if $H\subset G$ and $(H,\cdot)$ is a group. This is equivalent to

  • $H\neq \emptyset$
  • $x,y\in H$ implies that $xy\in H,$
  • for any $x\in H,$ the inverse $x^{-1}\in H$.

The last two assertions can be combined into a single one: for any $x,y\in H,$ we have $xy^{-1}\in H$.

Proposition: The intersection of two subgroups is a subgroup.

A selection of exercises for groups 

Let us now give a selection of exercises on group theory.

Exercise: Are the following groups isomorphic?

  • $\mathbb{Z}/6\mathbb{Z}$ and $\mathfrak{S}_3,$ the group of permutations of three elements.
  • $\mathbb{Z}/(nm)\mathbb{Z}$ and $\left(\mathbb{Z}/n\mathbb{Z}\right)\times \left(\mathbb{Z}/m\mathbb{Z}\right)$ for $n,m\in\mathbb{N}$.
  • $(\mathbb{R},+)$ and $(\mathbb{Q},+)$.
  • $(\mathbb{R},+)$ and $(\mathbb{R}^\ast_{+},\times)$.
  • $(\mathbb{Q},+)$ and $(\mathbb{Q}^\ast_{+},\times)$.

Solution: 1) The groups $\mathbb{Z}/6\mathbb{Z}$ and $\mathfrak{S}_3$ are not isomorphic because one is commutative and the other is not.

2) When $n$ and $m$ are not coprime “relatively prime”, the groups $\mathbb{Z}/(nm)\mathbb{Z}$ and $\left(\mathbb{Z}/n\mathbb{Z}\right)\times \left(\mathbb{Z}/m\mathbb{Z}\right)$ are not isomorphic. In fact, let $p={\rm lcm}(n,m)$; since $\gcd(n,m)>1,$ we have $0 < p < nm$. Hence, if we denote by $[x]_r$ the elements of $\mathbb{Z}/r\mathbb{Z}$ for $r\in \mathbb{N}$, we have $p[1]_{nm}=[p]_{nm}\neq [0]_{nm}$.

If $\mathbb{Z}/(nm)\mathbb{Z}$ and $\left(\mathbb{Z}/n\mathbb{Z}\right)\times \left(\mathbb{Z}/m\mathbb{Z}\right)$ were isomorphic, there would be an element $(x,y)\in\left(\mathbb{Z}/n\mathbb{Z}\right)\times \left(\mathbb{Z}/m\mathbb{Z}\right)$ such that $p(x,y)\neq ([0]_n,[0]_m)$, which is absurd since $px=[0]_n$ “$p$ is a multiple of $n$” and $py=[0]_m$ “$p$ is a multiple of $m$”.

3) $(\mathbb{R},+)$ and $(\mathbb{Q},+)$ are not isomorphic. In fact, we know that $\mathbb{R}$ is not countable while $\mathbb{Q}$ is, and an isomorphism would in particular be a bijection.

4) It is well known that the map $f:(\mathbb{R},+)\to (\mathbb{R}^\ast_+,\times)$ defined by $f(x)=e^x$ is bijective and satisfies $f(x+y)=e^{x+y}=e^x e^y=f(x)f(y)$ for any $x,y\in\mathbb{R}$. This shows that $f$ is an isomorphism of groups. Hence the groups $(\mathbb{R},+)$ and $(\mathbb{R}^\ast_+,\times)$ are isomorphic.

5) The group $(\mathbb{Q},+)$ satisfies the following property: \begin{align*} \forall y\in \mathbb{Q},\qquad \exists x\in \mathbb{Q},\quad \text{s.t.}\;y=x+x. \end{align*} Now if there existed an isomorphism of groups between $(\mathbb{Q},+)$ and $(\mathbb{Q}^\ast_+,\times)$, then we would have \begin{align*} \forall y\in \mathbb{Q}^\ast_+,\qquad \exists x\in \mathbb{Q}^\ast_+,\quad \text{such that}\;y=x\times x. \end{align*} This is not possible since, for example, the number $2$ does not admit a square root in $\mathbb{Q}$.

Exercise: We denote by $GL_2(\mathbb{R})$ the group of invertible matrices of order $2$. Let \begin{align*} \mathcal{H}=\left\{\begin{pmatrix} 3^n&n3^{n-1}\\ 0&3^n\end{pmatrix}:n\in \mathbb{Z}\right\} \end{align*} be a subgroup of $GL_2(\mathbb{R})$. Prove that $\mathcal{H}$ is isomorphic to $\mathbb{Z}$.

Solution: We shall use the fact that any infinite monogenic “cyclic” group is isomorphic to $\mathbb{Z}$. We select \begin{align*} A=\begin{pmatrix} 3&1\\0&3\end{pmatrix}\in GL_2(\mathbb{R}). \end{align*} By an induction argument one can see that \begin{align*} A^n=\begin{pmatrix} 3^n&n3^{n-1}\\ 0&3^n\end{pmatrix},\quad n\in \mathbb{Z}. \end{align*} Then \begin{align*} \mathcal{H}=\{A^n:n\in\mathbb{Z}\}. \end{align*} Hence $\mathcal{H}$ is an infinite cyclic group, so it is isomorphic to $\mathbb{Z}$.
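
A quick numerical verification of the formula for $A^n$ for a few positive exponents “NumPy is our own choice; negative powers would require the inverse of $A$”:

```python
import numpy as np

# Check that A^n = [[3^n, n*3^(n-1)], [0, 3^n]] for a few positive exponents n.
A = np.array([[3, 1], [0, 3]])
for n in range(1, 6):
    expected = np.array([[3**n, n * 3 ** (n - 1)], [0, 3**n]])
    print(n, np.array_equal(np.linalg.matrix_power(A, n), expected))   # True
```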

Exercise: Let $G$ be a group “not necessarily Abelian”. To any $a\in G,$ we associate the map \begin{align*} f_a:G\to G,\quad x\mapsto f_a(x)=axa^{-1}. \end{align*}

  • Prove that $f_a$ is an isomorphism from $G$ to $G$.
  • We denote \begin{align*} {\rm Int}(G)=\{f_a:a\in G\}. \end{align*} Prove that $({\rm Int}(G),\circ)$ is a group.
  • Prove that \begin{align*} \psi: G\to {\rm Int}(G),\quad a\mapsto f_a \end{align*} is a homomorphism of groups. Determine its kernel.

Solution: 1) We denote by $e$ the identity element of $G$, we then have $f_a(e)=aea^{-1}=aa^{-1}=e$. For $x,y\in G$, we have \begin{align*} f_a(xy)&=axya^{-1}=axeya^{-1}=axa^{-1}aya^{-1}\cr &=(axa^{-1})(aya^{-1})\cr &= f_a(x)f_a(y).\end{align*} Then $f_a$ is a homomorphism of groups. Let us prove that $f_a$ is bijective. It suffices to show that for any $y\in G$ there is a unique $x\in G$ such that $f_a(x)=y$, which means that $axa^{-1}=y$. This is equivalent to $x=a^{-1}ya$. This element is unique, so $f_a$ is an isomorphism.

2) We denote by $(B(G),\circ)$ the group of all isomorphisms from $G$ to $G$. Observe that ${\rm Int}(G)\subset B(G)$. It suffices then to show that ${\rm Int}(G)$ is a subgroup of $B(G)$. In fact, remark that for any $x\in G$, we have ${\rm id}_G(x)=x=exe^{-1}=f_e(x)$. This means that ${\rm id}_G\in {\rm Int}(G)$, and then ${\rm Int}(G)$ is not empty. Let $\ell_1,\ell_2\in {\rm Int}(G)$. Then there exist $a,b\in G$ such that $\ell_1=f_a$ and $\ell_2=f_b$. Now for any $x\in G,$ we have \begin{align*} (\ell_1\circ\ell_2)(x)&=\ell_1(\ell_2(x))\cr &=f_a(f_b(x))\cr &= af_b(x)a^{-1}\cr &= abxb^{-1}a^{-1}\cr &=(ab)x(ab)^{-1}\cr &= f_{ab}(x). \end{align*} As $ab\in G,$ then $f_{ab}\in {\rm Int}(G)$. Hence $\ell_1\circ\ell_2\in {\rm Int}(G)$. According to the proof of the first question, for any $a\in G,$ we have $f^{-1}_a=f_{a^{-1}}\in {\rm Int}(G),$ because $a^{-1}\in G$. This ends the proof.

3) In the proof of the second question we have seen that for $a,b\in G$ we gave $f_a\circ f_b=f_{ab}$. Hence \begin{align*} \psi(ab)&=f_{ab}=f_a\circ f_b\cr &= \psi(a)\circ \psi(b). \end{align*} This shows that $\psi$ is a homomorphism of groups.

Let $a\in \ker(\psi)$, which means that $f_a={\rm id}_G$. For all $x\in G,$ $f_a(x)=x$, then $axa^{-1}=x$. This implies that $a$ satisfies $ax=xa$ for all $x\in G$. Hence \begin{align*} \ker(\psi)=\{a\in G\;|\; ax=xa,\;\forall x\in G\}. \end{align*}

How to find the limit of a function?


We propose simple techniques to teach you how to find the limit of a function. We first recall what the limit of a function is, then give some properties of limits as well as several examples in the form of exercises with detailed solutions.

Generalities on limits of functions.

Let $a$ be a real number and let $f:\mathbb{R}\setminus \{a\}\to\mathbb{R}$ be a function.

  • We say that $f$ has a limit $\ell\in \mathbb{R}$ at the point $a$ if for any small open interval $V_\ell$ centered at $\ell$, there exists a small open interval $I_a$ centered at $a$ such that $f(I_a)\subset V_\ell$. That is, for any small real number $\varepsilon>0,$ there exists a small real number $\alpha>0$ such that, for any $x,$ $|x-a|<\alpha$ implies that $|f(x)-\ell|<\varepsilon$. Here we can take $I_a=(a-\alpha,a+\alpha)$ and $V_\ell=(\ell-\varepsilon,\ell+\varepsilon)$. In this case, we write $$\lim_{x\to a}f(x)=\ell.$$
  • The function $f$ has a right limit $\ell$ at $a$, if for any $\varepsilon>0,$ there exists $\alpha>0,$ such that, for any $x$, $a<x<a+\alpha$ implies that $|f(x)-\ell|<\varepsilon$. In this case, we write $$\lim_{x\to a^+}f(x)=\ell.$$
  • We say that $f$ has a left limit $\ell$ at $a$, if for any $\varepsilon>0,$ there exists $\alpha>0,$ such that, for any $x$, $a-\alpha<x<a$ implies that $|f(x)-\ell|<\varepsilon$. In this case, we write $$\lim_{x\to a^-}f(x)=\ell.$$
  • The function $f$ has the limit $+\infty$ at the point $a$ if for any sufficiently large $A>0,$ there exists a small $\alpha>0$ such that, for any $x,$ $|x-a|<\alpha$ implies $f(x)>A$. In this case, we write $$ \lim_{x\to a}f(x)=+\infty.$$
  • We say that $f$ has the limit $-\infty$ at the point $a$ if for any sufficiently large $A>0,$ there exists a small $\alpha>0$ such that, for any $x,$ $|x-a|<\alpha$ implies $f(x)<-A$. In this case, we write $$ \lim_{x\to a}f(x)=-\infty.$$

Proposition: The function $f$ admits a limit $\ell$ at the point $a$ if and only if it admits the right and left limits at $a$ and these limits are equal to $\ell$.

Similar to limits of sequences, we have the following result.

The squeeze theorem for functions: Assume that there exist three real functions satisfying $h\le f\le g$ near $a$ and that $$\lim_{x\to a}h(x)=\lim_{x\to a}g(x)=\ell.$$ Then the function $f$ has the limit $\ell$ at the point $a$.

Relation with limits of sequences: The function $f$ has the limit $\ell$ at the point $a$ if and only if for any sequence $(u_n)_n$ that converges to $a$, the image sequence $(f(u_n))_n$ converges to $\ell$.

This result is very useful if we want to show that a function has no limit at a point. In fact, it suffices to find two sequences $(u_n)_n$ and $(v_n)_n$ which converge to $a,$ while the image sequences $(f(u_n))_n$ and $(f(v_n))_n$ converge to different real numbers.

How to find the limit of a function?

Exercise: Determine the limits of the following functions

  1. $ f(x)=\sqrt{x+1}-\sqrt{x}\qquad (x\to+\infty) $
  2. $ g(x)= \displaystyle\frac{x+\cos x}{x+\sin x}\qquad (x\to +\infty) $
  3. $h(x)=\displaystyle \sqrt{x+\sqrt x}-\sqrt{x}\qquad (x\to +\infty) $
  4. $ \varphi(x)= \displaystyle\sin(x)\sin\left(\frac{1}{x}\right)\qquad (x\to +\infty) $

Solution: 1) Observe that $\sqrt{x+1}$ and $\sqrt{x}$ both go to $+\infty$ as $x\to+\infty,$ so we have an indeterminate form and we have to rewrite $f$ in another form. We recall that if $a,b$ are in $[0,+\infty)$, then $(\sqrt{a}-\sqrt{b})(\sqrt{a}+\sqrt{b})=a-b$. By using this, we obtain, for any $x>0,$ \begin{align*} f(x)&=\sqrt{x+1}-\sqrt{x}\cr &=\frac{(x+1)-x}{\sqrt{x+1}+\sqrt{x}}\cr &=\frac{1}{\sqrt{x+1}+\sqrt{x}}. \end{align*} We now see that \begin{align*} \lim_{x\to +\infty}f(x)=0. \end{align*}

2) It is well known that the functions $x\mapsto \cos(x)$ and $x\mapsto \sin(x)$ have no limits as $x\to\infty$; these functions are periodic and take values in $[-1,1]$. To compute the limit of $g$ we use the following technique: for any $x>0,$ we have \begin{align*} g(x)&=\displaystyle\frac{x\left(1+\frac{\cos(x)}{x}\right)}{x\left(1+\frac{\sin(x)}{x}\right)} \cr &=\displaystyle\frac{1+\frac{\cos(x)}{x}}{1+\frac{\sin(x)}{x}}. \end{align*} On the other hand, we have \begin{align*} \left|\frac{\cos(x)}{x} \right| \le \frac{1}{x},\qquad \left|\frac{\sin(x)}{x} \right| \le \frac{1}{x}. \end{align*} Then \begin{align*} \lim_{x\to +\infty}\frac{\cos(x)}{x}=0,\qquad \lim_{x\to +\infty}\frac{\sin(x)}{x}=0. \end{align*} Thus \begin{align*} \lim_{x\to +\infty}g(x)=\frac{1}{1}=1. \end{align*}

3) As in the first question, for $x>0$ we write \begin{align*} h(x)&=\frac{\sqrt{x}}{ \sqrt{x+\sqrt x}+\sqrt{x}}\cr & =\frac{\sqrt{x}}{\sqrt{x}\left( \sqrt{1+\frac{1}{\sqrt{x}}}+1\right)} \cr & =\frac{1}{\sqrt{1+\frac{1}{\sqrt {x}}}+1}. \end{align*} Since $\frac{1}{\sqrt{x}}\to 0$ as $x\to+\infty$, we obtain \begin{align*} \lim_{x\to +\infty}h(x)=\frac{1}{2}. \end{align*}

4) The idea is to use the following estimates \begin{align*} |\sin(x)|\le 1\quad\text{and}\quad \left|\sin\left(\frac{1}{x}\right)\right|\le \frac{1}{x}. \end{align*} Then \begin{align*} |\varphi(x)|\le \frac{1}{x}\underset{x\to+\infty}{\longrightarrow} 0. \end{align*} Hence \begin{align*} \lim_{x\to +\infty}\varphi(x)=0. \end{align*}
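
These four limits can also be observed numerically by evaluating each function at large values of $x$; the sample points below are arbitrary choices of ours.

```python
import numpy as np

# Evaluate each function at increasingly large x and compare with the limits above.
for x in [1e2, 1e4, 1e6]:
    f = np.sqrt(x + 1) - np.sqrt(x)             # tends to 0
    g = (x + np.cos(x)) / (x + np.sin(x))       # tends to 1
    h = np.sqrt(x + np.sqrt(x)) - np.sqrt(x)    # tends to 1/2
    phi = np.sin(x) * np.sin(1.0 / x)           # tends to 0
    print(f"x={x:.0e}: f={f:.6f}, g={g:.6f}, h={h:.6f}, phi={phi:.6f}")
```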

Learning basic to advanced math


We offer practical tips for learning basic to advanced math. Above all, mathematics is a difficult and abstract subject for many students, although some feel very comfortable in this area.

Some tips for learning basic to advanced math

Mathematics is not created for itself, but to help other sciences such as physics, biology, chemistry, engineering, as well as social and economic sciences. Mathematics is a powerful tool and it is the language of science. In the following, we give some tips for loving math and changing your weird view of math.

We know very well that mathematics is like a monster for many students, and some even doubt its usefulness in life. This is not the case, because mathematics is at the heart of all science. The question that arises is whether there is an effective method for understanding mathematics. To understand math, you need to have a clear mind and well-organized ideas.

Two classes of students must be distinguished here. The first chooses mathematics as the main course, and the second chooses mathematics as a complementary subject. This split is a necessity at the university. Perhaps the first class will face some difficulties, especially in a bachelor's degree in mathematics.

Anyway, if you master math, you can easily master the other disciplines. So here are practical tips for learning math.

Think of math as a chain

Mathematics is a logical subject and we have to learn it step by step. Before going to the next step, we have to master the previous one. How can you climb a staircase that is missing a step? It's difficult, right? For example, to solve second-order algebraic equations, you need to know the real numbers, rational and irrational.

On the other hand, you have to know the structure of math lessons. A lesson always begins by defining a concept, then gives simple properties built on the conditions of the definitions; these are the theorems and propositions.

For example, let us discuss the concept of the limit of a sequence. Definition: we say that a sequence $(x_n)$ converges to a real number $\ell$ if we can find a natural number $N$ such that the distance between $x_n$ and $\ell$ is very small whenever $n\ge N$. The difficult part of this definition is how to determine the number $N$. In the exercises, it is difficult to use this definition directly to determine the limit of a sequence. What we do, in general, is use some classical limits of particular sequences, such as $x_n=\frac{1}{n}$ which has limit $0$, and then prove an estimate of the form $|x_n-\ell|\le \frac{1}{n}$. For example,
\begin{align*}x_n=\frac{n+1}{n}=1+\frac{1}{n},\end{align*} and since $\frac{1}{n}\to 0,$ the limit is $1$. So to solve exercises do not use the definition directly, but use related theorems and properties.

Practice math as much as you can

In class, the teacher simply gives you the definitions, certain properties, and some examples of applications. It’s up to you to do the rest at home to better understand the chapter. Always start with the easiest exercises, then try to solve a problem that requires a little intuition. Sometimes you get stuck in front of these difficult exercises. Do not worry, be patient, you will accumulate techniques to deal with this kind of exercise.

Learning basic math requires clear ideas

Mathematics is based on logic. It's like musical notes: you have to compose them well to produce good music. So if you are in front of a math exercise, you must first read it carefully to identify which concept it is about, after reviewing the course definitions. From there, one can figure out the properties or the theorem that must be used to solve the problem. For instance, if we want to compute an integral, then we should think of using integration by parts or a change of variables.

Use reasoning by induction

Reasoning by induction is a very simple and powerful technique to prove iterative properties. This means that induction is useful when we have to prove that a property $P(n)$ is valid for every natural number $n$. What you need to do is check that the property holds for the first term $n=0$ “sometimes $n=1$ or $2$”, then assume that it holds for $n$ and prove that it also holds for $n+1$.
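
As a small illustration “this example is ours, not part of the original tips”, let us prove the Gauss formula $1+2+\cdots+n=\frac{n(n+1)}{2}$, recalled further below, by induction. For $n=1$ both sides equal $1$. Assume the formula holds for some $n\ge 1$. Then \begin{align*} 1+2+\cdots+n+(n+1)=\frac{n(n+1)}{2}+(n+1)=\frac{(n+1)(n+2)}{2}, \end{align*} which is exactly the formula for $n+1$. Hence the formula holds for every natural number $n\ge 1$.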

The absurd in proofs is the key to learning basic to advanced math

Sometimes, to prove certain mathematical statements, we use an argument by contradiction “the absurd”. For example, suppose you want to prove that $\sqrt{2}\notin\mathbb{Q}$. We do not have enough information to address this directly; the only way is to argue by contradiction. We then assume that $\sqrt{2}\in\mathbb{Q},$ so by the definition of the set of rational numbers $\mathbb{Q}$, there exist integers $p$ and $q$ with the same sign, with $q$ different from zero, such that $\sqrt{2}=\frac{p}{q}$. Then we use certain arithmetic properties to find a contradiction.

Proof by contradiction is an important practical tool for learning math and is an efficient technique for solving very hard problems.

Remember classic formulas and inequalities

In many cases, you must rely on a classical formula or a known inequality to demonstrate a mathematical result. It is therefore advisable to learn these formulas by heart. Here we give some:

  • Binomial formula: For any $a,b\in\mathbb{R}$ and $n\in \mathbb{N},$ we have \begin{align*}(a+b)^n=\sum_{k=0}^n \frac{n!}{k!(n-k)!} a^k b^{n-k}.\end{align*}
  • Gauss Formula: for any natural number $n$, \begin{align*} 1+2+3+\cdots+n=\frac{n(n+1)}{2}.\end{align*}
  • Geometric sum: For real $a$ with $a\neq 1$ and any natural number $n,$ \begin{align*}1+a+a^2+\cdots+a^n=\frac{1-a^{n+1}}{1-a}.\end{align*}
  • Trigonometric Identities: For any real number $\theta,$ \begin{align*}\sin^2(\theta)+\cos^2(\theta)=1.\end{align*}
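
If you want to convince yourself of these formulas on concrete values, a few lines of Python are enough “the numerical check below is our own addition”:

```python
from math import comb

# Binomial formula for a = 2, b = 3, n = 5.
a, b, n = 2.0, 3.0, 5
lhs = (a + b) ** n
rhs = sum(comb(n, k) * a**k * b ** (n - k) for k in range(n + 1))
print(abs(lhs - rhs) < 1e-9)                     # True

# Gauss formula for n = 100.
n = 100
print(sum(range(1, n + 1)) == n * (n + 1) // 2)  # True
```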

Vector spaces exercises


We provide vector spaces exercises with detailed answers. Such spaces are natural in mathematics, and their structure allows us to organize computations. In such spaces, we use two composition laws: the first, noted “+”, operates between the elements of the space and is called the internal composition law, while the second involves a scalar field, $\mathbb{R}$ or $\mathbb{C}$. For example, designate a box of apples by $E$. We can do the operation “apple + apple = $2 \cdot{\rm apples}$”. Here we have $2 \in\mathbb{R}$ and “${\rm apple} \in E$”. We then have another composition law “$\cdot$”, which is external on $E$; the role of this law is to keep track of the number of apples we have.

Vector spaces are used in other parts of linear algebra such as matrices for example.

How to prove that sets are vector spaces?

This is a very good question. In general, in vector space courses, we have already studied typical or classical vector spaces such as $\mathbb{R}^n$, the space of complex numbers $\mathbb{C}$, and the space of all maps from a space $E$ to another space $F$, denoted by $\mathcal{F}(E,F)$.

Now let $H$ be a set which we want to prove is a vector space. The technique consists in finding a classical vector space $E$ such that $H\subset E$, and then showing that $H$ is a subspace of $E$. To do this, we must first ensure that $H$ is not the empty set; here the neutral element of $E$ must be in $H$. In addition, for $x,y\in H$ and $\lambda\in \mathbb{K}=\mathbb{R}$ or $\mathbb{C}$, you need to verify that \begin{align*}x+y\in H,\qquad \lambda\cdot x\in H.\end{align*}We mention that these two relations are equivalent to the single condition $x+\lambda y\in H$.

Let us discuss another method that helps in proving that $H$ is a vector space. Let $E$ be a vector space and let $x_1,x_2,\cdots,x_r$ be nonzero vectors in $E$. We denote by ${\rm span}( x_1,x_2,\cdots,x_r )$ the space of all linear combinations of the elements $x_1,x_2,\cdots,x_r$. That is, $y\in {\rm span}( x_1,x_2,\cdots,x_r ) $ if and only if there exist scalars $\lambda_1,\lambda_2,\cdots,\lambda_r\in \mathbb{K}$ such that \begin{align*}y=\lambda_1 x_1+\lambda_2 x_2+\cdots+\lambda_r x_r.\end{align*} Then ${\rm span}( x_1,x_2,\cdots,x_r )$ is a subspace of $E$. Now, to prove that $H$ is a subspace, it sometimes suffices to show that $H$ coincides with such a span.

Examples and properties of such a vector space

Consider the following subsets of $\mathbb{R}^3$: \begin{align*}F&=\{(x,y,z)\in \mathbb{R}^3: x-y+z=0\}\cr G&=\{(x,y,z)\in \mathbb{R}^3: x+y-z=0\}.\end{align*} First, let us prove that $F$ and $G$ are subspaces of $\mathbb{R}^3$. In fact, for $u=(x,y,z)\in F$, we have \begin{align*}u=(y+z,y,z)=y(1,1,0)+z(1,0,1).\end{align*} This shows that $F$ coincides with the subspace of $\mathbb{R}^3$ generated by the vectors $\{(1,1,0),(1,0,1)\}$. Hence $F$ is a subspace of $\mathbb{R}^3$. Similarly, one can show that $G$ is a subspace as well.

We now prove that $\mathbb{R}^3=F+G$. In fact, let $u=(x,y,z)\in \mathbb{R}^3$. We have $(1,1,0)\in F$. Now we look for some $\lambda\in \mathbb{R}$ such that $u-\lambda (1,1,0)\in G$. We then have\begin{align*}u-\lambda (1,1,0)\in G &\Longleftrightarrow (x-\lambda,y-\lambda,z)\in G\cr &\Longleftrightarrow (x-\lambda)+(y-\lambda)-z=0\cr & \Longleftrightarrow \lambda= \frac{x+y-z}{2}.\end{align*}For such a $\lambda,$ we have\begin{align*}u=\underset{\in F}{\underbrace{\lambda (1,1,0)}}+\underset{\in G}{\underbrace{(u-\lambda (1,1,0))}}.\end{align*}This means that $\mathbb{R}^3=F+G.$
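
The decomposition above can be checked numerically on a sample vector “NumPy and the chosen vector are our own illustration”:

```python
import numpy as np

# Decompose u as an element of F plus an element of G with lambda = (x + y - z)/2.
u = np.array([1.0, 2.0, 5.0])
lam = (u[0] + u[1] - u[2]) / 2
uF = lam * np.array([1.0, 1.0, 0.0])   # in F: x - y + z = 0
uG = u - uF                            # in G: x + y - z = 0
print(uF[0] - uF[1] + uF[2], uG[0] + uG[1] - uG[2], np.allclose(uF + uG, u))
```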

Let us answer the question: are $F$ and $G$ supplementary subspaces? We remark that, for example, $(0,1,1)\in F\cap G$. This implies that $F\cap G\neq \{0\},$ and hence $F$ and $G$ are not supplementary subspaces.

Basis of vector spaces

In order to have good control of the elements of a vector space, it is advisable to find a way in which all these vectors can be written in a unique way as a linear combination of known vectors of the space. This means that we need to find vectors $e_1,e_2,\cdots,e_n$ such that $E={\rm span}(e_1,e_2,\cdots,e_n)$ and such that $\lambda_1 e_1+\lambda_2 e_2+\cdots+\lambda_n e_n=0$ implies $\lambda_1=\lambda_2=\cdots=\lambda_n=0$; in this case we say that the family $\{e_1,e_2,\cdots,e_n\}$ is linearly independent. A family that satisfies all these properties is called a basis of $E,$ and the cardinality of this family is called the dimension of $E,$ denoted by $\dim(E)=n$.

Let $F$ be the subset of $\mathbb{R}^3$ defined by\begin{align*}F=\{(x,y,z)\in \mathbb{R}^3: x+2y-z=0\}.\end{align*}Prove that $F$ is a subspace and determine a basis of it; we recall that a basis of a vector space is any linearly independent subset of it that spans the whole vector space. In fact, observe that $F$ is nonempty since $0_{\mathbb{R}^3}\in F$. On the other hand, $u=(x,y,z)\in F$ if and only if $z=x+2y$. Then\begin{align*}u&=(x,y,x+2y)\cr &= x(1,0,1)+y(0,1,2)\cr &:= x v_1+y v_2,\end{align*}where $v_1=(1,0,1)$ and $v_2=(0,1,2)$. This shows that $F$ is the subspace of $\mathbb{R}^3$ generated by the vectors $v_1$ and $v_2,$ that is, $F={\rm span}\{v_1,v_2\}.$ Observe that the vectors $v_1$ and $v_2$ are not collinear. Hence $\{v_1,v_2\}$ is a basis of $F$ and $\dim(F)=2$; $F$ is a hyperplane of $\mathbb{R}^3$.

Calculus in the space of sequences

We denote by $(\mathbb{R}^{\mathbb{N}},+,\cdot)$ the vector space of all real sequences. Let $u=(u_n)_n$, $v=(v_n)_n$ and $w=(w_n)_n$ be real sequences defined by\begin{align*}\forall n\in\mathbb{N},\quad u_n=2^n,\;v_n=3^n,\;w_n=5^n.\end{align*}Prove that the vectors $u,v$ and $w$ are linearly independent. Before solving this exercise, we would like to recall some facts about geometric sequences. Let $a\in \mathbb{R}$. The natural power of $a$ define a sequence $(a^n)_n$ called a geometric sequence of ratio $a$. Now if $|a|<1,$ then $a^n$ goes to $0$ as $n\to+\infty$.

Let us now prove that the vectors $u,v$, and $w$ are linearly independent. For this, let $\alpha,\beta,\gamma\in \mathbb{R}$ be such that \begin{align*}\alpha u+\beta v+\gamma w=(0,0,\cdots,0,\cdots).\end{align*}Then for any $n\in\mathbb{N}$ we have\begin{align*}\alpha 2^n+\beta 3^n+\gamma 5^n=0.\end{align*}Dividing by $5^n$ we get\begin{align*}\alpha \left(\frac{2}{5}\right)^n+\beta\left(\frac{3}{5}\right)^n +\gamma=0.\end{align*}Taking the limit as $n\to +\infty,$ we obtain\begin{align*}\lim_{n\to +\infty}\left(\alpha \left(\frac{2}{5}\right)^n+\beta\left(\frac{3}{5}\right)^n\right) +\gamma=0.\end{align*} But as $0< \frac{2}{5} < 1$ and $0< \frac{3}{5} < 1$, then\begin{align*}\lim_{n\to +\infty}\left(\alpha \left(\frac{2}{5}\right)^n+\beta\left(\frac{3}{5}\right)^n\right)=0.\end{align*} This implies that $\gamma=0$, so that\begin{align*}\alpha 2^n+\beta 3^n=0. \end{align*}Similarly, dividing by $3^n$ we get\begin{align*}\alpha \left(\frac{2}{3}\right)^n+\beta=0.\end{align*}Taking the limit as $n\to +\infty,$ we get $\beta=0$. This then implies $\alpha \left(\frac{2}{3}\right)^n=0$, so that $\alpha=0$. This ends the proof.
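
Alternatively “this numerical shortcut is our own remark”, it is enough to look at the first three terms of the sequences: the corresponding $3\times 3$ matrix already has full rank, so no nontrivial linear combination can vanish.

```python
import numpy as np

# Rows n = 0, 1, 2 of the three sequences (2^n), (3^n), (5^n).
M = np.array([[2**n, 3**n, 5**n] for n in range(3)], dtype=float)
print(np.linalg.matrix_rank(M))   # 3, so alpha = beta = gamma = 0 is forced
```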

Parameter dependent integral


Parameter-dependent integral is studied in this article. We propose a selection of exercises on continuity, limit, and differentiability of such parametric integrals. Some known transformations in mathematics such as Fourier transform and Laplace transform are parametric integrals.

What is a parametric integral? 

Suppose we have a function of two variables $f:I\times J\to\mathbb{R}$, where $I$ and $J$ are intervals of $\mathbb{R}$. If we integrate $f$ with respect to one variable, say the second one, we obtain a function $F:I\to \mathbb{R}$ defined by \begin{align*} F(x)=\int_J f(x,t)dt.\end{align*} We say that the function $F$ is a parameter-dependent integral.

In general, it is very difficult to determine the explicit expression of the function $F$ by simply calculating the integral. But there are theorems that impose conditions on $f$ for which the function $F$ is continuous or differentiable.

Exercises on parameter-dependent integral

Exercise: Let the function \begin{align*} F(x)=\int^{+\infty}_0 \frac{e^{-xt}}{1+t}dt,\qquad \forall x>0. \end{align*}

  • Prove that $F$ is continuous on $(0,+\infty)$.
  • Determine the limit of $F(x)$ as $x\to +\infty$.

Solution: First of all, for any $x>0$, the quantity $F(x)$ is well defined, since \begin{align*} \forall t\ge 0,\quad 0<\frac{e^{-xt}}{1+t}\le e^{-xt},\quad\text{and}\; \int^{+\infty}_0 e^{-xt}dt=\frac{1}{x}. \end{align*}

1) Consider the function \begin{align*} f(x,t)=\frac{e^{-xt}}{1+t},\qquad (x,t)\in (0,+\infty)\times [0,+\infty). \end{align*} Remark that for each fixed $x>0,$ the function $t\in [0,+\infty)\mapsto f(x,t)$ is continuous as a quotient of continuous functions. On the other hand, as the exponential function is continuous, for each $t\ge 0$ the function $x\in (0,+\infty)\mapsto f(x,t)$ is continuous. It remains to show that on each $[a,b]\times [0,+\infty),$ with $0 < a< b <+\infty$ arbitrary, the function $f$ is dominated by an integrable function $\varphi:[0,+\infty)\to [0,+\infty)$ independent of $x$. In fact, for $x\in [a,b]$ and $t\ge 0,$ we have $-x t\le -a t,$ which implies that \begin{align*} |f(x,t)|\le \frac{e^{-at}}{1+t}=:\varphi(t) \end{align*} for all $(x,t)\in [a,b]\times [0,+\infty)$. As $0< \varphi(t)\le e^{-a t}$ for all $t\ge 0,$ it follows that $\varphi$ is integrable. According to the theorem on the continuity of parameter-dependent integrals, we deduce that the function $F$ is continuous on $(0,+\infty)$.

2) In question 1, we proved that the function $f$ is dominated by an integrable function $\varphi$ independent of $x$; for the limit as $x\to+\infty$ it suffices to take, say, $a=1$. On the other hand, for any $t> 0,$ we have \begin{align*} \lim_{x\to +\infty} f(x,t)=0. \end{align*} By the dominated convergence theorem, we deduce that \begin{align*} \lim_{x\to +\infty}F(x)=\int^{+\infty}_0 0\, dt=0. \end{align*}
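
Both conclusions can be observed numerically “SciPy and the sample values of $x$ are our own choices”:

```python
import numpy as np
from scipy.integrate import quad

# F(x) = int_0^inf exp(-x t)/(1 + t) dt is finite, bounded by 1/x, and tends to 0.
F = lambda x: quad(lambda t: np.exp(-x * t) / (1 + t), 0, np.inf)[0]
for x in [0.5, 1.0, 5.0, 50.0]:
    print(x, F(x), 1.0 / x)   # F(x) <= 1/x, decreasing toward 0
```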

Exercises on convex functions and applications


In this article, we offer exercises on convex functions. In fact, our objectives are: to be able to show that a function is convex; to exploit the convexity to show inequality; to use the derivatives to show inequality; to exploit the Cauchy-Schwarz inequality to show inequalities.

What is a convex function?

A function $f:I\to \mathbb{R}$ is convex on the interval $I$ if for any $x,y\in I$ and $t\in [0,1],$ we have \begin{align*} f(tx+(1-t)y)\le tf(x)+(1-t)f(y).\end{align*} The function $f$ is called strictly convex if the inequality is strict, \begin{align*} f(tx+(1-t)y)< tf(x)+(1-t)f(y),\end{align*} whenever $x\neq y$ and $t\in (0,1)$.

Convex functions are important and help to prove some useful inequalities like Holder’s inequality. Also, any convex function is a locally Lipschitz function, so the maximal solution of a Cauchy problem defined by a convex function exists and is unique.

A function $f$ is concave on the interval $I$ if $-f$ is a convex function, that is, if for any $x,y\in I$ and $t\in [0,1],$ we have \begin{align*} f(tx+(1-t)y)\ge tf(x)+(1-t)f(y).\end{align*}

There is a deep connection between convex functions and differential functions as shown in the following results:

  • If a function $f$ is differentiable on the interval $I$, then $f$ is convex if and only if its derivative $f'$ is increasing.
  • If $f$ is twice differentiable, then $f$ is a convex function if and only if $f''\ge 0$.

Exercises on convex functions

Exercise: Prove that the logarithm function is concave. On the other hand, show that for any $(x_1,x_2,\cdots,x_n)\in (\mathbb{R}^+)^n$, \begin{align*} \sqrt[n]{x_1\cdots x_n}\le \frac{1}{n}(x_1+x_2+\cdots+x_n). \end{align*}

Solution: 1) Since the function $x\in (0,+\infty)\mapsto \ln(x)$ is twice differentiable, it suffices to show that $\ln''(x)\le 0$ for any $x>0$. In fact, we know that $\ln'(x)=\frac{1}{x}$, hence $\ln''(x)=-\frac{1}{x^2}<0$ for all $x>0$. This shows that the function $x\in (0,+\infty)\mapsto \ln(x)$ is concave.

2) We have \begin{align*} \sqrt[n]{x_1\cdots x_n}&=\exp\left( \frac{1}{n}(\ln(x_1\cdots x_n)) \right)\cr &=\exp\left( \frac{1}{n}(\ln(x_1)+\cdots +\ln(x_n)) \right) \end{align*} According to the first question the logarithm function is concave. Then \begin{align*} \frac{1}{n}(\ln(x_1)+\cdots +\ln(x_n))\le \ln\left( \frac{1}{n}(x_1+\cdots +x_n)\right). \end{align*} On the other hand, as the exponential function is strictly increasing, it follows that \begin{align*} \sqrt[n]{x_1\cdots x_n}&\le \exp\left( \ln\left( \frac{1}{n}(x_1+\cdots +x_n)\right) \right)\cr &= \frac{1}{n}(x_1+\cdots +x_n). \end{align*}
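
A quick numerical spot check of this arithmetic-geometric mean inequality “random positive numbers, chosen only for illustration”:

```python
import numpy as np

# Geometric mean <= arithmetic mean for a random sample of positive numbers.
rng = np.random.default_rng(3)
x = rng.uniform(0.1, 10.0, size=6)
geometric = x.prod() ** (1 / len(x))
arithmetic = x.mean()
print(geometric <= arithmetic)   # True
```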

Recurring sequences exercises


Recurring sequences are generally involved in the modeling of discrete systems. In fact, such sequences replace the differential equation by discretizing the time variable.

To well understand the material of this page, properties of convergent sequences are needed.

Facts about recurring sequences

Definition: Let $I$ be an interval of $\mathbb{R}$ and $f$ a function on $I$ such that $f(I)\subset I$. A recurrent sequence is defined by \begin{align*} u_0\in I,\qquad u_{n+1}=f(u_n),\qquad \forall n.\end{align*}

Suppose that the interval $I$ is bounded. This implies that the recurring sequence $(u_n)_n$ is bounded. Now, to show that $(u_n)_n$ is convergent, it suffices to prove that this sequence is monotonic, increasing or decreasing. This depends on the behavior of $f$. For instance, if $f$ is increasing and $u_1\ge u_0,$ then the sequence $(u_n)_n$ is increasing, hence convergent.

Worksheet

Exercise: Consider the recurrent sequence \begin{align*} u_0\in [0,+\infty),\quad u_{n+1}=\sqrt{u_n},\;\forall n\in\mathbb{N}. \end{align*} Discuss the convergence of $(u_n)_n$ and determine the limit.

Solution: If we define the square root function $f(x)=\sqrt{x}$ for $x\ge 0$, we then have $u_{n+1}=f(u_n)$. As $f([0,+\infty))\subset [0,+\infty)$ and $u_0\ge 0,$ the other terms of the sequence are nonnegative as well, so the sequence $(u_n)_n$ is well defined. On the other hand, the function $f$ is increasing. Then $(u_n)_n$ is monotone, depending on the sign of $u_1-u_0$: if $u_1\ge u_0$ the sequence is increasing, and if $u_1\le u_0$ the sequence is decreasing. Moreover, as $f$ is continuous, if the sequence converges to $\ell$ then $\ell$ is a solution of the equation $f(\ell)=\ell$, so that $\ell=0$ or $\ell=1$; we say that $\ell$ is a fixed point of $f$.

Observe that \begin{align*} u_0\le u_1 \;\Longleftrightarrow\; u_0\le \sqrt{u_0} \;\Longleftrightarrow\; u_0\in [0,1]. \end{align*} Hence the sequence $(u_n)_n$ is increasing if $u_0\in [0,1],$ and decreasing if $u_0\ge 1$. We also have $f([0,1])\subset [0,1]$ and $f([1,+\infty))\subset [1,+\infty)$. We distinguish three cases

If $u_0=0$. Then $u_n=0$ for any $n\in\mathbb{N}$, and thus the sequence converges to $0$.

If $u_0\in (0,1]$ then the sequence is increasing and $0 < u_n\le 1,$ because $f((0,1])\subset (0,1]$. Hence it converges to some $\ell\in (0,1]$, and since $\ell$ is a fixed point of $f,$ necessarily $\ell=1$.

If $u_0\ge 1,$ then the sequence is decreasing and $u_n\ge 1,$ because $f([1,+\infty))\subset [1,+\infty)$. Thus the sequence converges, and its limit is a fixed point greater than or equal to $1$, namely $1$.
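
Iterating the recurrence numerically for several starting points illustrates these three cases “the starting values below are arbitrary”:

```python
import numpy as np

# Iterate u_{n+1} = sqrt(u_n); every positive starting point is driven to 1.
for u0 in [0.01, 0.5, 1.0, 4.0, 100.0]:
    u = u0
    for _ in range(30):
        u = np.sqrt(u)
    print(u0, u)   # close to 1 in every case (u0 = 0 would stay at 0)
```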

Exercise: Let $(u_n)_n$ the recurring sequence defined by \begin{align*} u_0=\frac{1}{2} ,\quad u_{n+1}=\frac{2u_n}{1+u_n},\;\forall n\in\mathbb{N}. \end{align*}

  • Calculate $u_1,u_2,u_3$ and $u_4$.
  • Let $(v_n)_n$ be the sequence defined by \begin{align*} v_{n}=1-\frac{1}{u_n},\quad \forall n\in\mathbb{N}. \end{align*}
  • First, calculate $v_{n+1}$ as a function of $u_n$ and deduce that $(v_n)_n$ is a geometric sequence. Second, calculate $v_n$ as a function of $n$ and deduce the expression of $u_n$. Finally, calculate the limit of $u_n$ as $n\to+\infty$.

Solution: 1) We have \begin{align*} \begin{array}{cc} u_1=\frac{2u_0}{1+u_0}=\frac{2}{3}, & u_2=\frac{2u_1}{1+u_1}=\frac{4}{5} \\ u_3=\frac{2u_2}{1+u_2}=\frac{8}{9}, & u_4=\frac{2u_3}{1+u_3}=\frac{16}{17}. \end{array} \end{align*}

2) Let us determine the expressions of $v_n$ and $u_n$: For any $n\in\mathbb{N},$\begin{align*} v_{n+1}&=1-\frac{1}{u_{n+1}}=1-\frac{1}{\frac{2u_n}{1+u_n}}\cr &= \frac{2u_n-u_n-1}{2u_n}\cr &= \frac{1}{2}\, \frac{u_n-1}{u_n}= \frac{1}{2}\left(1-\frac{1}{u_n}\right)\cr &= \frac{1}{2} v_n. \end{align*} Thus $(v_n)$ is a geometric sequence of ratio $r=\frac{1}{2}$ and initial term $v_0=1-\frac{1}{u_0}=-1$.

As $(v_n)$ is a geometric sequence of ratio $r=\frac{1}{2}$ and initial term $v_0=-1$, then \begin{align*} v_n=v_0 r^n= -\left(\frac{1}{2}\right)^n. \end{align*} Moreover, the relation $v_n=1-\frac{1}{u_n}$ implies that \begin{align*} u_{n}=\frac{1}{1-v_n}=\frac{1}{1+\left(\frac{1}{2}\right)^n}. \end{align*}

As $\frac{1}{2}\in (0,1),$ then \begin{align*} \lim_{n\to +\infty} \left(\frac{1}{2}\right)^n=0. \end{align*}Thus \begin{align*} \lim_{n\to +\infty} u_n=1.\end{align*}
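As a quick consistency check (this verification is ours), the closed form $u_n=\frac{1}{1+\left(\frac{1}{2}\right)^n}$ reproduces the first terms computed in question 1): \begin{align*} u_1=\frac{1}{1+\frac{1}{2}}=\frac{2}{3},\qquad u_2=\frac{1}{1+\frac{1}{4}}=\frac{4}{5},\qquad u_3=\frac{1}{1+\frac{1}{8}}=\frac{8}{9}. \end{align*}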

Convergence of sequences

0

We discuss the convergence of sequences and how to calculate the limit of a sequence. This subject is fundamental in real analysis because many proofs of theorems rely on the convergence of an appropriate sequence.

We also offer several exercises with detailed solutions to make good use of the material in this course. This course is for first-year students and part of the article may also relate to the final year of the high school math curriculum.

Convergence of sequences: definition and properties

We assume that the reader is familiar with the real numbers and with the notion of distance between two numbers.

A sequence of real numbers is a map $u:\mathbb{N}\to \mathbb{R}$ which associates to any $n\in\mathbb{N}$ a real number $u(n)\in\mathbb{R}$. As usual, we use the notation $u(n):=u_n$, and the sequence will be denoted by $(u_n)_n$. For example, we select\begin{align*}u_n=\frac{1}{n},\quad n\ge 1.\end{align*} Note that we can also speak of sequences of complex numbers if we replace in the above definition the set of real numbers with the set of complex numbers $\mathbb{C}$.

Convergence to a real number: As a matter of fact, it is not easy for beginners to grasp a rigorous definition of the convergence of sequences. Here we will give a simple one.

We say that the sequence $(u_n)_n$ converges to a real number $\ell$ if we can find a rank $N\in\mathbb{N}$ such that all terms $u_n$ for $n\ge N$ are close to $\ell$. This means that the distance between $u_n$ and $\ell$ is small enough whenever $n\ge N$. Mathematically, this distance is exactly $|u_n-\ell|$. More precisely, this means that for any small real number $\varepsilon>0$ we have $|u_n-\ell|\le \varepsilon$ whenever the positive integer $n$ satisfies $n\ge N$.

Summary: the sequence $(u_n)_n$ converges to $\ell$ if and only if for any small $\varepsilon>0,$ there exists a sufficiently large $N\in\mathbb{N}$ such that for any $n\in \mathbb{N}$ with $n\ge N,$ we have $|u_n-\ell|\le \varepsilon$.

When the sequence $(u_n)_n$ converges to $\ell$, we say that $\ell$ is the limit of the sequence, and we write\begin{align*}\ell=\lim_{n\to+\infty}u_n.\end{align*}

The question is how to determine $N$. Let us show that the sequence $\left(\frac{1}{n}\right)_n$ converges to $0$. According to the discussion above, the distance $|\frac{1}{n}-0|= \frac{1}{n}$ should be small enough. This means that if we take a very small number $\varepsilon>0$, we need $\frac{1}{n}< \varepsilon$, that is, $n>\frac{1}{\varepsilon}$. We then take $N$ to be the smallest natural number greater than $\frac{1}{\varepsilon}$.
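For instance (with a value of $\varepsilon$ chosen only for illustration), if $\varepsilon=10^{-2}$ then $\frac{1}{\varepsilon}=100$, so we may take $N=101$: indeed, for every $n\ge 101$, \begin{align*}\left|\frac{1}{n}-0\right|=\frac{1}{n}\le \frac{1}{101}<10^{-2}=\varepsilon.\end{align*}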

Remark: if the limit of a sequence exists, then it is unique.

A geometric sequence: Let $u_n=a^n$ for any $n\in\mathbb{N}$, where $a\in\mathbb{R}$. Let us prove that when $a\in (-1,1)$, the geometric sequence converges to $0$. If $a=0$ the sequence is eventually constant equal to $0$, so we may assume $a\neq 0$. By the same arguments as above, we take an arbitrary very small real $\varepsilon\in (0,1)$ and look for $n$ such that $|a|^n<\varepsilon$. As the function $x\mapsto \ln(x)$ is increasing, this is equivalent to $n\ln(|a|)<\ln(\varepsilon)$. On the other hand, as $|a|$ and $\varepsilon$ are in $(0,1)$, their logarithms are negative. Thus \begin{align*} n>\frac{ \ln(\varepsilon) }{ \ln(|a|) }.\end{align*} We choose $N$ to be the smallest natural number such that $N> \ln(\varepsilon)/ \ln(|a|)$. Hence \begin{align*}\lim_{n\to+\infty} a^n=0\end{align*}for any $a\in (-1,1)$.
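As a numerical illustration (the values $a=0.9$ and $\varepsilon=10^{-2}$ are ours), the condition above reads \begin{align*} N>\frac{\ln(10^{-2})}{\ln(0.9)}\approx \frac{-4.605}{-0.105}\approx 43.7, \end{align*} so $N=44$ works: indeed $0.9^{44}\approx 0.0097<10^{-2}$.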

Convergence to $+\infty$: A sequence converges to $+\infty$ if, for any real number $A>0$, however large, there exists a sufficiently large $N\in\mathbb{N}$ such that for any $n\in\mathbb{N},$ $n\ge N$ implies that $u_n>A$.

Convergence to $-\infty$: A sequence converges to $-\infty$ if, for any real number $A>0$, however large, there exists a sufficiently large $N\in\mathbb{N}$ such that for any $n\in\mathbb{N},$ $n\ge N$ implies that $u_n<-A$.

Divergent sequences: We say that a sequence is divergent if its limit is equal to $\pm \infty,$ or if the limit does not exist at all.
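A typical example of the second situation (our example, not in the original text) is the sequence $u_n=(-1)^n$: \begin{align*} u_{2n}=1\quad\text{and}\quad u_{2n+1}=-1\quad\text{for all } n, \end{align*} so no single real number $\ell$ is approached by all the terms $u_n$ with $n$ large, and the limit does not exist.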

Proposition: Every convergent sequence $(u_n)_n$ is bounded, i.e. there exists a real number $M>0$ such that $|u_n|\le M$ for any $n\in\mathbb{N}$.

Proof: Assume that $(u_n)_n$ converges to $\ell\in\mathbb{R}$. Thus for $\varepsilon=1,$ there exists $N\in \mathbb{N}$ such that $|u_n-\ell|\le 1$ for any $n\ge N$, so that $$|u_n|=|(u_n-\ell)+\ell|\le |u_n-\ell|+|\ell|\le 1+|\ell|$$ for any $n\ge N$. We set $\delta:=\max\{|u_0|,\cdots,|u_{N-1}|\}$. Now if we select $M:=\max\{1+|\ell|,\delta\}$, we obtain $|u_n|\le M$ for any $n\in\mathbb{N}$.

How to calculate the limit of a sequence?

In this paragraph, we provide techniques to prove the convergence of sequences and in some cases even calculate the exact limit of the sequence. We first start with some definitions:

Increasing and decreasing: A sequence $(u_n)_n$ is increasing if $u_{n+1}\ge u_n$ for all $n\in\mathbb{N}$. It is decreasing if $u_{n+1}\le u_n$ for all $n$. A sequence is said to be monotone if it is increasing or decreasing. We can also speak of a strictly monotone sequence if we replace respectively “$\le$” and “$\ge$” by “$<$” and “$>$”.

Proposition: Let $(u_n)$ be a real sequence. The following assertions hold:

  • If $(u_n)$ is increasing and there exists a real number $M$ such that $u_n\le M$ for any $n,$ then the sequence $(u_n)_n$ is convergent (see the example after this list).
  • The sequence $(u_n)_n$ is convergent if it is decreasing and there exists a real number $m$ such that $u_n\ge m$ for any $n$.
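To illustrate the first assertion (this example is ours), consider $u_n=1-\frac{1}{n+1}$ for $n\in\mathbb{N}$. This sequence is increasing and satisfies $u_n\le 1$ for all $n$, so the proposition guarantees that it converges; here the limit is easily seen to be \begin{align*}\lim_{n\to+\infty}\left(1-\frac{1}{n+1}\right)=1.\end{align*}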

This proposition only provides information about the convergence of the sequence, not about the exact value of the limit. The following result gives both the convergence and the value of the limit.

The Squeeze theorem: Let $(u_n)_n,(v_n)_n$ and $(w_n)_n$ be three sequences such that\begin{align*}&v_n\le u_n\le w_n,\quad\text{for all}\;n\in\mathbb{N},\quad\text{and}\cr & \lim_{n\to+\infty}w_n=\lim_{n\to+\infty}v_n=\ell.\end{align*}Then the sequence $(u_n)_n$ also converges to the same limit $\ell$.

A practical consequence of this theorem is: if there exists a positive sequence $(\alpha_n)_n$ converging to $0$ and if a sequence $(u_n)_n$ satisfies $|u_n|\le\alpha_n$, then $(u_n)_n$ converges also to $0$. This is because $-\alpha_n\le u_n\le \alpha_n$ for all $n$.

The use of continuous functions: Many sequences are of the form $u_n=f(v_n),$ where $f$ is a continuous function and $(v_n)_n$ is a convergent sequence whose limit $\ell$ is known. Thus $(u_n)_n$ is also a convergent sequence, and its limit is exactly $f(\ell)$. For example, take $u_n=\sin(\frac{1}{n})$. We know that $v_n=\frac{1}{n}\to 0$ and the function $f(x)=\sin(x)$ is continuous on $\mathbb{R}$. Then $(u_n)_n$ converges to $f(0)=\sin(0)=0$.

Exercises on sequences

In the following examples, we show that the computation of the limit of some sequences strongly depends on the squeeze theorem, on geometric sequences, and on the fact that the sequence $(\frac{1}{n})_n$ converges to $0$.

Exercise: Determine the limit of the sequence $u_n=\frac{\sin(n^2)}{n}$ for natural numbers $n\ge 1$.

Proof: We know that the sine of any real number belongs to $[-1,1]$. Using this fact, we have $|\sin(n^2)|\le 1$. Then \begin{align*}|u_n|\le \frac{1}{n}.\end{align*} Using the consequence of the squeeze theorem stated above, we deduce that the sequence $(u_n)_n$ converges to $0$.

Exercise: Compute the limit of the sequence $v_n=2^n\sin(\frac{1}{3^n})$ for $n\in\mathbb{N}$.

Proof: We recall that $|\sin(x)|\le |x|$ for any real number $x$. Applying this inequality, we obtain \begin{align*}|v_n|\le 2^n\cdot \frac{1}{3^n}=\left(\frac{2}{3}\right)^n.\end{align*} On the other hand, as $\frac{2}{3}\in (-1,1)$, the geometric sequence $((\frac{2}{3})^n)_n$ converges to $0$. Thus the sequence $(v_n)_n$ converges to $0$.

Exercise: Calculate the limit of the sequence $u_n=\arctan(\frac{2^n-3^n}{2^n+3^n})$.

Proof: We select $f(x)=\arctan(x)$ for $x\in\mathbb{R}$ and $v_n=\frac{2^n-3^n}{2^n+3^n}$. Then $u_n=f(v_n)$. As $f$ is continuous on $\mathbb{R},$ to compute the limit of the sequence $(u_n)$, it suffices to determine that of the sequence $(v_n)_n$. Factoring out $3^n$, we observe that \begin{align*} v_n&= \frac{3^n \left(\left( \frac{2}{3}\right)^n-1\right)}{3^n \left(\left( \frac{2}{3}\right)^n+1\right)}\cr &=\frac{\left( \frac{2}{3}\right)^n-1}{\left( \frac{2}{3}\right)^n+1}.\end{align*} As the geometric sequence $((\frac{2}{3})^n)_n$ converges to $0$, the sequence $(v_n)_n$ converges to $-1$. Hence $(u_n)_n$ converges to $f(-1)=\arctan(-1)=-\frac{\pi}{4}$.

Exercise: Prove that any real number is a limit of a sequence in $\mathbb{Q},$ the rational numbers set.

Proof: Here we will use the squeeze theorem. In fact, let $x\in\mathbb{R}$. We construct a sequence, of course depending on $x$, of elements in $\mathbb{Q}$ that converges to $x$. We select \begin{align*} u_n=\frac{[nx]}{n},\quad n\in \mathbb{N}^\ast,\end{align*} where $[nx]$ is the integer part of the real number $nx$. It satisfies $nx-1<[nx]\le nx$. Thus, by dividing by $n$ for any $n\in \mathbb{N}^\ast,$  we obtain $$ x-\frac{1}{n}<u_n\le x.$$ Now by the squeeze theorem, the sequence $(u_n)_n$ converges to $x$. The result follows now because $u_n\in\mathbb{Q}$ for any $n$.
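To make this construction concrete (the choice $x=\pi$ is only an illustration), for $x=\pi\approx 3.14159$ we get, for example, \begin{align*} u_{10}=\frac{[10\pi]}{10}=\frac{31}{10}=3.1,\qquad u_{100}=\frac{[100\pi]}{100}=\frac{314}{100}=3.14, \end{align*} which are rational numbers approaching $\pi$ as $n$ grows.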