
Geometric Series


Geometric series are a fundamental concept in mathematics, frequently encountered in various fields, including calculus, finance, and physics. Such a series is a sum of terms in which each term is obtained by multiplying the previous term by a constant ratio.

In this article, we will delve into the properties of geometric series, discuss their convergence behavior, and explore methods for finding their sums.

Definition and Form

A geometric series is defined as the sum of the terms of a geometric sequence, where each term is obtained by multiplying the preceding term by a constant ratio. A geometric sequence takes the form $a, ar, ar^2, ar^3, \ldots$, where $a$ is the first term and $r$ is the common ratio. The sum of such a series can be expressed as $$ \sum_{n=0}^\infty a_n=a+ar+ar^2+ar^3+\cdots=\sum_{n=0}^\infty ar^n.$$

We now discuss the convergence of geometric series.

Convergence of Geometric Series

The convergence behavior of a geometric series is determined by the value of the common ratio $r$. The series converges when the absolute value of $r$ is less than 1, and it diverges otherwise.

Convergent Geometric Series

When $|r| < 1$, the geometric series converges, and its sum can be calculated using the formula $S = \frac{a}{1-r}$, where $S$ represents the sum of the series. This formula can be derived using the concept of limits and algebraic manipulation. It is important to note that the formula is valid only when $|r| < 1$.

Divergence of the Series

When $|r| \ge 1$, the geometric series diverges, meaning it does not have a finite sum: the partial sums do not approach any finite limit as more terms are added.

Examples of Geometric Series

Example 1: Consider the series $\sum_{n=0}^\infty 2^n$. Here, the first term $a$ is $1$, and the common ratio $r$ is $2$. As $|r| = 2 > 1$, the series diverges. Note that the sum formula $S = \frac{a}{1-r}$ does not apply here, since it is valid only when $|r| < 1$.

Example 2: Consider the series $\sum_{n=0}^\infty (1/4)^n$. In this case, the first term $a$ is $1$, and the common ratio $r$ is $1/4$. As $|r| = 1/4 < 1$, the series converges. To find its sum, we use the formula $S = \frac{a}{1-r}$. Plugging in the values, we get $S = \frac{1}{1 - 1/4} = \frac{4}{3}$. Therefore, the series $\sum_{n=0}^\infty (1/4)^n$ converges, and its sum is $\frac{4}{3}$.

As these examples show, the convergence of a geometric series is determined entirely by the position of the common ratio $r$ relative to $1$.
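A quick numerical illustration of this dichotomy, as a minimal Python sketch (the helper name is ours, not a library function):

```python
# Partial sums S_N = sum_{n=0}^{N} a * r**n of a geometric series.
def geometric_partial_sum(a, r, N):
    return sum(a * r**n for n in range(N + 1))

# Convergent case (|r| < 1): the partial sums approach a/(1-r) = 4/3.
for N in (5, 10, 20):
    print(N, geometric_partial_sum(1, 0.25, N))

# Divergent case (|r| > 1): the partial sums blow up.
for N in (5, 10, 20):
    print(N, geometric_partial_sum(1, 2, N))   # 63, 2047, 2097151
```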

Applications to the convergence of series

Exercise: Prove that the following series is convergent: $$ \sum_{n=0}^\infty \sin(e^{-n}).$$ Proof: We recall that the Euler number satisfies $e>1$, so that $0<e^{-1}<1$. This implies that the geometric series $\sum_{n=0}^{+\infty} e^{-n}$ is convergent, due to the previous paragraph. Using the property $|\sin(x)|\le |x|$ for any $x\in\mathbb{R},$ we have $$ |\sin(e^{-n})|\le e^{-n},\quad n\ge 0.$$ By using the comparison test for series of positive terms, we deduce that the series $ \sum_{n=0}^\infty \sin(e^{-n})$ is absolutely convergent, hence convergent.

You may also consult the convergence of series using the ratio test for more examples and exercises with detailed answers.

Harmonic series


The harmonic series is a widely studied mathematical series that arises from the harmonic sequence, which consists of the reciprocals of positive integers. In this article, we will explore the properties of this series, discuss its convergence behavior, examine its divergent nature, and shed light on its mathematical significance.

Definition and Form of the Harmonic Series

The harmonic series is defined as the sum of the reciprocals of positive integers. It takes the form $$\sum_{n=1}^\infty \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots.$$

Divergence of the Harmonic Series

The harmonic series is a remarkable example of a divergent series. As the series progresses, the terms gradually decrease but never reach zero. Nevertheless, the partial sums of the series grow without bound, meaning the series does not converge to a finite value.

Proof of Divergence:

We start by proving a classical result on sequences: if a sequence $(u_n)_n$ is increasing and has no upper bound, then $\lim_{n\to+\infty}u_n=+\infty$. In fact, by assumption, for any $M>0$ there exists $N\in \mathbb{N}$ such that $u_N>M$. Now, since $(u_n)_n$ is increasing, for any $n>N$ we have $u_n\ge u_N>M$. This is exactly the definition of $\lim_{n\to \infty}u_n=+\infty$.

Now we shall use this result to prove that the harmonic series satisfies $$ \sum_{n=1}^{+\infty} \frac{1}{n}=+\infty.$$ To this end, we set $$ H_n=1+\frac{1}{2}+\cdots+\frac{1}{n}=\sum^{n}_{k=1}\frac{1}{k},\qquad (n\ge 1).$$ Clearly, we have $H_{n+1}-H_n=\frac{1}{n+1}>0$. Thus $(H_n)_n$ is strictly increasing. According to the above result, to prove that $\lim_{n\to+\infty}H_n=+\infty$ it suffices to show that the sequence has no upper bound. By contradiction, assume that this sequence has an upper bound. Being increasing and bounded above, it is then a convergent sequence: there exists a real number $\ell$ such that $H_n \to \ell$ as $n\to\infty$. So $H_{2n}\to \ell$ as $n\to \infty,$ and then $H_{2n}-H_n\to 0$ as $n\to\infty$. On the other hand, we have $$ H_{2n}-H_n=\sum_{k=n+1}^{2n} \frac{1}{k}\ge \sum_{k=n+1}^{2n} \frac{1}{2n}=\frac{1}{2}.$$ Now, by letting $n\to\infty,$ we obtain $0\ge \frac{1}{2},$ a contradiction.
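The quantity $H_{2n}-H_n$ appearing in the proof can also be checked numerically; in fact the difference approaches $\ln 2\approx 0.693$, so it indeed never drops below $\frac{1}{2}$. A minimal Python sketch:

```python
# H_n = 1 + 1/2 + ... + 1/n, the n-th partial sum of the harmonic series.
def H(n):
    return sum(1 / k for k in range(1, n + 1))

# The proof uses H_{2n} - H_n >= 1/2; numerically the difference
# approaches ln(2) ~ 0.693, so it never drops below 1/2.
for n in (10, 100, 1000):
    print(n, H(2 * n) - H(n))
```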

Alternative proofs of the divergence

The divergence of the harmonic series can be demonstrated using various methods. One approach is to employ the integral test, which establishes a connection between the series and the integral of the corresponding function. Integrating the function $f(x) = 1/x$ from $1$ to $t$ yields $\ln(t)$, and $\lim_{t\to+\infty}\ln(t)=+\infty$. Since the improper integral diverges, the series diverges as well.

Another method to prove the divergence of the harmonic series employs the comparison test: grouping the terms as $$1+\frac{1}{2}+\left(\frac{1}{3}+\frac{1}{4}\right)+\left(\frac{1}{5}+\cdots+\frac{1}{8}\right)+\cdots,$$ each group of $2^k$ consecutive terms sums to at least $\frac{1}{2}$. The partial sums therefore exceed any prescribed bound, and hence the series must diverge.

Partial Sums and Divergence Rate

The partial sums of the harmonic series grow logarithmically. Specifically, the $n$th partial sum $H_n$ is approximately equal to $\ln(n)$; more precisely, $H_n-\ln(n)$ tends to the Euler–Mascheroni constant $\gamma\approx 0.5772$. Thus $H_n$ grows ever more slowly as $n$ increases, yet still tends to infinity.
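As an illustration, the short Python sketch below compares $H_n$ with $\ln(n)$; the gap stabilizes near $\gamma\approx 0.5772$:

```python
import math

def H(n):
    return sum(1 / k for k in range(1, n + 1))

# H_n - ln(n) stabilizes near the Euler-Mascheroni constant 0.5772...
for n in (10, 1000, 100000):
    print(n, H(n) - math.log(n))
```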

Mathematical Significance

The harmonic series has profound implications in various mathematical contexts, such as number theory and calculus. Some notable aspects include:

  1. Infinitely Many Primes: The divergence of the harmonic series plays a crucial role in proving that there are infinitely many prime numbers. Euler’s proof, utilizing the divergence of this series, demonstrates that the sum of the reciprocals of prime numbers diverges.
  2. Harmonic Mean: The harmonic mean is a statistical measure derived from the harmonic series. It provides a way to calculate the average of a set of numbers when their reciprocals are considered. The harmonic mean is particularly useful when dealing with rates, ratios, and inversely proportional quantities.
  3. Zeta Function: The harmonic series is intimately connected to the Riemann zeta function $\zeta(s)=\sum_{n\ge 1}n^{-s}$; the harmonic series is precisely the boundary case $s=1$, where the sum diverges. The zeta function provides valuable insights into number theory and has deep connections to the distribution of prime numbers.

Conclusion: The harmonic series, formed by summing the reciprocals of positive integers, stands as a paradigmatic example of a divergent series. Its divergence is well-established through various mathematical proofs, highlighting its fundamental properties and importance in mathematical analysis. This famous series finds applications in number theory, analysis, and beyond.

Limits at infinity


For several mathematical models, one generally needs to study the limits at infinity of the solution; this amounts to a kind of stability of the system. We therefore recall this concept and give some examples.

We have already discussed limits of functions at finite points. Now we discuss the case of limits at infinity.

Definitions and properties of the limits at infinity

Let $a,$ $b$, $L,$ and $K$ be real numbers.

  • Limit at plus infinity $+\infty$: Informally, we say that the function $f:[a,+\infty)\to\mathbb{R}$ has the limit $L$ at $+\infty$, and we write $\lim_{x\to+\infty}f(x)=L$, if when the number $x$ is very large, its image $f(x)$ gets very close to the real number $L$. Formally, this means that for any small real number $\varepsilon>0,$ we can find a sufficiently large real number $A>0$ such that for any $x>A$ we have $|f(x)-L|<\varepsilon$.
  • Limit at minus infinity $-\infty$: Similarly, we say that the function $g:(-\infty,b]\to \mathbb{R}$ has $K$ as a limit at $-\infty$ if $g(x)$ approaches $K$ whenever the real number $x$ is sufficiently small, close to $-\infty$. In this case, we write $\lim_{x\to -\infty}g(x)=K$. Formally, it means that for any $\varepsilon>0,$ there exists a sufficiently large real number $A>0$ such that for any $x<-A$ we have $|g(x)-K|<\varepsilon$.

Limits of rational functions at infinity

Let us start with the function $f(x)=\frac{1}{x}$ with $x\in \mathbb{R}^\ast:=\mathbb{R}\setminus\{0\}$.

We have $$ \lim_{x\to+\infty}\frac{1}{x}=0.$$ In fact, let $\varepsilon>0$ be a very small real number. For $x>0,$ the inequality $\frac{1}{x}<\varepsilon$ is verified whenever $x>\frac{1}{\varepsilon}$. Now if we select $A:=\frac{1}{\varepsilon}$, then for any $x>A$ we have $\left|\frac{1}{x}-0\right|=\frac{1}{x}<\frac{1}{A}=\varepsilon$. This ends the proof.

Now if $x\to -\infty,$ then $-x\to +\infty$. So that $$ \lim_{x\to -\infty}\frac{1}{x}=\lim_{x\to -\infty} \frac{-1}{-x}=- \lim_{y\to +\infty} \frac{1}{y}=0.$$

For any $n\in\mathbb{N}^\ast,$ we have $$ \lim_{x\to \pm\infty} \frac{1}{x^{n}}=0.$$ More generally, consider a polynomial $Q(x)=b_n x^n+b_{n-1}x^{n-1}+\cdots+b_1 x+b_0,$ where all the $b_i$ are real numbers with $b_n\neq 0$. Then \begin{align*} \frac{1}{Q(x)}=\frac{1}{x^n}\times \frac{1}{b_n+\frac{b_{n-1}}{x}+\cdots+\frac{b_0}{x^n}}.\end{align*}Thus $$ \lim_{x\to \pm \infty} \frac{1}{Q(x)}=0.$$

Next, we consider rational functions of the form $$ f(x)=\frac{P(x)}{Q(x)}$$ for polynomials $P$ and $Q$ with real coefficients such that the degree of $P$ is at most the degree of $Q,$ $\deg P\le \deg Q,$ and $x\in D_f:=\{x\in \mathbb{R}: Q(x)\neq 0\}$. We distinguish two cases:

If $\deg P=\deg Q=n,$ say $P(x)=a_n x^n+P_1(x)$ and $Q(x)=b_n x^n+Q_1(x)$ with $p=\deg P_1<n$ and $q=\deg Q_1<n$, with coefficients $\alpha_i$ and $\beta_i,$ respectively. Then, factoring out $x^n,$ we obtain \begin{align*} \frac{P(x)}{Q(x)}=\frac{a_n+\alpha_p x^{p-n}+\cdots +\alpha_0 x^{-n}}{b_n+\beta_q x^{q-n}+\cdots+\beta_0 x^{-n}}.\end{align*} As $n>p$ and $n>q,$ according to the above limits we have $$ \lim_{x\to\pm \infty}\frac{P(x)}{Q(x)}=\frac{a_n}{b_n}.$$ As an example, $$ \lim_{x\to\pm \infty}\frac{2x^6+3x^5+10x^2+4}{3x^6+1}=\frac{2}{3}.$$

Now assume that $p=\deg P<q=\deg Q$, and let $a_i$ denote the coefficients of $P$ and $b_i$ the coefficients of $Q$. Then $$ \frac{P(x)}{Q(x)}=\frac{1}{x^{q-p}}\, \frac{a_p+a_{p-1}x^{-1}+\cdots+a_0 x^{-p}}{b_q+b_{q-1}x^{-1}+\cdots+b_0 x^{-q}}.$$ The fact that $q>p$ implies that \begin{align*} \lim_{x\to \pm \infty} \frac{P(x)}{Q(x)}= 0\times \frac{a_p}{b_q}=0.\end{align*} As an example, $$ \lim_{x\to\pm\infty} \frac{x^3+x+1}{x^8+1}=0.$$
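Both limit rules are easy to sanity-check by evaluating the rational functions at large $|x|$; a small Python sketch using the two examples above:

```python
# Evaluate the two example rational functions at large x: the values
# approach the predicted limits 2/3 and 0 respectively.
def f(x):   # deg P = deg Q = 6, so the limit is a_6/b_6 = 2/3
    return (2*x**6 + 3*x**5 + 10*x**2 + 4) / (3*x**6 + 1)

def g(x):   # deg P = 3 < deg Q = 8, so the limit is 0
    return (x**3 + x + 1) / (x**8 + 1)

for x in (1e2, 1e4, 1e6):
    print(x, f(x), g(x))
```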

Limits at infinity problems and solutions

Exercise 1: Determine the limits at infinity of the following functions \begin{align*} f(x)=\frac{\sin(x)}{x^2},\quad g(x)=2^{\frac{1}{x}}. \end{align*} Solution: We know that $|\sin(x)|\le 1$ for any $x\in \mathbb{R}$. Thus for $x\in\mathbb{R}\setminus\{0\},$ we have $|f(x)|\le \frac{1}{x^2}$. This means that \begin{align*} -\frac{1}{x^2}\le f(x)\le \frac{1}{x^2}.\end{align*} According to the squeeze theorem for limits, we deduce that $\lim_{x\to\pm\infty}f(x)=0$. On the other hand, we have $$ g(x)={\rm exp }(\ln(2^{\frac{1}{x}}))=e^{\frac{\ln(2)}{x}}.$$ We know that $\frac{\ln(2)}{x}\to 0$ as $x\to \pm\infty$, and since the exponential function is continuous, $g(x)\to e^0=1$ as $x\to\pm\infty$.

Exercise 2: Determine the limits at infinity of the functions $$ f(x)=\frac{x+\sin(x)}{x+\arctan(x)},\quad g(x)=x^2\sin\left( \frac{1}{x^4}\right).$$ Solution: Factoring out $x,$ we have $$ f(x)=\frac{1+\frac{\sin(x)}{x}}{1+\frac{\arctan(x)}{x}}.$$ As in Exercise 1, we have $\lim_{x\to\pm \infty}\frac{\sin(x)}{x}=0$. Similarly, as $|\arctan(x)|\le \frac{\pi}{2},$ we also have $\lim_{x\to\pm \infty}\frac{\arctan(x)}{x}=0$. Thus $f(x)\to 1$ as $x\to\pm\infty$. For the function $g,$ we use $|\sin(t)|\le|t|$ to estimate $$ |g(x)|\le x^2 \frac{1}{x^4}=\frac{1}{x^2}.$$ According to the squeeze theorem, we have $g(x)\to 0$ as $x\to\pm \infty$.

Exercise 3: Determine the limit $$ \lim_{x\to+\infty}\left(1+\frac{1}{x}\right)^x.$$ Proof: We rewrite \begin{align*} \left(1+\frac{1}{x}\right)^x=e^{x\ln\left(1+\frac{1}{x}\right)}.\end{align*} On the other hand, we know that when $x\to\infty,$ we have \begin{align*} \ln\left(1+\frac{1}{x}\right)= \frac{1}{x}-\frac{1}{2}\frac{1}{x^2}+o\left(\frac{1}{x^2}\right).\end{align*} Then \begin{align*} \left(1+\frac{1}{x}\right)^x= e^{1-\frac{1}{2x}+o\left(\frac{1}{x}\right)}.\end{align*} Thus $$ \lim_{x\to+\infty} \left(1+\frac{1}{x}\right)^x=e.$$
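A quick numerical check of this classical limit (plain Python):

```python
import math

# (1 + 1/x)**x approaches e = 2.71828... as x grows.
for x in (1e2, 1e4, 1e6):
    print(x, (1 + 1/x)**x)
print("e =", math.e)
```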

Uniformly continuous function


Any uniformly continuous function is actually a continuous function. However, as we will see below, the converse is not true. This important class of functions satisfies useful properties. In this article, we will discuss all these facts in detail.

The uniform continuity of a function

In some mathematical problems, continuity is not enough to make a decision; the function needs more regularity. In this section, we will see how uniform continuity can help produce important properties of a function. Before that, let us first recall the definition of the uniform continuity of a function.

The definition and first properties of the uniform continuity

Definition: Let $I$ be a subset of the set of real numbers $\mathbb{R}$. We say that $f:I\to\mathbb{R}$ is a uniformly continuous function on $I$ if for any $\varepsilon>0,$ there exists $\alpha>0$ such that for any $x,y\in I$, we have $$ |x-y|<\alpha \Rightarrow |f(x)-f(y)|<\varepsilon.$$

Uniform continuity mainly means that if two arbitrary points in the interval $I$ are very close to each other, then their images must also be very close to each other. This also implies that the graph of the function $f$ is located in a band of small width.

Proposition: Every uniformly continuous function is continuous.

Proof: Let the function $f:I\to \mathbb{R}$ be uniformly continuous, let $a\in I$, and let $\varepsilon>0$. By definition, there exists $\alpha>0$ such that for any $x\in I$, $$ |x-a|<\alpha \Rightarrow |f(x)-f(a)|<\varepsilon.$$ This means that $f$ is continuous at $a$.

Examples: 1- The function $x\mapsto \sin(x)$ is uniformly continuous on $\mathbb{R}$. In fact, we know that the sine function is differentiable on $\mathbb{R}$. Thus, by the mean value theorem, for any $x,y\in \mathbb{R}$ there exists a real number $c$ between $x$ and $y$ such that $\sin(x)-\sin(y)=\cos(c)(x-y)$. Thus $|\sin(x)-\sin(y)|\le |x-y|$. Now take $\varepsilon>0$ and select $\alpha=\varepsilon$. Then $|x-y|<\alpha$ implies that $|\sin(x)-\sin(y)|<\varepsilon$. This ends the proof.

2- Similarly, the function $x\mapsto \arctan(x)$ is uniformly continuous on $\mathbb{R}$. In fact, again by the mean value theorem, for any $x,y$ we have $|\arctan(x)-\arctan(y)|\le |x-y|$. We then use the same technique as for the sine function.

3- The square root function is uniformly continuous on $\mathbb{R}^+$. In fact, as a simple exercise one can see that for any $x,y\in \mathbb{R}^+,$ $$ |\sqrt{x}-\sqrt{y}|\le \sqrt{|x-y|}.$$ Let now $\varepsilon>0$ and choose $\alpha=\varepsilon^2$. We then have for any $x,y\in\mathbb{R}^+$, if $|x-y|<\alpha= \varepsilon^2,$ then $\sqrt{|x-y|}< \varepsilon$. This implies that $|\sqrt{x}-\sqrt{y}|< \varepsilon$. So that the function $x\mapsto \sqrt{x}$ is uniformly continuous on $\mathbb{R}^+$.

Example of a continuous function that is not uniformly continuous

We first state and prove a result that characterizes the uniform continuity of functions in terms of sequences.

Theorem: A function $f:I\to\mathbb{R}$ is uniformly continuous on $I$ if and only if, for any sequences $(u_n)_n,(v_n)_n\subset I$ such that $u_n-v_n\to 0$ as $n\to\infty,$ we have $f(u_n)-f(v_n)\to 0$ as $n\to \infty$.

Proof: Let $\varepsilon>0$. We first prove the direct implication. On the one hand, there exists $\alpha>0$ such that for any $x,y\in I$, we have $$ |x-y|<\alpha \Rightarrow |f(x)-f(y)|<\varepsilon.$$ On the other hand, as $u_n-v_n\to 0$ as $n\to\infty,$ there exists $N\in\mathbb{N}$ such that for any $n\in\mathbb{N},$ $n>N$ implies $|u_n-v_n|<\alpha$. This implies that $|f(u_n)-f(v_n)|<\varepsilon$ whenever $n>N$. Thus $f(u_n)-f(v_n)\to 0$ as $n\to\infty$. Conversely, assume that $f$ is not uniformly continuous on $I$. This means that there exists $\varepsilon>0$ such that for all $\alpha>0$, there exist $x,y\in I$ with $|x-y|<\alpha$ and $|f(x)-f(y)|\ge \varepsilon$. Take $\alpha=\frac{1}{n}$ for each $n\in\mathbb{N}^\ast$: there exist $x_n,y_n\in I$ such that $|x_n-y_n|<\frac{1}{n}$ and $|f(x_n)-f(y_n)|\ge \varepsilon$. This implies that $x_n-y_n\to 0$ while $|f(x_n)-f(y_n)|$ does not go to zero as $n\to \infty$. This is a contradiction.

The example: The function $f(x)=\sin(x^2)$ is not uniformly continuous on $\mathbb{R}$. In fact, we will apply the above theorem. Let’s consider the sequences $$ u_n=\sqrt{\pi n},\quad v_n=\sqrt{\left( n+\frac{1}{2}\right)\pi}.$$ First, we have \begin{align*} u_n-v_n&= \frac{\pi n- \left( n+\frac{1}{2}\right)\pi}{\sqrt{\pi n}+\sqrt{\left( n+\frac{1}{2}\right)\pi}}\cr& = -\frac{\pi}{2}\, \frac{1}{\sqrt{\pi n}+\sqrt{\left( n+\frac{1}{2}\right)\pi}}.\end{align*} Clearly $u_n-v_n\to 0$ as $n\to \infty$. On the other hand, \begin{align*}f(u_n)-f(v_n)=\sin(n\pi)-\sin\left(\frac{\pi}{2}+n\pi\right)=-\cos(n\pi)=(-1)^{n+1}.\end{align*} Thus the sequence $(|f(u_n)-f(v_n)|)_n$ does not go to zero. This ends the proof.
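The two sequences from the proof can be tabulated to watch the phenomenon: the inputs become arbitrarily close while the outputs stay (approximately) a unit distance apart. A minimal Python sketch:

```python
import math

f = lambda x: math.sin(x**2)

for n in (10, 100, 1000):
    u = math.sqrt(math.pi * n)
    v = math.sqrt((n + 0.5) * math.pi)
    # u - v shrinks to 0, yet |f(u) - f(v)| stays near 1.
    print(n, u - v, abs(f(u) - f(v)))
```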

The Continuous Extension Theorem

We recall that a sequence $(u_n)_n$ is a Cauchy sequence if for any $\varepsilon>0,$ there exists a positive integer $N\in \mathbb{N}$ such that for any $p,q\in \mathbb{N},$ $$ p>N,\;q>N \Rightarrow |u_p-u_q|<\varepsilon.$$

Theorem: If a function $f:I\to\mathbb{R}$ is uniformly continuous on $I$, then for any Cauchy sequence $(u_n)_n\subset I$, the sequence $(f(u_n))_n$ is a Cauchy sequence. The converse holds when $I$ is bounded.

Proof: The proof is very similar to that of the previous theorem; we omit it.

Theorem: Let $a$ and $b$ be two real numbers with $a<b$, and let $f:(a,b)\to \mathbb{R}$ be a uniformly continuous function. Then we can extend $f$ to a continuous function $\tilde{f}$ on $[a,b]$.

Proof: It suffices to prove that the limits of $f$ at the points $a$ and $b$ exist. Since $a$ and $b$ play the same role, we only focus on the point $a$. Let $(u_n)_n\subset (a,b)$ be such that $u_n\to a$, and let us prove that $(f(u_n))_n$ has a limit. As $(u_n)_n$ is a convergent sequence, it is a Cauchy sequence. So, by the above theorem, $(f(u_n))_n$ is a Cauchy sequence. As we work in $\mathbb{R},$ which is complete, the sequence $(f(u_n))_n$ has a limit. This ends the proof.

The extension of $f$ is the function $\tilde{f}:[a,b]\to\mathbb{R}$ defined by \begin{align*}\tilde{f}(x)=\begin{cases} \ell_1,& x=a,\cr f(x),& x\in (a,b),\cr \ell_2,& x=b,\end{cases}\end{align*} where $$ \ell_1=\lim_{x\to a}f(x),\quad \ell_2=\lim_{x\to b}f(x).$$

This is a great theorem on the continuous extension of functions. The results of this page can also be extended to functions with values in a complete normed vector space, that is, one in which every Cauchy sequence converges in the same space.

Functions of one variable


We provide all necessary properties of functions of one variable such as limit at a given point, continuity, and differentiability. Such functions are defined over a domain of the set of real numbers.

Continuity of functions of one variable

We denote by $\mathbb{R}$ the set of real numbers. If $f$ is a real function, then we denote by $\mathscr{D}_f$ the domain of definition of $f$, that is, the subset of $\mathbb{R}$ on which $f$ is well defined. For some functions we have $\mathscr{D}_f=\mathbb{R}$.

To represent the function, we write $f:\mathscr{D}_f\to \mathbb{R}$, and the graph of $f$ is the subset of $\mathbb{R}^2$ defined by $$ G(f):=\{(x,f(x)):x\in \mathscr{D}_f\}.$$

Algebraic properties of functions

A function $f:\mathscr{D}_f\to \mathbb{R}$ is said to be injective if $f(x)=f(y)$ implies $x=y$. Injectivity is related to the uniqueness of solutions of algebraic equations. As an example, the function $f(x)=2^x$ for $x\in\mathscr{D}_f=\mathbb{R}$ is injective. In fact, if $f(x)=f(y),$ then $2^x=2^y$. Applying the logarithm to both sides, we obtain $x\ln(2)=y\ln(2),$ so that $x=y$.

We say that $f$ is surjective if for any $y\in \mathbb{R},$ there exists a real number $x\in \mathscr{D}_f$ such that $y=f(x)$. We note that surjectivity is related to the existence of solutions of algebraic equations.

The function $f$ is said to be bijective if it is both injective and surjective. This means that for every $y\in \mathbb{R},$ there exists a unique $x\in \mathscr{D}_f$ such that $y=f(x)$. We note that bijectivity is related to the existence and uniqueness of the solutions of algebraic equations.

If $f:\mathscr{D}_f\to\mathbb{R}$ is bijective, we denote by $f^{-1}:\mathbb{R}\to \mathscr{D}_f$ the inverse of $f$. It satisfies $f\circ f^{-1}=f^{-1}\circ f={\rm id}$.

Limit and continuity of a function

Let $f:\mathscr{D}_f\to\mathbb{R}$ be a function of one variable, and let $x_0\in \mathbb{R}$ be a point, possibly outside $\mathscr{D}_f$, such that for any $\varepsilon>0,$ $(x_0-\varepsilon,x_0+\varepsilon)\cap \mathscr{D}_f\neq \emptyset$.

We say that $f$ admits a real number $\ell$ as a limit at $x_0$ if for any $\varepsilon>0,$ there exists $\alpha>0$ such that for any $x\in \mathscr{D}_f,$ we have $$ |x-x_0|<\alpha \Rightarrow |f(x)-\ell|<\varepsilon.$$

We say that $f$ admits $+\infty$ as a limit at $x_0$ if for any $A>0,$ there exists $\alpha>0$ such that for any $x\in \mathscr{D}_f$, $$ |x-x_0|<\alpha \Rightarrow f(x)>A.$$

The limit of the function $f$ at $x_0$ is $-\infty$ if for any $A>0,$ there exists $\alpha>0$ such that for any $x\in \mathscr{D}_f$, $$ |x-x_0|<\alpha \Rightarrow f(x)<-A.$$

For more details on limits, you may also consult our post on how to find the limit of a function.

Let us now discuss the continuity of functions of one variable.

A function $f:\mathscr{D}_f\to\mathbb{R}$ is continuous at a point $a\in \mathscr{D}_f$ if the limit of $f$ at $a$ is exactly $f(a)$. That is, for any $\varepsilon>0,$ there exists $\alpha>0$ such that for any $x\in \mathscr{D}_f,$ we have $$ |x-a|<\alpha \Rightarrow |f(x)-f(a)|<\varepsilon.$$

Sometimes it is more practical to verify that $\lim_{h\to 0}f(a+h)=f(a)$ in order to prove that the function $f$ is continuous at $a$.

We also say that $f$ is continuous on $\mathscr{D}_f$ if the function $f$ is continuous at any point of $\mathscr{D}_f$.

Examples of continuous functions of one variable

The function $x\mapsto \sin(x)$ is continuous on $\mathbb{R}$. In fact, let $a,h\in\mathbb{R}$. By using the trigonometric identity $$ \sin(a+h)=\sin(a)\cos(h)+\cos(a)\sin(h),$$ we have \begin{align*}\sin(a+h)-\sin(a)=\sin(a)(\cos(h)-1)+\cos(a)\sin(h).\end{align*} We know that $$ \lim_{h\to 0}\cos(h)=1,\quad \lim_{h\to 0}\sin(h)=0.$$ Then $\lim_{h\to 0}\sin(a+h)=\sin(a)$. This ends the proof.

You may also consult a post on continuous functions of one variable for more details and exercises with detailed answers.

Mean value theorem


One of the most fundamental theorems in mathematical analysis is the mean value theorem. Geometrically, the theorem says that somewhere between points A and B on a differentiable curve there is at least one tangent line parallel to the secant line AB.

Let’s discover together this great theorem and give some of its applications. Here we will use the concept of differentiable functions.

Statement and applications of the mean value theorem

The mean value theorem is used to prove certain regularity properties of differentiable functions. In fact, it is used to prove that a function is uniformly continuous, to demonstrate that a function is a Lipschitz function, etc.

Theorem: Let $a$ and $b$ be real numbers and let $f$ be a real-valued function continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$. Then there exists a real number $c\in (a,b)$ such that $f(b)-f(a)=f'(c)(b-a)$.

Some applications: Show that for any real numbers $x,y$, we have $|\sin(x)-\sin(y)|\le |x-y|$ and $|\arctan(x)-\arctan(y)|\le |x-y|$. In fact, without loss of generality, we can assume that $x<y$. The functions $t\mapsto \sin(t)$ and $t\mapsto \arctan(t)$ are continuous and differentiable on $\mathbb{R}$. Thus, according to the above theorem, there exist $c_1\in (x,y)$ and $c_2\in (x,y)$ such that $\sin(x)-\sin(y)=\cos(c_1)(x-y)$ and $\arctan(x)-\arctan(y)=\frac{1}{1+c_2^2}(x-y)$. This is because $\sin'(t)=\cos(t)$ and $\arctan'(t)=\frac{1}{1+t^2}$ for any $t\in\mathbb{R}$. Now we take absolute values and use the facts that $|\cos(c_1)|\le 1$ and $\frac{1}{1+c_2^2}\le 1$ to end the proof.
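To see the theorem in action numerically, one can locate such a point $c$ for $f=\sin$ on $[0,2]$: since $\cos$ is decreasing there, bisection applies. A minimal Python sketch (the variable names are ours):

```python
import math

# Mean value theorem for f = sin on [a, b] = [0, 2]: find c with
# cos(c) = (sin(b) - sin(a)) / (b - a), using bisection.
a, b = 0.0, 2.0
slope = (math.sin(b) - math.sin(a)) / (b - a)

# cos is decreasing on [0, 2], with cos(a) > slope > cos(b),
# so bisection converges to the (here unique) point c.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if math.cos(mid) > slope:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(c, math.cos(c), slope)   # cos(c) matches the secant slope
```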

The vectorial version of the mean value theorem

Consider a normed vector space $(E,\|\cdot\|)$. We deal with vector-valued functions of the form $f:[a,b]\to E$. Then the mean value theorem for such functions is somehow different. In fact, instead of equality, we have inequality. More precisely, we have the following result.

Theorem: Assume that a function $f:[a,b]\to E$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Moreover, assume that there exists a constant $M>0$ such that $\|f'(t)\|\le M$ for any $t\in (a,b)$. Then $$ \|f(b)-f(a)\|\le M |b-a|.$$

Ratio test for series


The Ratio Test stands as one of the powerful tools at our disposal for investigating the convergence behavior of series. By examining the ratio of consecutive terms, the Ratio Test allows us to make conclusive statements about the convergence or divergence of a series. In this article, we will explore the Ratio Test, discuss its convergence criteria, and demonstrate its applications in analyzing series.

We mention that in the French mathematical school, this test is called d’Alembert’s rule, as it is due to the French mathematician d’Alembert.

What is a ratio test?

Before diving into the Ratio Test, let’s briefly review the concept of series convergence. Given an infinite series represented as $\sum_{n=0}^{\infty}a_n$, where $a_n$ denotes the $n$th term of the series, convergence refers to the behavior of the series as we add more terms. If the partial sums approach a finite limit as the number of terms increases, we say that the series converges. On the other hand, if they do not approach a limit or grow indefinitely, the series diverges. More precisely, we recall that a series $\sum_{n= 0}^{+\infty}u_n$ is convergent if the partial sums sequence $(S_n)_n$ defined by $S_n=u_0+\cdots+u_n$ has a finite limit. It is called divergent if the limit is $\pm\infty$ or the limit does not exist at all.

The Ratio Test is a convergence test that investigates the limiting behavior of the ratio of consecutive terms in a series. It provides valuable insights into the convergence behavior and determines whether a series converges or diverges.

Theorem: Assume a sequence of real numbers $(u_n)_n$ satisfies $u_n>0$ and there exists a real number $\ell$ such that $$ \lim_{n\to+\infty} \frac{u_{n+1}}{u_n}=\ell.$$ Then the following assertions hold:

  • The series $\sum_{n=0}^{+\infty}u_n$ is convergent if $\ell<1$;
  • it is divergent if $\ell>1$;
  • finally, if $\ell=1$ we cannot conclude.
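To make the test concrete, here is a minimal Python sketch that estimates the limit $\ell$ numerically for the two series treated in the example below (the helper `ratio_limit_estimate` is our own name, not a standard library function):

```python
import math

def ratio_limit_estimate(u, n=50):
    """Crude estimate of lim u(n+1)/u(n), evaluated at a large index n."""
    return u(n + 1) / u(n)

# u_n = 1/n!: the ratio is 1/(n+1) -> 0 < 1, so the series converges.
print(ratio_limit_estimate(lambda n: 1 / math.factorial(n)))

# v_n = n/2^n: the ratio is (n+1)/(2n) -> 1/2 < 1, so it converges too.
print(ratio_limit_estimate(lambda n: n / 2**n))
```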

A few words about d’Alembert

D’Alembert was a French mathematician and philosopher, born in Paris in 1717 and died in 1783. He was successful at law school; however, the legal profession did not appeal to him, so he decided to take courses in medicine. Only after a while did he turn to mathematics, where he later became one of the greatest masters.

Examples of application of the ratio test

We give some applications of the ratio test to classical series.

Example 1: Discuss the convergence of the following series \begin{align*} \sum_{n=0}^{+\infty} \frac{1}{n!},\quad \sum_{n=0}^{+\infty} \frac{n}{2^n}.\end{align*} Solution: Let us put $u_n=\frac{1}{n!}$. Then we have \begin{align*} \frac{u_{n+1}}{u_n}=\frac{\frac{1}{(n+1)!}}{\frac{1}{n!}}=\frac{1}{n+1}.\end{align*} Clearly the limit of this ratio is $0<1$. Then the series $\sum_{n=0}^{+\infty} \frac{1}{n!}$ is convergent. On the other hand, set $v_n=\frac{n}{2^n}$ for $n\ge 1$. Then $$ \frac{v_{n+1}}{v_n}=\frac{n+1}{2^{n+1}}\times \frac{2^n}{n}=\frac{n+1}{2n}.$$ The limit of this ratio is $\frac{1}{2}<1$. Thus, by the ratio test, the series $ \sum_{n=0}^{+\infty} \frac{n}{2^n}$ is also convergent.

Convergent series examples


The main purpose of this article is to provide examples of convergent series. We also speak of divergent series. Before that, we first give a concise summary of the properties of the series and recall the convergence criteria of the series with the proofs.

In fact, to fully understand the contents of this page, some background on convergent sequences is necessary.

Convergent series, definition, and properties

In the sequel, we recall some generalities about convergent series.

In mathematics, a numerical series is built from a sequence of real numbers $(u_n)$ together with the sequence of partial sums $S_n:=u_0+\cdots+u_n$, $n\in\mathbb{N}$.

The series is convergent if the sequence $(S_n)_n$ has a finite limit. It is said to be divergent if the limit of $(S_n)_n$ is $\pm\infty$ or does not exist.

In general, the series associated with a sequence $(u_n)_n$ is denoted by $\sum_{n=0}^{+\infty} u_n,$ or $\sum_{n\ge 0} u_n$, if the sequence $u_n$ is defined for any $n\ge 0$. It is denoted by $\sum_{n=N}^{+\infty} u_n,$ or $\sum_{n\ge N} u_n$, if the sequence $u_n$ is defined for any $n\ge N$, for some $N\in\mathbb{N}$. We mention that the convergence of a series does not depend on the initial index $N$.

Examples: Here we give the classic series that you absolutely must know:

  • The geometric series: Let $a\in (-1,1)$. Then $\sum_{n=0}^{+\infty}a^n$ is a convergent series and \begin{align*} \sum_{n=0}^{+\infty}a^n=\frac{1}{1-a}.\end{align*} In fact, according to the remarkable identities we have \begin{align*} S_n=1+a+a^2+\cdots+a^n=\frac{1-a^{n+1}}{1-a}.\end{align*} On the other hand, as $a\in (-1,1),$ the geometric sequence $(a^n)_n$ is convergent and we have $a^{n+1}\to 0$ as $n\to+\infty$. Thus the result follows.
  • The harmonic series: The series $\sum_{n=1}^{+\infty}\frac{1}{n}$ is divergent and we have \begin{align*}\sum_{n=1}^{+\infty}\frac{1}{n}=+\infty.\end{align*} In fact, by contradiction, assume that this series is convergent. Then there exists a real number $\ell$ such that $S_n\to \ell$ as $n\to +\infty$. Remark that \begin{align*} S_{2n}-S_n=\sum_{k=n+1}^{2n} \frac{1}{k}\ge \sum_{k=n+1}^{2n} \frac{1}{2n}=\frac{1}{2}.\end{align*} But we also have $S_{2n}\to \ell,$ so by letting $n\to\infty$ in both sides of the above inequality, we obtain $0\ge \frac{1}{2}$. This is a contradiction. Thus the harmonic series is not convergent.

A series $\sum_{n=0}^{+\infty}u_n$ is called absolutely convergent if the series $\sum_{n=0}^{+\infty}|u_n|$ is convergent in the above sense.

Remark: Every absolutely convergent series is a convergent series. In fact, denote by $(S^u_n)_n$ and $(S^{|u|}_n)_n$ the partial sums of the sequences $(u_n)_n$ and $(|u_n|)_n$. By assumption, the sequence $(S^{|u|}_n)_n$ is convergent, so it is a Cauchy sequence. Let us now prove that $(S^u_n)_n$ is convergent. In fact, for any $p,q\in\mathbb{N}$ with $p>q$, we have $$ |S^u_p-S^u_q|\le \sum^{p}_{k=q+1} |u_k|=S^{|u|}_p-S^{|u|}_q.$$ This implies that $(S^u_n)_n$ is a Cauchy sequence. Thus it converges. This ends the proof.

Series of positive terms

Let $\sum_{n=0}^{+\infty}u_n$ be a series such that $u_n\ge 0$ for any $n$. As $S_{n+1}-S_n=u_{n+1}\ge 0,$ the partial sums sequence $(S_n)_n$ is increasing. Thus the series $\sum_{n=0}^{+\infty}u_n$ is convergent if and only if there exists a real number $M>0$ such that $0\le S_n\le M$ for any $n$.

What you should take into account for a series of positive terms is that the convergence of the series is equivalent to just finding an upper bound of the sequence $(S_n)_n$. We then use this remark to prove the following comparison criteria for convergence and divergence of series.

Theorem: Let $\alpha\in (1,+\infty)$. Then the series $$ \sum_{n=1}^\infty \frac{1}{n^\alpha}$$ is convergent.

Proof: We will use background from improper integrals. We know that for $\alpha>1,$ we have \begin{align*} \int^{+\infty}_1 \frac{1}{x^\alpha}dx=\left[ \frac{x^{1-\alpha}}{1-\alpha}\right]^{x=+\infty}_{x=1}=\frac{1}{\alpha-1}.\end{align*} As the terms of the series $u_n=\frac{1}{n^\alpha}$ are positive, we will use this integral to find an upper bound for the partial sums sequence $$ S_n=1+\frac{1}{2^\alpha}+\cdots+\frac{1}{n^\alpha},\quad \forall n\in\mathbb{N}^\ast.$$ Since $\frac{1}{k^\alpha}\le \int^{k}_{k-1}\frac{1}{x^\alpha}dx$ for $k\ge 2$, we have \begin{align*} S_n=1+\frac{1}{2^\alpha}+\cdots+\frac{1}{n^\alpha}&<1+\int^n_1 \frac{1}{x^\alpha}dx\cr & < 1+\int^{+\infty}_1 \frac{1}{x^\alpha}dx=1+\frac{1}{\alpha-1}=\frac{\alpha}{\alpha-1}.\end{align*}This ends the proof.
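Numerically, for $\alpha=2$ the bound above reads $S_n<2$, consistent with the classical value $\sum_{n\ge 1}1/n^2=\pi^2/6\approx 1.645$. A quick check in plain Python:

```python
import math

alpha = 2
bound = alpha / (alpha - 1)   # = 2, the bound from the proof above
for n in (10, 1000, 100000):
    S_n = sum(1 / k**alpha for k in range(1, n + 1))
    print(n, S_n, "<", bound)
print("pi^2/6 =", math.pi**2 / 6)   # the exact value of the sum
```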

Comparison test for series

Proposition (convergence of series using the comparison test): Assume that we have two sequences $(u_n)_n$ and $(v_n)_n$ such that $0\le u_n\le v_n$ for all $n\ge 0$. Then:

  1. if the series $\sum_{n=0}^{+\infty}v_n$ converges, then the series $\sum_{n=0}^{+\infty}u_n$ also converges;
  2. if the series $\sum_{n=0}^{+\infty}u_n$ diverges, then the series $\sum_{n=0}^{+\infty}v_n$ diverges as well.

Proof: 1- Let us denote by $(S^u_n)_n$ and $(S^v_n)_n$ the partial sums sequences associated with the sequences $(u_n)$ and $(v_n)$. As, by assumption, $(S^v_n)_n$ is convergent, there exists a real $M>0$ such that $S_n^v\le M$. But $S_n^u\le S^v_n\le M$ for any $n$. This implies that the series $\sum_{n=0}^{+\infty}u_n$ is convergent.

2- This is just the contraposition of the first result.

Examples: 1- The series $\sum_{n=0}^{+\infty}\sin\left(\frac{1}{2^n}\right)$ is convergent. In fact, using the fact that $|\sin(x)|\le |x|$ for any $x\in \mathbb{R},$ we deduce that $\left|\sin\left(\frac{1}{2^n}\right)\right|\le \left( \frac{1}{2}\right)^n$ for any $n$. We know that the geometric series $\sum_{n=0}^\infty \left( \frac{1}{2}\right)^n$ is convergent. Thus the series $\sum_{n=0}^{+\infty}\sin\left(\frac{1}{2^n}\right)$ is absolutely convergent, hence convergent.

2- The series $\sum_{n=1}^{+\infty} \frac{1}{\sqrt{n}}$ is divergent. In fact, for any $n\ge 1,$ we have $\frac{1}{n}\le \frac{1}{\sqrt{n}}$ and the harmonic series $\sum_{n=1}^{+\infty}\frac{1}{n}$ is divergent. The result follows using the above proposition.

3- The series $\sum_{n\ge 0}\frac{n}{n^3+1}$ is convergent. In fact, for any $n\ge 1,$ we have $0<\frac{n}{n^3+1}\le \frac{n}{n^3}=\frac{1}{n^2}$. Thus the convergence follows by the convergence of the series $\sum_{n=1}^\infty \frac{1}{n^2}$. Here we take $\alpha=2$ in the above theorem.

The equivalence test for convergent series

Theorem: Let $(u_n)_n$ and $(v_n)_n$ be positive equivalent sequences, $u_n\sim v_n$. That is, $\frac{u_n}{v_n}\to 1$ as $n\to\infty$. Then the series $\sum_{n=0}^\infty u_n$ and $\sum_{n=0}^\infty v_n$ are of the same nature: both convergent, or both divergent.

Proof: By using the definition of the limit of a sequence, for any $\varepsilon\in (0,1)$, there exists $N\in\mathbb{N}$ such that for any $n\ge N$, we have $|\frac{u_n}{v_n}-1|\le \varepsilon$. Hence \begin{align*} (1-\varepsilon) v_n\le u_n\le (1+\varepsilon)v_n,\quad \forall n\ge N.\end{align*} The result follows by using the comparison test; see the previous subsection.

Examples: 1- The series $\sum_{n=1}^{+\infty}\sin(\frac{1}{n})$ is divergent. In fact, we have $\sin(1/n)\ge 0$ because for any $n\ge 1,$ $\frac{1}{n}\in(0,1]\subset [0,\frac{\pi}{2}]$. On the other hand, \begin{align*} \lim_{n\to\infty}\frac{\sin(\frac{1}{n})}{\frac{1}{n}}=1.\end{align*} Thus $\sin(1/n)\sim 1/n$. The result now follows from the divergence of the harmonic series $\sum_{n\ge 1} \frac{1}{n}$.
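The equivalence $\sin(1/n)\sim 1/n$ used here is easy to observe numerically; a minimal Python sketch:

```python
import math

# The ratio sin(1/n) / (1/n) = n * sin(1/n) tends to 1, so the series
# sum sin(1/n) and the harmonic series are of the same (divergent) nature.
for n in (10, 1000, 100000):
    print(n, math.sin(1 / n) * n)
```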

Probability density function


The Probability Density Function (PDF) is a fundamental concept in probability theory and statistics that allows us to describe the likelihood of a continuous random variable taking on a specific value or falling within a particular range. Whether you’re a student, researcher, or simply someone curious about probability, this article aims to provide a comprehensive understanding of Probability Density Functions and their significance in various fields.

The density function simplifies the expression of the probability distribution. Let’s discover together without further delay this great function.

What is a probability density function?

A Probability Density Function (PDF) is a mathematical function that describes the probability distribution of a continuous random variable. Unlike discrete random variables, which have a probability mass function, continuous random variables require a PDF to define their probability distribution. The PDF represents the relative likelihood of observing different outcomes within a range of values.

To make the notion of a PDF precise, let $(\Omega,\mathscr{A},\mathbb{P})$ be a probability space and $\mathscr{B}$ the Borel algebra formed by the open sets of $\mathbb{R}$. Moreover, if $x\in \mathbb{R}$ and $X$ is a random variable on $(\Omega,\mathscr{A}),$ we denote $(X\le x)=X^{-1}((-\infty,x])=\{\omega\in\Omega: X(\omega)\le x\}$. More generally, if $B$ is a Borel set, we denote $(X\in B)=X^{-1}(B)=\{\omega\in \Omega: X(\omega)\in B\}$.

Definition: The probability density function, PDF, of a continuous random variable $X:(\Omega,\mathscr{A})\to (\mathbb{R},\mathscr{B})$ is a positive integrable function $f_X$ on $\mathbb{R}$ such that $$ \int^{+\infty}_{-\infty}f_X(x)dx=1$$ and for any $a,b\in\mathbb{R}$ with $a<b$, we have $$ \mathbb{P}(a\le X\le b)=\int^b_a f_X(x)dx.$$ In this case, we say that the random variable $X$ has the probability density function $f_X$.

Relation with cumulative distribution function

While the PDF describes the likelihood of obtaining specific values or ranges, the Cumulative Distribution Function (CDF) provides the probability of a random variable being less than or equal to a certain value. The CDF can be obtained by integrating the PDF. The relationship between the PDF and CDF is crucial in probability theory and statistical inference.

Let $X$ be a random variable and denote by $F_X$ its cumulative distribution function, CDF. That is, for any $x\in \mathbb{R},$ $F_X(x)=\mathbb{P}(X\le x)$.

Assume that a random variable $X$ has a density $f_X$. Then, according to the previous paragraph, we have $\mathbb{P}(X=x)=\mathbb{P}(x\le X\le x)=0$ for any $x\in \mathbb{R}$. Thus, for any $a,b\in\mathbb{R}$ with $a<b,$ we can write \begin{align*} F_X(b)-F_X(a)&=\mathbb{P}(a<X\le b)\cr &= \mathbb{P}(a\le X< b)\cr& = \mathbb{P}(a\le X\le b)\cr &=\int^b_a f_X(x)dx.\end{align*} Let us now use properties of the cumulative distribution function to derive further properties of the density function. We know that $F_X(x)\to 1$ as $x\to+\infty$ and $F_X(x)\to 0$ as $x\to-\infty$. Then, by letting $a\to -\infty,$ we obtain \begin{align*} F_X(b)=\int^b_{-\infty}f_X(x)dx.\end{align*} On the other hand, the identity \begin{align*} \frac{F_X(x)-F_X(a)}{x-a}=\frac{1}{x-a}\int^x_a f_X(t)dt\end{align*} shows that if $f_X$ is continuous at the point $a,$ then the function $F_X$ is differentiable at $a$ and $F'_X(a)=f_X(a)$. From this, we also deduce that if the density function $f_X$ is piecewise continuous, then the cumulative distribution function $F_X$ is piecewise differentiable and $F_X'(x)=f_X(x)$ for almost every $x$. This can also be reformulated as $dF_X(x)=f_X(x)dx.$
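As an illustration of the relation $F_X(b)=\int_{-\infty}^{b}f_X(x)\,dx$, the sketch below uses the exponential density (rate $\lambda=2$, our own choice), whose CDF is known in closed form, and compares it with a plain trapezoidal approximation of the integral:

```python
import math

lam = 2.0                                # rate parameter (our choice)
f = lambda x: lam * math.exp(-lam * x)   # exponential density on [0, +inf)
F = lambda x: 1 - math.exp(-lam * x)     # its closed-form CDF

def integrate(h, a, b, steps=10000):
    """Plain trapezoidal rule for the integral of h over [a, b]."""
    dx = (b - a) / steps
    s = 0.5 * (h(a) + h(b)) + sum(h(a + k * dx) for k in range(1, steps))
    return s * dx

b = 1.5
print(integrate(f, 0.0, b), F(b))   # the two values agree closely
```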

Common Probability Density Functions

  • Normal Distribution: The bell-shaped curve that appears frequently in natural phenomena.
  • Uniform Distribution: All values within a given range have equal probability.
  • Exponential Distribution: Describes the time between events in a Poisson process.
  • Beta Distribution: Often used to model probabilities and proportions.
  • Gamma Distribution: Used to model waiting times or survival analysis.

Applications of Probability Density Functions

  • Statistical Analysis: PDFs play a vital role in statistical analysis, allowing us to estimate parameters, test hypotheses, and make inferences about populations.
  • Risk Assessment: Probability density functions are used to model and assess risks in various fields, such as finance, insurance, and engineering.
  • Data Modeling: PDFs help in modeling and understanding data distribution, enabling the development of predictive models and simulations.
  • Signal Processing: PDFs are utilized in signal processing to analyze noise, estimate signal properties, and detect anomalies.

Estimating PDFs from Data

In practice, PDFs are often estimated from empirical data using techniques such as kernel density estimation, histogram-based methods, or parametric modeling. These approaches allow us to approximate the underlying PDF based on observed data points.

Conclusion: Probability Density Functions are essential tools for understanding the behavior of continuous random variables and analyzing real-world phenomena. By providing insights into the likelihood of specific outcomes or ranges, PDFs facilitate statistical analysis, modeling, and decision-making across various disciplines. Understanding PDFs empowers researchers, analysts, and professionals to make more informed interpretations and predictions based on data-driven probabilistic reasoning.

Remember, probability density functions are at the core of probability theory and statistics, shaping our understanding of uncertainty and aiding in making sense of the world around us.

Set theory for beginners


Here you will find an overview of set theory for beginners. In fact, set theory is a foundational branch of mathematics that deals with the study of sets, which are collections of distinct objects called elements. This theory is essential in probability theory.

Here’s a beginner’s overview of sets:

  1. Set Notation:
    • Sets are typically denoted by capital letters (e.g., A, B, C).
    • The elements of a set are enclosed in curly braces (e.g., {1, 2, 3}).
  2. Set Membership:
    • A symbol (∈) is used to indicate that an element belongs to a set.
    • For example, if 2 is an element of set A, we write 2 ∈ A.
  3. Set Equality:
    • Two sets are considered equal if they have precisely the same elements.
    • For example, if A = {1, 2, 3} and B = {2, 3, 1}, then A = B.
  4. Subset and Superset:
    • If all the elements of set A are also elements of set B, then A is a subset of B.
    • This is denoted as A ⊆ B.
    • If B contains all the elements of A, then B is a superset of A.
    • This is denoted as B ⊇ A.
  5. Proper Subset and Proper Superset:
    • If A is a subset of B, but A is not equal to B, then A is a proper subset of B.
    • This is denoted as A ⊂ B.
    • If B is a superset of A, but B is not equal to A, then B is a proper superset of A.
    • This is denoted as B ⊃ A.
  6. Intersection:
    • The intersection of two sets A and B is the set of elements that are common to both A and B.
    • This is denoted as A ∩ B.
  7. Union:
    • The union of two sets A and B is the set of all elements that belong to either A or B (or both).
    • This is denoted as A ∪ B.
  8. Complement:
    • The complement of a set A, denoted as A’, is the set of all elements that are not in A but are in the universal set.
    • The universal set is the set that contains all possible elements under consideration.
  9. Venn Diagrams:
    • Venn diagrams are graphical representations that use circles or overlapping shapes to visualize set relationships and operations.
  10. Set Operations:
    • Other set operations include set difference (A – B), the symmetric difference (A Δ B), and the Cartesian product (A × B).

These are some basic concepts in set theory. As you delve deeper, you will encounter more advanced topics such as power sets, cardinality, set operations with multiple sets, and set theory applications in various branches of mathematics and computer science.

Exercises with solutions on set theory for beginners

Here are a few exercises with solutions to help you practice set theory concepts:

Exercise 1: Let A = {1, 2, 3, 4} and B = {3, 4, 5, 6}. Find:

a) A ∩ B,

b) A ∪ B,

c) A’

d) A – B

Solution (taking the universal set to be U = A ∪ B = {1, 2, 3, 4, 5, 6}): a) A ∩ B = {3, 4}; b) A ∪ B = {1, 2, 3, 4, 5, 6}; c) A’ = U – A = {5, 6}; d) A – B = {1, 2}.

Exercise 2: Let C = {2, 4, 6, 8, 10} and D = {3, 6, 9}. Find:

a) C ⊆ D

b) D ⊆ C

c) C ∩ D

d) C ∪ D

Solution: a) C ⊆ D: False (C is not a subset of D since 2, 4, and 8 are not elements of D) b) D ⊆ C: False (D is not a subset of C since 3 and 9 are not elements of C) c) C ∩ D = {6} (the only element common to both sets C and D) d) C ∪ D = {2, 3, 4, 6, 8, 9, 10} (the combined elements of sets C and D)

Exercise 3: Let E = {a, b, c, d} and F = {c, d, e, f}. Find:

a) E × F

b) F’

c) E ∩ F’

Solution (taking the universal set to be U = E ∪ F = {a, b, c, d, e, f}): a) E × F = {(a, c), (a, d), (a, e), (a, f), (b, c), (b, d), (b, e), (b, f), (c, c), (c, d), (c, e), (c, f), (d, c), (d, d), (d, e), (d, f)}; b) F’ = U – F = {a, b}; c) E ∩ F’ = {a, b} (the elements that are in set E and not in set F).
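These operations map directly onto Python’s built-in set type, which can be used to check the answers above (a small sketch; as in the solutions, the universal sets are taken to be A ∪ B and E ∪ F respectively):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
U = A | B                     # universal set, as assumed in Exercise 1

print(A & B)                  # intersection A ∩ B  -> {3, 4}
print(A | B)                  # union A ∪ B         -> {1, 2, 3, 4, 5, 6}
print(U - A)                  # complement A'       -> {5, 6}
print(A - B)                  # difference A - B    -> {1, 2}
print(A <= B, {3, 4} <= A)    # subset tests        -> False True

E = {'a', 'b', 'c', 'd'}
F = {'c', 'd', 'e', 'f'}
print({(x, y) for x in E for y in F})   # Cartesian product E × F
```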

These exercises should give you some practice with set theory concepts and operations. Make sure to understand the solutions and reasoning behind them to strengthen your understanding of set theory.

Geometric sequence


We provide you with all details about the geometric sequence. In fact, this simple sequence appears frequently in many subjects such as series and probability. Let us discover together this magic sequence.

What is a geometric sequence?

Let $a$ be a real number; we write $a\in\mathbb{R},$ where $\mathbb{R}$ is the set of real numbers. A sequence of the form $$ u_n=a^n,\quad n\in\mathbb{N},$$ is called a geometric sequence.

Let us discuss the convergence of this sequence. This means looking at the behavior of $u_n$ when $n$ is very large. As this sequence depends on $a,$ logically the study of convergence also depends on the number $a$.

Let us start with the simple case $a=1$. Then $u_n=1$ for all $n$: this is the constant sequence equal to $1$, so it converges to $1$.

Assume that $a=-1$. Then $u_n=(-1)^n$. This sequence is not convergent, because $u_{2n}=1$ converges to $1,$ while $u_{2n+1}=-1$ converges to $-1$. We recall the following result: a sequence $(u_n)_n$ converges to a real number $\ell$ if and only if the subsequences $(u_{2n})_n$ and $(u_{2n+1})_n$ converge to the same number $\ell$.

Now assume that $a\in (-1,1)$; this means that the absolute value of $a$ satisfies $|a|<1$. We will prove that in this case we have $$\lim_{n\to\infty} u_n=0.$$ Remark that $|u_n|=|a^n|=|a|^n$. Thus $-|a|^n\le u_n\le |a|^n.$ Now, according to the squeeze theorem, to prove that the limit of the sequence $(u_n)$ is zero, it suffices to prove that $|a|^n$ goes to zero as $n$ goes to $+\infty$.

First, we need to be sure that the sequence $(|a|^n)_n$ is convergent. In fact, for all $n,$ we have $0\le |a|^n<1$. Moreover, the sequence $(|a|^n)_n$ is decreasing. This is because $|a|^{n+1}-|a|^n=(|a|-1) |a|^n\le 0$, as $|a|<1$. Thus the sequence $(|a|^n)_n$ is convergent, and thus, there exists a real number $\ell\in\mathbb{R}$ such that $\lim_{n\to\infty}|a|^n=\ell.$

Let’s prove that $\ell=0$. In fact, remark that $\lim_{n\to \infty}|a|^{n+1}=\ell$ as well; think about this fact: when $n$ is large, $n+1$ is also large. Now, from $|a|^{n+1}-|a|^n=(|a|-1) |a|^n$ and letting $n\to\infty,$ we get $0=(|a|-1)\ell$. But $|a|\neq 1,$ so this implies that $\ell=0$. This ends the proof. Finally, if $|a|>1,$ then $|u_n|=|a|^n\to+\infty,$ so the geometric sequence diverges in this case.

Exercises with detailed answers

Exercise: Calculate the limit of the following sequence \begin{align*} v_n=1+\frac{1}{3}+\frac{1}{3^2}+\cdots+\frac{1}{3^n}.\end{align*}

Proof: By using the remarkable identities we have \begin{align*} v_n= \frac{1-\left(\frac{1}{3}\right)^{n+1}}{1-\frac{1}{3}}=\frac{3}{2}\left(1-\left(\frac{1}{3}\right)^{n+1}\right).\end{align*} According to the previous section, we have $$\lim_{n\to+\infty} \left(\frac{1}{3}\right)^{n+1}=0.$$ Thus $$\lim_{n\to+\infty}v_n=\frac{3}{2}.$$

Exercise: Calculate the limit of the following sequence \begin{align*} w_n=\sin\left(\frac{1}{2^n}\right).\end{align*}

Proof: We recall that $|\sin(x)|\le |x|$ for any $x\in \mathbb{R}$. Now we can estimate \begin{align*} |w_n|=\left|\sin\left(\frac{1}{2^n}\right) \right|\le \left(\frac{1}{2}\right)^n. \end{align*} This means that $$ -\left(\frac{1}{2}\right)^n\le w_n\le \left(\frac{1}{2}\right)^n$$ for all $n$. It suffices to apply the squeeze theorem, since by the previous section the geometric sequence $(\frac{1}{2^n})_n$ goes to zero as $n\to\infty$. Hence $$ \lim_{n\to+\infty}w_n=0.$$

Probability distribution of a random variable


We discuss the properties of the probability distribution of a random variable. It is a probability measure on the real line, a central object of probability theory.

What is a random variable?

Consider a probability space $(\Omega,\mathscr{A},\mathbb{P})$. We also denote by $\mathscr{B}$ the Borel algebra defined by the open sets of $\mathbb{R}$. A set $B\in\mathscr{B}$ is called a Borel set.

We say that $X:\Omega\to \mathbb{R}$ is a random variable if for any $B\in \mathscr{B},$ the set $$\{\omega\in\Omega:X(\omega)\in B\}$$ is an event, i.e. belongs to $\mathscr{A}$. Informally, this definition means that the values of the random variable correspond to the outcomes of the random experiment.

Throughout this post, we use the following notation: $$X^{-1}(B)=(X\in B):=\{\omega\in\Omega:X(\omega)\in B\}.$$ Sums and products of random variables are again random variables.

The probability distribution of a random variable

According to the previous paragraph, if $B\in\mathscr{B}$ is a Borel set, then $X^{-1}(B)\in \mathscr{A}$ is an event, so the probability $\mathbb{P}(X^{-1}(B))$ is well defined. This allows us to introduce the following concept.

Definition: Consider a random variable $X:(\Omega,\mathscr{A})\to(\mathbb{R},\mathscr{B})$. We define a probability measure $\mathbb{P}_X$ on $(\mathbb{R},\mathscr{B})$ associated with $X$ by \begin{align*}\mathbb{P}_X (B)=\mathbb{P}(X^{-1}(B)),\qquad \forall B\in \mathscr{B}.\end{align*} The probability $\mathbb{P}_X$ is called the probability distribution of the random variable $X$.

Notice that in several situations the probability distribution $\mathbb{P}_X$ replaces the initial probability $\mathbb{P}$, in the sense that the initial probability space $(\Omega,\mathscr{A},\mathbb{P})$ remains in the background, hidden; it is replaced with the more appropriate measure space $(\mathbb{R},\mathscr{B},\mathbb{P}_X)$.

When the random variable $X$ is discrete, that is, $X(\Omega)\subset \mathbb{N},$ the probability distribution of $X$ is determined by the numbers $p_n=\mathbb{P}(X=n)$, where $(X=n)=\{ \omega\in \Omega: X(\omega)=n\}$. This notion is used in elementary probability courses.

The image of a random variable by a real measurable function

Let $\psi:(\mathbb{R},\mathscr{B})\to (\mathbb{R},\mathscr{B})$ be a measurable function, in the sense that $\psi^{-1}(B)\in\mathscr{B}$ for any Borel set $B\in \mathscr{B}$. Now if $X:(\Omega,\mathscr{A})\to(\mathbb{R},\mathscr{B})$ is a random variable, then the expectation of the new random variable $\psi(X)$ is given by \begin{align*}\mathbb{E}(\psi(X))=\int^{+\infty}_{-\infty} \psi(x)d\mathbb{P}_X(x).\end{align*} This result is known as the transfer theorem.
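As an illustration of the transfer theorem, consider $X$ uniform on $[0,1]$ and $\psi(x)=x^2$; here $d\mathbb{P}_X(x)=dx$ on $[0,1]$, so the theorem gives $\mathbb{E}(X^2)=\int_0^1 x^2\,dx=\frac{1}{3}$. A small Monte Carlo sketch (our own example) reproduces this value:

```python
import random

# Transfer theorem check: X ~ Uniform(0, 1), psi(x) = x**2, so
# E[psi(X)] = integral of x**2 dx over [0, 1] = 1/3.
N = 1_000_000
estimate = sum(random.random()**2 for _ in range(N)) / N
print(estimate)   # close to 0.3333...
```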

Some known probability distributions

In this section, we list some classical probability distributions:

Bernoulli distribution of parameter $p\in [0,1]$: it is associated with a discrete random variable $X$ with $X(\Omega)=\{0,1\}$ such that $\mathbb{P}(X=0)=1-p$ and $\mathbb{P}(X=1)=p$. We only have two possibilities: “success” when $X=1$ and “failure” when $X=0$. In this case we write $X\in \mathcal{B}(p)$.

Binomial distribution of parameters $n\in\mathbb{N}$ and $p\in [0,1]$: A random variable $X$ has this kind of distribution if it is of the form $X=X_1+\cdots+X_n$, where the $X_i$ are independent random variables with $X_i\in \mathcal{B}(p)$ for any $i=1,\cdots,n$. In this case, we write $X\in \mathcal{B}(n,p)$ and we have \begin{align*} \mathbb{P}(X=k)=\binom{n}{k}p^k(1-p)^{n-k},\qquad k=0,1,\ldots,n.\end{align*} Here the binomial coefficients are defined by $$ \binom{n}{k}=\frac{n!}{k!(n-k)!}.$$
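As a sanity check of this formula, the following sketch (standard library only; the parameters $n=10$, $p=0.3$ and the sample size are our own choices) simulates $X=X_1+\cdots+X_n$ and compares the empirical frequencies with $\binom{n}{k}p^k(1-p)^{n-k}$:

```python
import math
import random

n, p = 10, 0.3      # parameters of the binomial distribution (our choice)
N = 200_000         # number of simulated experiments

# Simulate X = X_1 + ... + X_n with independent Bernoulli(p) trials.
counts = [0] * (n + 1)
for _ in range(N):
    x = sum(random.random() < p for _ in range(n))
    counts[x] += 1

# Compare empirical frequencies with C(n, k) p^k (1-p)^(n-k).
for k in range(n + 1):
    exact = math.comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, counts[k] / N, round(exact, 5))
```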