We provide vector space exercises with detailed answers. Such spaces arise naturally in mathematics as structures on which we can carry out computations. A vector space is equipped with two composition laws: the first, denoted “+”, combines elements of the space and is called the internal composition law; the second involves a field of scalars, $ \mathbb{R} $ or $ \mathbb{C} $, and is called the external composition law. For example, if $ E $ denotes a box of apples, we can perform the operation “apple + apple = $ 2 \cdot{\rm apple} $”. Here we have $ 2 \in\mathbb{R} $ and “$ {\rm apple} \in E $”. The external law $ “\cdot” $ on $ E $ keeps track of how many apples we have.

Vector spaces are used in other parts of linear algebra, such as matrix theory.

**How to prove that sets are vector spaces?**

This is a very good question. In a typical course on vector spaces, one has already studied classical examples such as $\mathbb{R}^n$, the space of complex numbers $\mathbb{C}$, and the space of all maps from a set $E$ to a vector space $F$, denoted by $\mathcal{F}(E,F)$.

Now let $H$ be a set that we want to prove is a vector space. The technique consists in finding a classical vector space $E$ with $H\subset E$; it then suffices to show that $H$ is a subspace of $E$. To do this, we must first ensure that $H$ is not the empty set; in practice, we check that the zero element of $E$ lies in $H$. In addition, for all $x,y\in H$ and $\lambda\in \mathbb{K}$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$, we need to verify that \begin{align*}x+y\in H,\qquad \lambda\cdot x\in H.\end{align*}We mention that these two conditions are equivalent to the single condition $x+\lambda y\in H$.
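The criterion above can be spot-checked numerically. The following sketch (Python with NumPy; the helper name `in_H` is ours, chosen purely for illustration) tests the subspace conditions for the sample set $H=\{(x,y,z)\in\mathbb{R}^3: x-y+z=0\}$:

```python
import numpy as np

def in_H(v):
    """Membership test for H = {(x, y, z) in R^3 : x - y + z = 0}."""
    x, y, z = v
    return bool(np.isclose(x - y + z, 0.0))

# The zero vector of R^3 lies in H, so H is nonempty.
assert in_H(np.zeros(3))

# Spot-check the combined criterion x + lambda*y in H
# on sample vectors of H and a few scalars lambda.
x_vec = np.array([1.0, 1.0, 0.0])   # 1 - 1 + 0 = 0, so x_vec in H
y_vec = np.array([2.0, 3.0, 1.0])   # 2 - 3 + 1 = 0, so y_vec in H
for lam in (-2.0, 0.5, 3.0):
    assert in_H(x_vec + lam * y_vec)
print("subspace criterion verified on samples")
```

Of course, a few numerical samples do not replace the proof; they only illustrate what the criterion asks us to verify.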

Let us discuss another method for proving that $H$ is a vector space. Let $E$ be a vector space and let $x_1,x_2,\cdots,x_r$ be nonzero vectors in $E$. We denote by ${\rm span}( x_1,x_2,\cdots,x_r )$ the set of all linear combinations of $x_1,x_2,\cdots,x_r$. That is, $y\in {\rm span}( x_1,x_2,\cdots,x_r ) $ if and only if there exist scalars $\lambda_1,\lambda_2,\cdots,\lambda_r\in \mathbb{K}$ such that \begin{align*}y=\lambda_1 x_1+\lambda_2 x_2+\cdots+\lambda_r x_r.\end{align*} Then ${\rm span}( x_1,x_2,\cdots,x_r )$ is a subspace of $E$. Now, to prove that $H$ is a subspace, it sometimes suffices to show that $H$ coincides with such a span.
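Span membership itself can be tested mechanically: $y\in{\rm span}(x_1,\dots,x_r)$ exactly when appending $y$ to the family does not increase the rank of the matrix whose columns are the $x_i$. A small sketch (Python with NumPy; the helper name `in_span` is ours):

```python
import numpy as np

def in_span(vectors, y):
    """y is in span(vectors) iff appending y does not raise the rank."""
    rank_before = np.linalg.matrix_rank(np.column_stack(vectors))
    rank_after = np.linalg.matrix_rank(np.column_stack(vectors + [y]))
    return rank_before == rank_after

x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([-1.0, 0.0, 1.0])

print(in_span([x1, x2], np.array([0.0, 1.0, 1.0])))  # True: this is x1 + x2
print(in_span([x1, x2], np.array([0.0, 0.0, 1.0])))  # False: rank jumps to 3
```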

**Examples and properties of such vector spaces**

Consider the following subsets of $\mathbb{R}^3$: \begin{align*}F&=\{(x,y,z)\in \mathbb{R}^3: x-y+z=0\}\cr G&=\{(x,y,z)\in \mathbb{R}^3: x+y-z=0\}.\end{align*} First, let us prove that $F$ and $G$ are subspaces of $\mathbb{R}^3$. In fact, for $u=(x,y,z)\in F$, we have $x=y-z$, so that \begin{align*}u=(y-z,y,z)=y(1,1,0)+z(-1,0,1).\end{align*} This shows that $F$ coincides with the subspace of $\mathbb{R}^3$ generated by the vectors $\{(1,1,0),(-1,0,1)\}$. Hence $F$ is a subspace of $\mathbb{R}^3$. Similarly, one can show that $G$ is a subspace as well.

We now prove that $\mathbb{R}^3=F+G$. In fact, let $u=(x,y,z)\in \mathbb{R}^3$. We have $(1,1,0)\in F$. Now we look for some $\lambda\in \mathbb{R}$ such that $u-\lambda (1,1,0)\in G$. We then have\begin{align*}u-\lambda (1,1,0)\in G &\Longleftrightarrow (x-\lambda,y-\lambda,z)\in G\cr &\Longleftrightarrow (x-\lambda)+(y-\lambda)-z=0\cr & \Longleftrightarrow \lambda= \frac{x+y-z}{2}.\end{align*}For such a $\lambda,$ we have\begin{align*}u=\underset{\in F}{\underbrace{\lambda (1,1,0)}}+\underset{\in G}{\underbrace{(u-\lambda (1,1,0))}}.\end{align*}This means that $\mathbb{R}^3=F+G.$
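The decomposition above is explicit, so we can verify it numerically. The sketch below (Python with NumPy; the helper name `decompose` is ours) computes $\lambda=\frac{x+y-z}{2}$ and checks that the two pieces land in $F$ and $G$ respectively:

```python
import numpy as np

def decompose(u):
    """Split u in R^3 as f_part + g_part with f_part in F, g_part in G."""
    x, y, z = u
    lam = (x + y - z) / 2.0
    f_part = lam * np.array([1.0, 1.0, 0.0])   # multiple of (1,1,0), in F
    g_part = u - f_part                        # chosen so that it lies in G
    return f_part, g_part

u = np.array([4.0, -1.0, 2.0])
f_part, g_part = decompose(u)
assert np.isclose(f_part[0] - f_part[1] + f_part[2], 0.0)  # f_part in F
assert np.isclose(g_part[0] + g_part[1] - g_part[2], 0.0)  # g_part in G
assert np.allclose(f_part + g_part, u)                     # sum recovers u
print("R^3 = F + G verified on a sample vector")
```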

Let us answer the question: are $F$ and $G$ supplementary subspaces, that is, do we have $\mathbb{R}^3=F\oplus G$? We remark that, for example, $(0,1,1)\in F\cap G$. This implies that $F\cap G\neq \{0\},$ and hence $F$ and $G$ are not supplementary subspaces.

**Bases of vector spaces**

In order to have good control of the elements of a vector space, it is advisable to find a way in which every vector can be written uniquely as a linear combination of known vectors of the space. This means that we need to find vectors $e_1,e_2,\cdots,e_n$ such that $E={\rm span}(e_1,e_2,\cdots,e_n)$ and such that if $\lambda_1 e_1+\lambda_2 e_2+\cdots+\lambda_n e_n=0,$ then $\lambda_1=\lambda_2=\cdots=\lambda_n=0$; in this case we say that the family $\{e_1,e_2,\cdots,e_n\}$ is linearly independent. A family that satisfies both properties is called a basis of $E,$ and the cardinality of such a family is called the dimension of $E,$ denoted by $\dim(E)=n$.
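The uniqueness of coordinates in a basis can be illustrated concretely: with the basis vectors as the columns of a matrix $B$, the system $Bc=x$ has exactly one solution when $B$ is invertible. A small sketch (Python with NumPy; the sample basis of $\mathbb{R}^3$ is our choice):

```python
import numpy as np

# Sample family (1,1,0), (1,0,1), (0,1,1) as columns of B.
B = np.column_stack([[1.0, 1.0, 0.0],
                     [1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0]])

# Nonzero determinant: the family is linearly independent and spans R^3,
# hence it is a basis.
assert not np.isclose(np.linalg.det(B), 0.0)

x = np.array([2.0, 3.0, 5.0])
c = np.linalg.solve(B, x)   # the unique coordinate vector of x in this basis
assert np.allclose(B @ c, x)
print("coordinates of x:", c)
```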

Let $F$ be the subset of $\mathbb{R}^3$ defined by\begin{align*}F=\{(x,y,z)\in \mathbb{R}^3: x+2y-z=0\}.\end{align*}Prove that $F$ is a subspace and determine a basis of it; we recall that a basis of a vector space is any linearly independent subset that spans the whole space. In fact, observe that $F$ is nonempty since $0_{\mathbb{R}^3}\in F$. On the other hand, $u=(x,y,z)\in F$ if and only if $z=x+2y$. Then\begin{align*}u&=(x,y,x+2y)\cr &= x(1,0,1)+y(0,1,2)\cr &:= x v_1+y v_2,\end{align*}where $v_1=(1,0,1)$ and $v_2=(0,1,2)$. This shows that $F$ is the subspace of $\mathbb{R}^3$ generated by the vectors $v_1$ and $v_2,$ that is, $F={\rm span}\{v_1,v_2\}. $ Observe that the vectors $v_1$ and $v_2$ are not collinear, hence linearly independent. Therefore $\{v_1,v_2\}$ is a basis of $F$ and $\dim(F)=2$; in particular, $F$ is a hyperplane of $\mathbb{R}^3$.
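As a numerical cross-check of this exercise (a sketch in Python with NumPy), we can verify that $v_1$ and $v_2$ satisfy the defining equation of $F$ and that they are linearly independent, so $\dim(F)=2$:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 2.0])

# Both vectors satisfy x + 2y - z = 0, so they lie in F.
for v in (v1, v2):
    assert np.isclose(v[0] + 2 * v[1] - v[2], 0.0)

# Linear independence: the 3x2 matrix [v1 v2] has full column rank 2.
rank = np.linalg.matrix_rank(np.column_stack([v1, v2]))
assert rank == 2
print("dim F =", rank)   # prints: dim F = 2
```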

**Calculus in the space of sequences**

We denote by $(\mathbb{R}^{\mathbb{N}},+,\cdot)$ the vector space of all real sequences. Let $u=(u_n)_n$, $v=(v_n)_n$ and $w=(w_n)_n$ be the real sequences defined by\begin{align*}\forall n\in\mathbb{N},\quad u_n=2^n,\;v_n=3^n,\;w_n=5^n.\end{align*}Prove that the vectors $u,v$ and $w$ are linearly independent. Before solving this exercise, we recall some facts about geometric sequences. Let $a\in \mathbb{R}$. The natural powers of $a$ define a sequence $(a^n)_n$, called a geometric sequence of ratio $a$. Now, if $|a|<1,$ then $a^n$ tends to $0$ as $n\to+\infty$.

Let us now prove that the vectors $u,v$, and $w$ are linearly independent. To this end, let $\alpha,\beta,\gamma\in \mathbb{R}$ be such that \begin{align*}\alpha u+\beta v+\gamma w=(0,0,\cdots,0,\cdots).\end{align*}Then for any $n\in\mathbb{N}$ we have\begin{align*}\alpha 2^n+\beta 3^n+\gamma 5^n=0.\end{align*}Dividing by $5^n$, we get\begin{align*}\alpha \left(\frac{2}{5}\right)^n+\beta\left(\frac{3}{5}\right)^n +\gamma=0.\end{align*}Taking the limit as $n\to +\infty,$ we obtain\begin{align*}\lim_{n\to +\infty}\left(\alpha \left(\frac{2}{5}\right)^n+\beta\left(\frac{3}{5}\right)^n\right) +\gamma=0.\end{align*} But as $0< \frac{2}{5} < 1$ and $0< \frac{3}{5} < 1$, we have\begin{align*}\lim_{n\to +\infty}\left(\alpha \left(\frac{2}{5}\right)^n+\beta\left(\frac{3}{5}\right)^n\right)=0.\end{align*} This implies that $\gamma=0$, so that\begin{align*}\alpha 2^n+\beta 3^n=0. \end{align*}Similarly, dividing by $3^n$ we get\begin{align*}\alpha \left(\frac{2}{3}\right)^n+\beta=0.\end{align*}Taking the limit as $n\to +\infty$ again, we get $\beta=0$. This in turn implies $\alpha \left(\frac{2}{3}\right)^n=0$ for all $n$, so that $\alpha=0$. This ends the proof.
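Alongside the limit argument, there is a finite check: restricting the identity $\alpha 2^n+\beta 3^n+\gamma 5^n=0$ to $n=0,1,2$ yields a $3\times 3$ Vandermonde system whose matrix is invertible because the ratios $2,3,5$ are distinct, which already forces $\alpha=\beta=\gamma=0$. A numerical sketch of this check (Python with NumPy):

```python
import numpy as np

# Rows are the equations at n = 0, 1, 2:
#   alpha*2^n + beta*3^n + gamma*5^n = 0
V = np.array([[2.0**n, 3.0**n, 5.0**n] for n in range(3)])

# Vandermonde matrix with distinct nodes 2, 3, 5 => nonzero determinant.
assert not np.isclose(np.linalg.det(V), 0.0)

# Hence the homogeneous system has only the trivial solution.
sol = np.linalg.solve(V, np.zeros(3))
assert np.allclose(sol, np.zeros(3))
print("only solution: alpha = beta = gamma = 0")
```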