Examples of vector spaces


In the previous section we defined vector spaces, or linear spaces, and showed an example of the classical vector space ℝ². In this article we will look at other examples.

Vector space ℝⁿ

We are already familiar with the vector space ℝ², which consists of vectors of the form [a, b], where a, b are real numbers. We will show that in fact the set of all ordered n-tuples ℝⁿ, for n > 0, forms a vector space.

For n = 2, we get the set of ordered pairs [a, b]. If we increase n to n = 3, we get the set of triples, which takes us out of the plane and into space. ℝ² thus represents the classical plane, ℝ³ the classical three-dimensional space.

The definitions of the addition and scalar multiplication operations are the same as in the case of n = 2, we just always add/multiply all components of the vectors:

$$\begin{eqnarray} \left[a_1, a_2, …, a_n\right] + \left[b_1, b_2, …, b_n\right] &=& \left[a_1+b_1, a_2+b_2, …, a_n+b_n\right]\\ k\cdot\left[a_1, a_2, …, a_n\right] &=& \left[k\cdot a_1, k\cdot a_2, …, k\cdot a_n\right] \end{eqnarray}$$
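To make the componentwise definitions concrete, here is a minimal sketch in plain Python, assuming vectors are represented as lists of numbers; the helper names add and scale are purely illustrative:

```python
# Minimal sketch: a vector in R^n represented as a plain Python list of numbers.
# The names `add` and `scale` are illustrative, not from any particular library.

def add(u, v):
    """Componentwise sum [a1+b1, ..., an+bn] of two vectors of the same length."""
    assert len(u) == len(v), "vectors must have the same number of components"
    return [a + b for a, b in zip(u, v)]

def scale(k, u):
    """Scalar multiple [k*a1, ..., k*an] of a vector."""
    return [k * a for a in u]

print(add([1, 2, 3], [4, 5, 6]))   # [5, 7, 9]
print(scale(2, [1, 2, 3]))         # [2, 4, 6]
```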

Now we should prove that the vector space ℝ³, or more generally ℝⁿ, satisfies all seven points from the definition of vector spaces. However, the proofs would be mostly the same as before; we would just work with vectors of three components instead of two. For example, the first point, commutativity of vector addition, would be proved for n = 3 as follows:

We have to prove that x + y = y + x for all x, y ∈ ℝ³. Expanding the sum of two vectors:

$$\left[a, b, c\right] + \left[d, e, f\right] = \left[a+d, b+e, c+f\right]$$

If we swap the vectors on the left side, we get:

$$\left[d, e, f\right] + \left[a, b ,c\right] = \left[d+a, e+b, f+c\right]$$

But the commutativity of addition of real numbers implies that the resulting vectors are the same, i.e. [a + d, b + e, c + f] = [d + a, e + b, f + c].

If you compare this with the proof for n = 2 in the previous article, you will see that the proof is practically the same. So the other six points can be proved in a similar way.

Interestingly, ℝ¹ = ℝ, the set of real numbers itself, is also a vector space. Vector addition and scalar multiplication then turn into ordinary addition and multiplication of real numbers, and these operations certainly satisfy all seven points.

We will definitely come across the space ℝⁿ again; it is a fairly common vector space.

Vector space of matrices ℝᵐˣⁿ

The set of all matrices with m rows and n columns containing only real numbers, together with the operations of matrix addition and scalar multiplication of a matrix, forms a vector space. For short, we will denote this space by ℝᵐˣⁿ.

For m = 2, n = 3 we would get matrices that have two rows and three columns. An example of a particular matrix is

$$\begin{pmatrix} 4&5&1\\ 9&1&3 \end{pmatrix}$$

We define the addition operation as follows:

$$ \begin{pmatrix} a_{11}&a_{12}&…&a_{1n}\\ a_{21}&a_{22}&…&a_{2n}\\ \vdots&\vdots&\vdots&\vdots\\ a_{m1}&a_{m2}&…&a_{mn}\\ \end{pmatrix} + \begin{pmatrix} b_{11}&b_{12}&…&b_{1n}\\ b_{21}&b_{22}&…&b_{2n}\\ \vdots&\vdots&\vdots&\vdots\\ b_{m1}&b_{m2}&…&b_{mn}\\ \end{pmatrix} =$$

$$ = \begin{pmatrix} a_{11}+b_{11}&a_{12}+b_{12}&…&a_{1n}+b_{1n}\\ a_{21}+b_{21}&a_{22}+b_{22}&…&a_{2n}+b_{2n}\\ \vdots&\vdots&\vdots&\vdots\\ a_{m1}+b_{m1}&a_{m2}+b_{m2}&…&a_{mn}+b_{mn}\\ \end{pmatrix} $$

Classical matrix addition. We will define scalar multiplication similarly:

$$ k \cdot \begin{pmatrix} a_{11}&a_{12}&…&a_{1n}\\ a_{21}&a_{22}&…&a_{2n}\\ \vdots&\vdots&\vdots&\vdots\\ a_{m1}&a_{m2}&…&a_{mn}\\ \end{pmatrix} = \begin{pmatrix} k\cdot a_{11}&k\cdot a_{12}&…&k\cdot a_{1n}\\ k\cdot a_{21}&k\cdot a_{22}&…&k\cdot a_{2n}\\ \vdots&\vdots&\vdots&\vdots\\ k\cdot a_{m1}&k\cdot a_{m2}&…&k\cdot a_{mn}\\ \end{pmatrix} $$

Thus defined, the operations on the set of all matrices ℝᵐˣⁿ form a vector space. Importantly, we must always take the set of all matrices of the same type. If a matrix of a different type slipped in, for example if we tried to add a matrix of type 3 × 1 to the space ℝ²ˣ², we would have a problem, because a matrix of type 3 × 1 cannot even be added to a matrix of type 2 × 2.
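As a quick illustration, the following sketch uses numpy (just a convenient stand-in; any elementwise implementation would do) to add two 2 × 3 matrices, multiply one by a scalar, and show that matrices of different types cannot be added:

```python
import numpy as np

# Two matrices of type 2x3; addition and scalar multiplication are elementwise.
A = np.array([[4, 5, 1],
              [9, 1, 3]])
B = np.array([[1, 0, 2],
              [3, 4, 5]])

print(A + B)      # elementwise sum, again a 2x3 matrix
print(3 * A)      # every element multiplied by the scalar 3

# A matrix of a different type cannot be added:
C = np.zeros((3, 1))
try:
    A + C
except ValueError as e:
    print("cannot add a 2x3 matrix to a 3x1 matrix:", e)
```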

Now we should again check that the structure defined in this way satisfies all seven conditions we impose on a vector space. Since writing all of this out for matrices would be rather tedious, we will only go through it briefly:

Adding two matrices returns a new matrix of the same type, and multiplying by a scalar also returns a matrix of the same type, so the set is closed under both operations.

  1. x + y = y + x: matrix addition is obviously commutative, because the element at coordinates ij of the resulting matrix has the form $x_{ij}+y_{ij} = y_{ij}+x_{ij}$.
  2. (x + y) + z = x + (y + z): matrix addition is also associative, because $(x_{ij}+y_{ij})+z_{ij}=x_{ij}+(y_{ij}+z_{ij})$.
  3. a · (b · x) = (a · b) · x: again, for each element we have $a \cdot (b\cdot x_{ij}) = (a\cdot b)\cdot x_{ij}$.
  4. a · (x + y) = a · x + a · y: again, for the elements at coordinates ij we get the equality $a \cdot (x_{ij} + y_{ij}) = a\cdot x_{ij} + a\cdot y_{ij}$.
  5. (a + b) · x = a · x + b · x: again we just express the element at coordinates ij: $(a+b)\cdot x_{ij} = a\cdot x_{ij}+b\cdot x_{ij}$.
  6. 1 · x = x: because $1\cdot x_{ij} = x_{ij}$, this point also holds.
  7. Existence of a zero element: this is the null matrix, because for every matrix x of our vector space it holds that $x_{ij} + 0 = x_{ij}$, i.e. x + 0 = x.

Vector space of polynomials

A polynomial is usually denoted by p(x) and is an expression of the form:

$$ p(x)=a_0 + a_1x + a_2x^2 + … + a_n x^n. $$

We assume that aₙ ≠ 0. The degree of such a polynomial is then just the number n. The real numbers a₀, …, aₙ are called the coefficients of the polynomial. An example of a polynomial is the expression 4 + 3x − 7x², where a₀ = 4, a₁ = 3, a₂ = −7, and the degree of the polynomial is 2. Some coefficients may be zero, so the expression x² + πx⁷ is a polynomial of degree 7 with a₂ = 1, a₇ = π and all other coefficients zero.

We can add polynomials simply by adding their corresponding coefficients. Example:

$$ (2+3x-x^2) + (4-5x+101x^2+5x^3) = 6-2x+100x^2+5x^3 $$

In general, we could define the addition of polynomials as follows:

$$ (a_0 + a_1x + a_2x^2 + … + a_n x^n) + (b_0 + b_1x + b_2x^2 + … + b_m x^m) = $$

$$ = (a_0 + b_0) + (a_1 + b_1)x + (a_2+b_2)x^2 + … + (a_q+b_q) x^q,$$

where q is the maximum of m and n (coefficients that are not present are taken to be zero). The k-multiple of a polynomial p(x) is then obtained by multiplying all of its coefficients by k:

$$k\cdot (a_0 + a_1x + a_2x^2 + … + a_n x^n) =$$ $$= (k\cdot a_0) + (k\cdot a_1)x + (k\cdot a_2)x^2 + … + (k\cdot a_n) x^n$$

For example:

$$ 7\cdot (3+6x-5x^2) = 21+42x-35x^2 $$
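The same operations are easy to sketch in Python if we store a polynomial as its list of coefficients [a₀, a₁, …, aₙ]; the helper names poly_add and poly_scale are purely illustrative:

```python
# Sketch: a polynomial a0 + a1*x + ... + an*x^n stored as its coefficient
# list [a0, a1, ..., an]; the helper names are illustrative only.

def poly_add(p, q):
    """Add polynomials by adding coefficients; missing coefficients count as 0."""
    length = max(len(p), len(q))
    p = p + [0] * (length - len(p))
    q = q + [0] * (length - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(k, p):
    """Multiply every coefficient by the scalar k."""
    return [k * a for a in p]

# (2 + 3x - x^2) + (4 - 5x + 101x^2 + 5x^3) = 6 - 2x + 100x^2 + 5x^3
print(poly_add([2, 3, -1], [4, -5, 101, 5]))   # [6, -2, 100, 5]
# 7 * (3 + 6x - 5x^2) = 21 + 42x - 35x^2
print(poly_scale(7, [3, 6, -5]))               # [21, 42, -35]
```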

The set of all polynomials, let us denote it by P(X), with the addition and multiplication operations thus defined, forms a vector space. Verification:

The sum of two polynomials is again a polynomial, and multiplication by a scalar also produces a polynomial, so the set is closed under the operations we have defined.

You can check the seven properties that a vector space must satisfy as homework. The proof is practically the same as for the matrices above, only instead of xᵢⱼ you write xᵢ. The null polynomial is then the polynomial p(x) = 0.

While for matrices we required that the vector space contain only matrices of the same type, we do not require the same degree for polynomials. In fact, we cannot: if we took the set of all polynomials of degree exactly two, this set would not form a vector space. Why? We can illustrate it with a counterexample. Let us add these two polynomials:

$$ \left(1+2x+3x^2\right) + \left(1+2x-3x^2\right) = 2+4x $$

Because the first polynomial contains 3x² and the second −3x², after adding them we obtain 0x², which reduces the degree of the result. The polynomial 2 + 4x is a polynomial of degree 1, not 2, so the set of polynomials of degree two is not closed under addition.
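In the coefficient-list representation used in the sketch above, the same counterexample looks like this (the degree helper is illustrative only):

```python
# Counterexample: two degree-2 polynomials, stored as coefficient lists
# [a0, a1, a2], whose sum has degree 1.
p = [1, 2, 3]    # 1 + 2x + 3x^2
q = [1, 2, -3]   # 1 + 2x - 3x^2

s = [a + b for a, b in zip(p, q)]
print(s)  # [2, 4, 0] -> the x^2 coefficient vanished

def degree(p):
    """Index of the last nonzero coefficient, i.e. the degree of the polynomial."""
    nonzero = [i for i, a in enumerate(p) if a != 0]
    return nonzero[-1] if nonzero else 0

print(degree(p), degree(q), degree(s))  # 2 2 1
```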
