Homogeneous systems of linear equations

Chapters: Systems of equations, Cramer's Rule, Homogeneous systems

A system of equations is said to be homogeneous if the right-hand side of each equation is zero.

Definition of a homogeneous system

The general form of a homogeneous system of equations looks like this:

$$ \begin{array}{ccccccccc} a_{11}x_1&+&a_{12}x_2&+&\ldots&+&a_{1n}x_n&=&0\\ a_{21}x_1&+&a_{22}x_2&+&\ldots&+&a_{2n}x_n&=&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&=&\vdots\\ a_{m1}x_1&+&a_{m2}x_2&+&\ldots&+&a_{mn}x_n&=&0\\ \end{array} $$

This system contains m equations with n unknowns, and the right-hand sides of all the equations are zero. If they were not all equal to zero, it would not be a homogeneous system of equations. We can rewrite the previous system of equations as a matrix by copying all the coefficients, i.e. all the terms aij, into the matrix at the same positions they occupy in the system. We do not copy the right-hand column of zeros into the matrix, because it carries no information. So, like this:

$$A=\left( \begin{array}{cccc} a_{11}&a_{12}&\ldots&a_{1n}\\ a_{21}&a_{22}&\ldots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}&a_{m2}&\ldots&a_{mn} \end{array} \right) $$

This matrix represents the system of equations described above with m equations and n unknowns. Each row of this matrix corresponds to one equation. The whole system can then be written as

$$Ax = 0$$

Basic properties of the system

From the article on systems of linear equations and from Frobenius' theorem, we can get some interesting statements about homogeneous systems of equations.

  • Every homogeneous system has a solution.
  • The set of all solutions of a homogeneous system always contains the zero solution. If we substitute zero for all the variables, we get m equations of the form 0 = 0.
  • From Frobenius' theorem, we know that to write the general solution of a system of equations we need n − k parameters, where n is the number of variables and k is the rank of the augmented matrix of the system; for a homogeneous system this is simply k = rank(A). If n = k, then we need no parameters and the system has a single solution, the zero solution. The equality n = k holds precisely when rank(A) = n; for a square matrix this means A is regular, i.e. it contains no linearly dependent rows.
  • It also follows from the previous point that the system has a non-zero solution precisely when k < n; for a square matrix, this means A contains a linearly dependent row.
  • If the system has more unknowns than equations, it has a non-zero solution. With more unknowns than equations, k = rank(A) can be at most the number of equations m, so k ≤ m < n. Then n − k > 0, and hence there are further solutions, which we describe using n − k parameters.
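These facts are easy to check numerically. The sketch below uses a small hypothetical system (the matrix `A` and the solution vector are illustrative choices, not taken from the article) with more unknowns than equations, and verifies that both the zero vector and a non-zero vector solve it:

```python
def matvec(A, x):
    # multiply matrix A (a list of rows) by vector x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# hypothetical system with more unknowns (n = 3) than equations (m = 2)
A = [[1, 2, -1],
     [2, 0,  4]]

# the zero vector always solves a homogeneous system
print(matvec(A, [0, 0, 0]))   # [0, 0]

# since k <= m < n, a non-zero solution must also exist;
# for this A, x = (-4, 3, 2) works
print(matvec(A, [-4, 3, 2]))  # [0, 0]
```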

Properties of the solution set

  • The sum of any two solutions of the system Ax = 0 is also a solution of the system. If x1 and x2 are solutions of the system, so that Ax1 = 0 and Ax2 = 0, then A(x1 + x2) = 0 as well.

The proof is simple. Suppose Ax1 = 0 and Ax2 = 0 hold; we prove that A(x1 + x2) = 0. Matrix addition and multiplication satisfy the distributive law, so we can write

$$A(x_1+x_2) = Ax_1+Ax_2.$$

But by assumption Ax1 = 0 and Ax2 = 0, so the expression Ax1 + Ax2 equals 0 + 0 = 0. Hence A(x1 + x2) = 0, which is what we wanted to prove.

For example: if x1 = (0, 5, 6) and x2 = (7, 3, 5) are two solutions of the system, then x3 = x1 + x2 = (0 + 7, 5 + 3, 6 + 5) = (7, 8, 11) is also a solution.
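The example vectors above do not come with a concrete system, but we can check the closure property against a hypothetical one-equation system with matrix A = (1, 6, −5), chosen here because both vectors happen to solve it:

```python
def matvec(A, x):
    # multiply matrix A (a list of rows) by vector x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 6, -5]]   # hypothetical system solved by both vectors
x1 = [0, 5, 6]
x2 = [7, 3, 5]

print(matvec(A, x1))   # [0]
print(matvec(A, x2))   # [0]

# their sum is also a solution
x3 = [a + b for a, b in zip(x1, x2)]   # (7, 8, 11)
print(matvec(A, x3))   # [0]
```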

  • For every constant c ∈ ℝ, any scalar multiple of a solution is again a solution. If x1 is a solution of the system Ax = 0, then c · x1 is also a solution of the system.

The proof is again simple. Suppose that x1 is a solution of the system Ax = 0 and c ∈ ℝ. We prove that the following equality holds:

$$A(cx_1)=0$$

Since c is a numerical constant, we can put it in front of the whole expression:

$$A(cx_1) = c\cdot Ax_1$$

By assumption, Ax1 = 0 holds, so from c · Ax1 we get c · 0 = 0.

  • A corollary of the previous two points is that any linear combination of solutions of the system is also a solution of the system.
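As a quick sketch of this corollary, take a hypothetical one-equation system Ax = 0 with A = (1, 6, −5) (an illustrative choice, not from the article) and two of its solutions; every linear combination of them remains a solution:

```python
def matvec(A, x):
    # multiply matrix A (a list of rows) by vector x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 6, -5]]                 # hypothetical example system
x1, x2 = [0, 5, 6], [7, 3, 5]    # two solutions of A x = 0

# arbitrary linear combinations c*x1 + d*x2 all solve the system
for c, d in [(2, -3), (5, 4), (-1, 1)]:
    comb = [c * a + d * b for a, b in zip(x1, x2)]
    print(matvec(A, comb))       # [0] every time
```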

Now the question arises: if some solutions are just linear combinations of other solutions, is there some set of solutions from which we can generate all the others?

Let us now try to write the solutions of the system in a matrix. That is, if $x_1 = (x_{11}, x_{12}, …, x_{1n})$ is a solution and $x_2 = (x_{21}, x_{22}, …, x_{2n})$ is a solution and so on for xi, we will create a matrix F, which will contain these solutions as its columns:

$$ F=\begin{pmatrix} x_{11}&x_{21}&…\\ x_{12}&x_{22}&…\\ \vdots&\vdots&\ddots\\ x_{1n}&x_{2n}&… \end{pmatrix} $$

Since we can create infinitely many different linear combinations, and hence solutions, the matrix F would have infinitely many columns. We will now ask whether F can be reduced to a finite number of columns from which we can still generate all the other solutions.

How many columns will be left? The matrix F has n rows, because the original system had n variables and each column represents one solution. So the maximum possible rank of the matrix F is n. If F has more than n columns, at least one column is certainly linearly dependent on the others. Hence we can capture the entire solution set in a matrix with at most n columns, i.e. of size at most n × n, and compute the remaining solutions as linear combinations. Any additional column would be redundant. The question is whether we can reduce the matrix F even further.

Fundamental solution system

From Frobenius' theorem, we know that we can express the general solution of the system Ax = 0 using n − k parameters, where n is the number of variables and k = rank(A). We denote the parameters by t1, t2, …, tn − k. With these parameters, how could we generate two linearly independent solutions?

In the first case, we set the parameter t1 to one and the others to zero: t1 = 1 and t2 = t3 = … = tn − k = 0. In the second case we do the same, but move the one to the second parameter: t2 = 1 and t1 = t3 = t4 = … = tn − k = 0. Substituting these parameters into the general solution gives two different solutions that are linearly independent.

Example: let

$$(t_1, t_2, t_3, 2t_1, 4t_3)$$

be the general solution of some homogeneous system of equations with three parameters. For the choice of parameters t1 = 1, t2 = t3 = 0 we get the partial solution x1 = (1, 0, 0, 2, 0), for t1 = t3 = 0, t2 = 1 we get x2 = (0, 1, 0, 0, 0) and for t1 = t2 = 0, t3 = 1 we get x3 = (0, 0, 1, 0, 4). We write the partial solutions into the matrix F:

$$F=\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ 2&0&0\\ 0&0&4 \end{pmatrix} $$

From the first three rows we can clearly see that the columns are linearly independent. Thus we have constructed three partial solutions that are mutually linearly independent. At the same time, any other solution is linearly dependent on them. For example, if we choose the parameters t1 = 1, t2 = 2, t3 = 0 and compute the solution, we get x4 = (1, 2, 0, 2, 0).

But we can express this solution as x1 + 2x2 = (1, 0, 0, 2, 0) + 2(0, 1, 0, 0, 0) = (1, 2, 0, 2, 0). Similarly for any other solution. We have thus obtained a matrix with three columns that defines all solutions of the system of equations.
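The bookkeeping in this example can be reproduced in a few lines. A minimal sketch, assuming the general solution (t1, t2, t3, 2t1, 4t3) from above:

```python
def solution(t1, t2, t3):
    # general solution of the example system: (t1, t2, t3, 2*t1, 4*t3)
    return [t1, t2, t3, 2 * t1, 4 * t3]

# partial solutions: set one parameter to 1, the rest to 0
x1 = solution(1, 0, 0)   # [1, 0, 0, 2, 0]
x2 = solution(0, 1, 0)   # [0, 1, 0, 0, 0]
x3 = solution(0, 0, 1)   # [0, 0, 1, 0, 4]

# any other choice of parameters is a linear combination of x1, x2, x3
x4 = solution(1, 2, 0)
combo = [a + 2 * b for a, b in zip(x1, x2)]
print(x4 == combo)       # True
```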

We can generalize the procedure. If we have a system Ax = 0 whose general solution uses n − k parameters, then we can express every solution of the system using n − k linearly independent partial solutions. Such a set of solutions is called a fundamental system of solutions.

For example, we obtain these solutions by successively setting each of the parameters t1, t2, …, tn − k to one while setting the remaining parameters to zero. This yields n − k linearly independent solutions of the system.

This is not the only way to produce n − k linearly independent partial solutions of the system. We can substitute any non-zero number for the parameter ti instead of one and still get linearly independent solutions.

Once more the definition of the fundamental system of solutions of the system Ax = 0: it is the set of partial solutions {x1, x2, …, xn − k} such that

  • x1, x2, …, xn − k are linearly independent,
  • any solution of the system Ax = 0 can be expressed by a linear combination of the partial solutions x1, x2, …, xn − k.

Example

Solve a homogeneous system of equations and determine the fundamental system of solutions:

$$ \begin{array}{cccccccccccc} 2x_1&-&5x_2&+&7x_3&+&x_4&=&0\\ 4x_1&+&3x_2&+&x_3&&&=&0\\ 2x_1&-&18x_2&+&20x_3&+&3x_4&=&0\\ 8x_1&-&20x_2&+&28x_3&+&4x_4&=&0\\ \end{array} $$

The matrix of the system A will have the form

$$ A=\begin{pmatrix} 2&-5&7&1\\ 4&3&1&0\\ 2&-18&20&3\\ 8&-20&28&4 \end{pmatrix} $$

We can calculate the determinant of this matrix; it is equal to zero. The system matrix is therefore singular and the system has a non-zero solution. We find the general solution of the system using Gaussian elimination. The last, fourth, row is equal to the sum of the first three rows, so we zero out the entire fourth row. We then multiply the first row by three and the second row by minus one; the third row is now the sum of the first two, so we zero it out as well. Finally we restore the first two rows to their original form:

$$\begin{eqnarray} \begin{pmatrix} 2&-5&7&1\\ 4&3&1&0\\ 2&-18&20&3\\ 8&-20&28&4 \end{pmatrix} &\sim& \begin{pmatrix} 2&-5&7&1\\ 4&3&1&0\\ 2&-18&20&3\\ 0&0&0&0 \end{pmatrix}\\ &\sim& \begin{pmatrix} 6&-15&21&3\\ -4&-3&-1&0\\ 2&-18&20&3\\ 0&0&0&0 \end{pmatrix}\\ &\sim& \begin{pmatrix} 6&-15&21&3\\ -4&-3&-1&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix}\\ &\sim& \begin{pmatrix} 2&-5&7&1\\ 4&3&1&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix} \end{eqnarray}$$

There is no point in further row operations. We see that the matrix has rank two, rank(A) = 2. To write the general solution of the system we will need n − k = 4 − 2 = 2 parameters; we will call them s and t. To write the fundamental system we will need two partial solutions.
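The rank can also be computed mechanically. A minimal exact-arithmetic sketch of Gaussian elimination (the `rank` helper below is an illustrative implementation, not part of the article):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination with exact rational arithmetic
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0  # index of the next pivot row
    for c in range(len(m[0])):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # eliminate column c from all rows below the pivot
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[2,  -5,  7, 1],
     [4,   3,  1, 0],
     [2, -18, 20, 3],
     [8, -20, 28, 4]]
print(rank(A))  # 2
```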

We are now left with this system of equations:

$$ \begin{array}{cccccccccccc} 2x_1&-&5x_2&+&7x_3&+&x_4&=&0\\ 4x_1&+&3x_2&+&x_3&&&=&0\\ \end{array} $$

From the second equation, we isolate x3.

$$x_3=-4x_1-3x_2$$

We substitute parameters for x1 and x2, so x1 = s and x2 = t. Then we can write that x3 = −4s − 3t. It remains to compute x4, which we find from the first equation:

$$\begin{eqnarray} x_4&=&-2s+5t-7(-4s-3t)\\ x_4&=&-2s+5t+28s+21t\\ x_4&=&26s+26t\\ x_4&=&26(s+t) \end{eqnarray}$$
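We can sanity-check the expressions for x3 and x4 by substituting the general solution back into all four original equations for a few parameter values. A quick numeric sketch:

```python
def residuals(s, t):
    # general solution (s, t, -4s - 3t, 26(s + t)) plugged into
    # the left-hand sides of the four original equations
    x1, x2, x3, x4 = s, t, -4 * s - 3 * t, 26 * (s + t)
    return [
        2 * x1 -  5 * x2 +  7 * x3 +     x4,
        4 * x1 +  3 * x2 +      x3,
        2 * x1 - 18 * x2 + 20 * x3 + 3 * x4,
        8 * x1 - 20 * x2 + 28 * x3 + 4 * x4,
    ]

for s, t in [(1, 0), (0, 1), (3, -2), (5, 7)]:
    print(residuals(s, t))  # [0, 0, 0, 0] each time
```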

The general solution is thus (s, t, −4s − 3t, 26(s + t)). To obtain the fundamental system of solutions, we need to find two partial solutions (i.e., two particular solutions) that are linearly independent. We find these by setting first s = 1, t = 0 and then s = 0, t = 1. For the first pair of parameters we obtain the partial solution

$$X_1=(x_1, x_2, x_3, x_4) = (1, 0, -4, 26)$$

and for the second pair

$$X_2=(x_1, x_2, x_3, x_4) = (0, 1, -3, 26).$$

These solutions are linearly independent. Since we used two parameters in the general solution, we need exactly two partial solutions to form the fundamental system. The fundamental system of solutions thus looks like this:

$$\left\{(1, 0, -4, 26), (0, 1, -3, 26)\right\}.$$

We can also verify that these two solutions are valid by substituting them into the original equations. For X1 we get the following equations:

$$ \begin{array}{cccccccccccc} 2\cdot1&-&5\cdot0&+&7\cdot(-4)&+&26&=&0\\ 4\cdot1&+&3\cdot0&-&4&&&=&0\\ 2\cdot1&-&18\cdot0&+&20\cdot(-4)&+&3\cdot26&=&0\\ 8\cdot1&-&20\cdot0&+&28\cdot(-4)&+&4\cdot26&=&0\\ \end{array} $$

After adjustments, we have:

$$\begin{eqnarray} 2-28+26&=&0\\ 4-4&=&0\\ 2-80+78&=&0\\ 8-112+104&=&0 \end{eqnarray}$$

We see that each equation comes out 0 = 0. It seems that we have calculated correctly.
