Trivial matrix solution. Homogeneous systems of linear equations. Solving elementary systems of linear algebraic equations

2.4.1. Definition. Let us be given an inhomogeneous system of linear equations

Consider a homogeneous system

whose matrix of coefficients coincides with the matrix of coefficients of system (2.4.1). Then system (2.4.2) is called the reduced homogeneous system of system (2.4.1).

2.4.2. Theorem. The general solution of an inhomogeneous system is equal to the sum of some particular solution of the inhomogeneous system and the general solution of the reduced homogeneous system.

Thus, to find the general solution of the inhomogeneous system (2.4.1) it suffices to:

1) Investigate it for consistency. If it is consistent:

2) Find the general solution of the reduced homogeneous system.

3) Find any particular solution of the original (inhomogeneous) system.

4) Add the particular solution found to the general solution of the reduced homogeneous system; this gives the general solution of the original system.

2.4.3. Exercise. Investigate the system for consistency and, if it is consistent, find its general solution as the sum of a particular solution and the general solution of the reduced homogeneous system.

Solution. a) To solve the problem, we apply the above scheme:

1) We examine the system for consistency (by the method of bordering minors). The rank of the coefficient matrix is 3 (see the solution of Exercise 2.2.5 a); the non-zero minor of maximum order is composed of elements of the 1st, 2nd and 4th rows and the 1st, 3rd and 4th columns. To find the rank of the extended matrix, we border this minor with the 3rd row and the 6th column of the extended matrix: the bordered minor equals 0. Hence the rank of the extended matrix also equals rg A = 3, and the system is consistent. In particular, it is equivalent to the system

2) Let us find the general solution X0 of the reduced homogeneous system

X0 = {(-2a - b; a; b; b; b) | a, b ∈ R}

(see solution to Exercise 2.2.5, a)).

3) Let us find any particular solution xh of the original system. To do this, in system (2.4.3), which is equivalent to the original one, we set the free unknowns x2 and x5 equal to, for example, zero (this is the most convenient choice):

and solve the resulting system: x1 = - , x3 = - , x4 = -5. Thus, (- ; 0; - ; -5; 0) is a particular solution of the system.

4) Let us find the general solution Xn of the original system:

Xn = {xh} + X0 = {(- ; 0; - ; -5; 0)} + {(-2a - b; a; b; b; b)} =

= {(- - 2a - b; a; - + b; -5 + b; b)}.

Comment. Compare the answer obtained with the second answer in Example 1.2.1 c). To obtain the answer in the first form in 1.2.1 c), the basic unknowns are taken to be x1, x3, x5 (the corresponding minor is also non-zero), and x2 and x4 are taken as the free ones.
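As a rough illustration of the scheme given after Theorem 2.4.2, here is a short Python (sympy) sketch. The matrices of Exercise 2.4.3 are not reproduced in the text above, so a small stand-in system is used purely for illustration; the names A, b and the parameter a are assumptions of this sketch, not part of the original exercise.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [2, 4, 3]])          # hypothetical coefficient matrix
b = sp.Matrix([3, 8])               # hypothetical column of free terms

# 1) consistency: rank of A equals the rank of the extended matrix
assert A.rank() == A.row_join(b).rank()

# 2) general solution of the reduced homogeneous system A*x = 0
fsr = A.nullspace()                 # basis of the solution space

# 3) a particular solution of A*x = b (free parameters set to zero)
sol, params = A.gauss_jordan_solve(b)
particular = sol.subs({p: 0 for p in params})

# 4) general solution = particular solution + general solution of the reduced system
a = sp.symbols('a')
general = particular + a * fsr[0]
print(sp.simplify(A * general - b))  # zero column for every value of a
```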

§3. Some applications.

3.1. On matrix equations. Recall that a matrix equation over a field F is an equation in which the unknown is a matrix over the field F.


The simplest matrix equations are equations of the form

AX = B,   XA = B   (2.5.1)

where A and B are given (known) matrices over a field F, and X denotes the matrices whose substitution turns equations (2.5.1) into true matrix equalities. In particular, the matrix method of solving certain systems reduces to solving a matrix equation.

When the matrix A in equations (2.5.1) is non-singular, they have the solutions X = A⁻¹B and X = BA⁻¹, respectively.

When the matrix A in equations (2.5.1) is singular, this method is no longer suitable, since the inverse matrix A⁻¹ does not exist. In this case, finding the solutions of equations (2.5.1) reduces to solving systems of linear equations.

But first, let's introduce some concepts.

Let us call the set of all solutions of a system its general solution. Let us call an individual solution of an indeterminate system a particular solution.

3.1.1. Example. Solve the matrix equations over the field R.

a) X = ; b) X = ; c) X = .

Solution. a) Since det A = 0, a formula based on the inverse matrix is not suitable for solving this equation. If in the product XA = B the matrix A has 2 rows, then the matrix X has 2 columns. The number of rows of X must coincide with the number of rows of B, so X has 2 rows. Thus X is some square matrix of order two: X = . Let us substitute X into the original equation:

Multiplying the matrices on the left side of (2.5.2), we arrive at the equality

Two matrices are equal if and only if they have the same dimensions and their corresponding elements are equal. Therefore (2.5.3) is equivalent to the system

This system is equivalent to the system

Solving it, for example, by the Gaussian method, we arrive at the set of solutions (5 - 2b, b, -2d, d), where b and d range over R independently of each other. Thus, X = .

b) Similarly to a), we have X = and the system

This system is inconsistent (check this!). Therefore, the given matrix equation has no solutions.

c) Let us denote this equation by AX = B. Since A has 3 columns and B has 2 columns, X is some matrix of dimension 3×2: X = . Therefore we have the following chain of equivalences:

We solve the last system using the Gaussian method (we omit comments)

Thus, we arrive at the system

whose solution is (11 + 8z, 14 + 10z, z, -49 + 8w, -58 + 10w, w), where z and w range over R independently of each other.

Answer: a) X = , where b, d ∈ R.

b) There are no solutions.

c) X = , where z, w ∈ R.
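The entry-by-entry method used in Example 3.1.1 can be sketched in Python (sympy). The matrices of the original example were not preserved in the text, so the singular matrix A and the right-hand side B below are stand-ins chosen only for illustration.

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])             # hypothetical, det A = 0, so the inverse-matrix formula is unusable
B = sp.Matrix([[1, 2],
               [2, 4]])             # hypothetical right-hand side

x11, x12, x21, x22 = sp.symbols('x11 x12 x21 x22')
X = sp.Matrix([[x11, x12],
               [x21, x22]])         # unknown second-order matrix

# X*A = B is equivalent to a linear system in the entries of X
equations = list(X * A - B)
solution = sp.solve(equations, [x11, x12, x21, x22], dict=True)
print(solution)   # a family of solutions here; an empty list would mean no solutions
```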

3.2. On the commutativity of matrices. In general, matrix multiplication is not commutative: if A and B are such that both AB and BA are defined, then, generally speaking, AB ≠ BA. But the example of the identity matrix E shows that commutativity is also possible: AE = EA for any matrix A, provided that AE and EA are defined.

In this section we will consider problems of finding the set of all matrices that commute with a given one. Thus,

The unknowns x1, y2 and z3 can take any values: x1 = a, y2 = b, z3 = g. Then

Thus, X = .

Answer. a) X = , where d is any number.

b) X is the set of matrices of the form , where a, b and g are any numbers.
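The commuting-matrices problem of section 3.2 can be sketched in the same way: the condition AX = XA is rewritten as a homogeneous linear system in the entries of X. The matrix A below is a stand-in, since the matrix of the worked example is not shown in the text.

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])                       # hypothetical given matrix

entries = sp.symbols('x11 x12 x21 x22')
X = sp.Matrix(2, 2, entries)

commute_conditions = list(A * X - X * A)      # every entry must vanish
solution = sp.solve(commute_conditions, entries, dict=True)
print(solution)
# For this A: x21 = 0 and x11 = x22, i.e. X = [[a, b], [0, a]] with a, b arbitrary
```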

Kaluga branch of the federal state budgetary educational institution of higher professional education

"Moscow State Technical University named after N.E. Bauman"

(Kaluga Branch of Moscow State Technical University named after N.E. Bauman)

Vlaykov N.D.

Solution of homogeneous SLAEs

Guidelines for conducting exercises

on the course of analytical geometry

Kaluga 2011

Lesson objectives

Lesson plan

Necessary theoretical information

Practical part

Monitoring the mastery of the material covered

Homework

Number of hours: 2

Lesson objectives:

    Systematize the acquired theoretical knowledge about the types of SLAEs and methods for solving them.

    Gain skills in solving homogeneous SLAEs.

Lesson plan:

    Briefly outline the theoretical material.

    Solve a homogeneous SLAE.

    Find the fundamental system of solutions of a homogeneous SLAE.

    Find a particular solution of a homogeneous SLAE.

    Formulate an algorithm for solving a homogeneous SLAE.

    Check your current homework.

    Carry out verification work.

    Present the topic of the next seminar.

    Assign the current homework.

Necessary theoretical information.

Matrix rank.

Def. The rank of a matrix is the number equal to the maximum order among its nonzero minors. The rank of a matrix A is denoted rg A.

If a square matrix is non-singular, then its rank is equal to its order. If a square matrix is singular, then its rank is less than its order.

The rank of a diagonal matrix is equal to the number of its non-zero diagonal elements.

Theor. When a matrix is transposed, its rank does not change, i.e. rg(Aᵀ) = rg(A).

Theor. The rank of a matrix does not change under elementary transformations of its rows and columns.

The theorem on the basis minor.

Def. A minor of a matrix is called a basis minor if two conditions are met:

a) it is not equal to zero;

b) its order is equal to the rank of the matrix.

A matrix may have several basis minors.

The rows and columns of the matrix in which the chosen basis minor is located are called basis rows and columns.

Theor. The theorem on the basis minor. The basis rows (columns) of a matrix, corresponding to any of its basis minors, are linearly independent. Any rows (columns) of the matrix not included among them are linear combinations of the basis rows (columns).

Theor. For any matrix, its rank is equal to the maximum number of its linearly independent rows (columns).

Calculating the rank of a matrix. Method of elementary transformations.

Using elementary row transformations, any matrix can be reduced to echelon form. The rank of an echelon matrix is equal to the number of its non-zero rows. A basis minor in it is the minor located at the intersection of the non-zero rows with the columns containing the first (leftmost) non-zero element of each of these rows.
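A small Python (sympy) illustration of the statement above; the matrix is an arbitrary example, not one taken from the text.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

echelon = A.echelon_form()     # echelon form obtained by elementary row transformations
nonzero_rows = sum(1 for i in range(echelon.rows) if any(echelon.row(i)))
print(nonzero_rows, A.rank())  # both equal 2 for this matrix
```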

SLAEs. Basic definitions.

Def. A system of the form

(15.1)

is called a system of linear algebraic equations (SLAE). The numbers aij are called the coefficients of the SLAE, and the numbers bi are called the free terms of the equations.

Writing the SLAE in the form (15.1) is called the coordinate form.

Def. An SLAE is called homogeneous if all its free terms are equal to zero. Otherwise it is called inhomogeneous.

Def. A solution of an SLAE is a set of values of the unknowns such that, upon substitution, every equation of the system turns into an identity. Any specific solution of an SLAE is also called a particular solution.

To solve an SLAE means to solve two problems:

Find out whether the SLAE has solutions;

Find all solutions if they exist.

Def. An SLAE is called consistent if it has at least one solution. Otherwise it is called inconsistent.

Def. If the SLAE (15.1) has a unique solution, it is called determinate; if the solution is not unique, it is called indeterminate.

Def. If in system (15.1) the number of equations is equal to the number of unknowns, the SLAE is called square.

Forms of writing an SLAE.

In addition to the coordinate form (15.1), other representations of an SLAE are often used.

(15.2)

Relation (15.2) is called the vector form of writing the SLAE.

Using the matrix product, SLAE (15.1) can also be written as follows:

(15.3)

or
.

Writing SLAE (15.1) in the form (15.3) is called the matrix form.
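A tiny numerical illustration of the coordinate form (15.1) versus the matrix form (15.3): the same system written as Ax = b and checked with Python (numpy). The numbers are arbitrary and are not taken from the text.

```python
import numpy as np

# coordinate form:  2*x1 + 1*x2 = 5
#                   1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # matrix of the system
b = np.array([5.0, 10.0])        # column of free terms

x = np.linalg.solve(A, b)        # solution of the matrix equation A*x = b
print(x, np.allclose(A @ x, b))  # [1. 3.] True
```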

Homogeneous SLAEs.

A homogeneous system of linear algebraic equations with n unknowns is a system of the form

Homogeneous SLAEs are always consistent, since there is always a zero solution.

Criterion for the existence of a non-zero solution. For a nonzero solution to exist for a homogeneous square SLAE, it is necessary and sufficient that its matrix be singular.

Theor. If the columns X1, X2, …, Xk are solutions of a homogeneous SLAE, then any linear combination of them is also a solution of this system.

Corollary. If a homogeneous SLAE has a non-zero solution, then it has infinitely many solutions.

It is natural to try to find solutions X1, X2, … of the system such that any other solution can be represented as a linear combination of them, and moreover in a unique way.

Def. Any set of n - r linearly independent columns X1, X2, …, X(n-r) that are solutions of a homogeneous SLAE, where n is the number of unknowns and r is the rank of its matrix, is called a fundamental system of solutions (FSR) of this homogeneous SLAE.

When studying and solving homogeneous systems of linear equations, we will fix the basis minor in the matrix of the system. The basis minor will correspond to basis columns and, therefore, basis unknowns. We will call the remaining unknowns free.

Theor. On the structure of the general solution of a homogeneous SLAE. If X1, X2, …, X(n-r) is an arbitrary fundamental system of solutions of a homogeneous SLAE, then any of its solutions can be represented in the form

X = c1X1 + c2X2 + … + c(n-r)X(n-r),

where c1, …, c(n-r) are some constants.

Thus, the general solution of a homogeneous SLAE has the form X = c1X1 + c2X2 + … + c(n-r)X(n-r).
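A short Python (sympy) sketch of this structure theorem; the matrix below is an arbitrary rank-deficient example rather than one from the text.

```python
import sympy as sp

A = sp.Matrix([[1, 1, 1, 1],
               [1, 2, 3, 4]])      # rank 2, n = 4, so the FSR has n - r = 2 columns

fsr = A.nullspace()                # fundamental system of solutions X1, X2
c1, c2 = sp.symbols('c1 c2')
general = c1 * fsr[0] + c2 * fsr[1]

print(len(fsr))                    # 2
print(sp.simplify(A * general))    # zero column for all c1, c2
```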

Practical part.

    Consider possible sets of solutions of the following types of SLAEs and their graphical interpretation.

;
;
.

    Consider the possibility of solving these systems using Cramer’s formulas and the matrix method.

    Explain the essence of the Gauss method.

    Solve the following problems.

Example 1. Solve a homogeneous SLAE. Find FSR.

.

Let us write down the matrix of the system and reduce it to echelon form.

.

The system will have infinitely many solutions. The FSR will consist of n - r columns.

Let us discard the zero rows and write the system again:

.

We take the basis minor to be in the upper left corner. Thus the unknowns corresponding to it are the basic ones, and the remaining unknowns are free. Let us express the basic unknowns through the free ones:

;

Let us assign parameter values to the free unknowns.

Finally we have:

- coordinate form of the answer, or

- matrix form of the answer, or

- vector form of the answer (the column vectors are the columns of the FSR).

Algorithm for solving a homogeneous SLAE.

Find the FSR and the general solution of the following systems:

2.225(4.39)

. Answer:

2.223(2.37)

. Answer:

2.227(2.41)

. Answer:

Solve a homogeneous SLAE:

. Answer:

Solve a homogeneous SLAE:

. Answer:

Presentation of the topic of the next seminar.

Solving systems of inhomogeneous linear equations.

Monitoring the mastery of the material covered.

Test work: 3 to 5 minutes. Four students with odd numbers in the class register take part, starting from No. 10.

Follow these steps:

;
;

Follow these steps:

Calculate the determinant:

Follow these steps:


Follow these steps:

Find the inverse matrix of this one:

Calculate the determinant:

Homework:

1. Solve problems:

№ 2.224, 2.226, 2.228, 2.230, 2.231, 2.232.

2. Work through lectures on the following topics:

Systems of linear algebraic equations (SLAEs). Coordinate, matrix and vector forms of writing. The Kronecker-Capelli consistency criterion for SLAEs. Inhomogeneous SLAEs. A criterion for the existence of a non-zero solution of a homogeneous SLAE. Properties of solutions of a homogeneous SLAE. The fundamental system of solutions of a homogeneous SLAE and the theorem on its existence. The normal fundamental system of solutions. The theorem on the structure of the general solution of a homogeneous SLAE. The theorem on the structure of the general solution of an inhomogeneous SLAE.

A linear system is called homogeneous if all its free terms are equal to 0.

In matrix form, a homogeneous system is written as AX = 0.

The homogeneous system (2) is always consistent. Obviously, the set of numbers x1 = 0, x2 = 0, …, xn = 0 satisfies every equation of the system. This solution is called the zero, or trivial, solution. Thus, a homogeneous system always has the zero solution.

Under what conditions will the homogeneous system (2) have non-zero (non-trivial) solutions?

Theorem 1.3. The homogeneous system (2) has non-zero solutions if and only if the rank r of its main matrix is less than the number of unknowns n.

In this case system (2) is indeterminate.

Corollary 1. If the number of equations m of a homogeneous system is less than the number of variables n, then the system is indeterminate and has infinitely many non-zero solutions.

Corollary 2. A square homogeneous system has non-zero solutions if and only if the main matrix of this system is singular, i.e. its determinant is equal to zero.

Otherwise, if the determinant is non-zero, a square homogeneous system has only the zero solution.

Let the rank of system (2) be r < n, that is, let system (2) have non-trivial solutions.

Let X1 and X2 be particular solutions of this system, i.e. AX1 = 0 and AX2 = 0.

Properties of solutions of a homogeneous system

1) The sum X1 + X2 of two solutions is also a solution of system (2).

Indeed, A(X1 + X2) = AX1 + AX2 = 0 + 0 = 0.

2) The product cX1 of a solution by any number c is also a solution of system (2).

Indeed, A(cX1) = c(AX1) = c·0 = 0.

Combining properties 1) and 2), we can say that if X1, X2, …, Xk are solutions of the homogeneous system (2), then any linear combination of them is also a solution. Here the coefficients of the combination are arbitrary real numbers.

One can find n - r linearly independent particular solutions of the homogeneous system (2) with whose help any other particular solution of this system can be obtained, i.e. the general solution of system (2) can be obtained.

Definition 2.2. A collection of n - r linearly independent particular solutions X1, X2, …, X(n-r) of the homogeneous system (2) such that every solution of system (2) can be represented as a linear combination of them is called a fundamental system of solutions (FSR) of the homogeneous system (2).

If X1, X2, …, X(n-r) is a fundamental system of solutions, then the general solution of the homogeneous system (2) can be represented as

X = c1X1 + c2X2 + … + c(n-r)X(n-r),

where c1, c2, …, c(n-r) are arbitrary constants.

Comment. To obtain an FSR, one needs to find particular solutions X1, X2, …, X(n-r), giving one free variable the value 1 in turn and all the other free variables the value 0.

We obtain X1, X2, …, X(n-r), which form an FSR.
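A minimal Python (sympy) sketch of this recipe. The parametric general solution below is a hypothetical stand-in (the systems of the surrounding examples are not reproduced in the text); the free variables are the parameters c1 and c2.

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2')
# hypothetical general solution of a homogeneous system:
# x = (3*c1 - c2, c1, -2*c2, c2)
general = sp.Matrix([3*c1 - c2, c1, -2*c2, c2])

X1 = general.subs({c1: 1, c2: 0})   # first FSR column:  (3, 1, 0, 0)
X2 = general.subs({c1: 0, c2: 1})   # second FSR column: (-1, 0, -2, 1)
print(X1.T, X2.T)
```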

Example. Find the general solution and fundamental system of solutions of the homogeneous system of equations:

Solution. Let us write down the extended matrix of the system, having first moved the last equation of the system to the first place, and reduce it to echelon form. Since the right-hand sides of the equations remain zero under elementary transformations, the column of free terms need not be written out.


The rank of the system is less than n, where n is the number of variables. The system is indeterminate and has infinitely many solutions.

The basis minor for some of the variables is non-zero: we choose these as the basic variables, and the rest are free variables (they may take any real values).

The last matrix in the chain corresponds to the echelon system of equations:

(3)

Let us express the basic variables through the free variables (the reverse pass of the Gaussian method).

From the last equation we express the last basic variable and substitute it into the first equation. Expanding the brackets, collecting like terms and expressing the remaining basic variable, we obtain:

Setting the free variables equal to arbitrary parameters, we write down the general solution of the system.

Let us find a fundamental system of solutions:

Then the general solution of the homogeneous system can be written as:

Comment. The FSR could also have been found in another way, without first finding the general solution of the system. To do this, the echelon system (3) would have to be solved three times, each time setting one of the free variables equal to 1 and the other two equal to 0.

Homogeneous systems of linear algebraic equations

In the lessons on the Gaussian method and on inconsistent systems / systems with a general solution, we considered inhomogeneous systems of linear equations, where the free term (which usually stands on the right) of at least one of the equations was different from zero.
And now, after a good warm-up with matrix rank, we will continue to polish the technique of elementary transformations on a homogeneous system of linear equations.
Judging from the first paragraphs, the material may seem boring and unremarkable, but this impression is deceptive. Besides further development of technique, there will be a lot of new information, so please try not to neglect the examples in this article.

What is a homogeneous system of linear equations?

The answer suggests itself. A system of linear equations is homogeneous if the free term of every equation of the system is zero. For example:

It is quite clear that a homogeneous system is always consistent, that is, it always has a solution. And the first thing that catches your eye is the so-called trivial solution. Trivial, for those unsure of the meaning of the adjective, means plain and unremarkable. Not academic, of course, but intelligible =) ...Why beat around the bush, let's find out whether this system has any other solutions:

Example 1

Solution: to solve a homogeneous system it is necessary to write down the system matrix and, with the help of elementary transformations, bring it to echelon form. Note that here there is no need to write down the vertical bar and the zero column of free terms: no matter what you do with zeros, they will remain zeros.

(1) The first row, multiplied by -2, was added to the second row. The first row, multiplied by -3, was added to the third row.

(2) The second row, multiplied by -1, was added to the third row.

Dividing the third row by 3 does not make much sense.

As a result of the elementary transformations an equivalent homogeneous system is obtained, and, using the reverse pass of the Gaussian method, it is easy to verify that the solution is unique.



Answer:

Let us formulate an obvious criterion: a homogeneous system of linear equations has only the trivial solution if the rank of the system matrix (in this case 3) is equal to the number of variables (in this case also 3).
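A quick numerical check of this criterion in Python (numpy); the matrix is an arbitrary stand-in, not the matrix of Example 1.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

n = A.shape[1]                      # number of variables
r = np.linalg.matrix_rank(A)
print(r == n)                       # True here: A*x = 0 has only the trivial solution
```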

Let's warm up and tune our radio to the wave of elementary transformations:

Example 2

Solve a homogeneous system of linear equations

From the article How to find the rank of a matrix? let us recall the rational technique of simultaneously decreasing the numbers in the matrix; otherwise you will have to carve up large, and often biting, fish. An approximate sample task is at the end of the lesson.

Zeros are good and convenient, but in practice the case where the rows of the system matrix are linearly dependent is much more common. And then the appearance of a general solution is inevitable:

Example 3

Solve a homogeneous system of linear equations

Solution: let us write down the matrix of the system and, using elementary transformations, bring it to echelon form. The first action is aimed not only at obtaining a unit entry, but also at decreasing the numbers in the first column:

(1) The third row, multiplied by -1, was added to the first row. The third row, multiplied by -2, was added to the second row. At the top left we obtained a unit with a minus sign, which is often much more convenient for further transformations.

(2) The first two rows turned out to be the same, and one of them was deleted. Honestly, I did not force the solution in that direction; it just turned out that way. If you perform the transformations in a routine way, the linear dependence of the rows would have been revealed a little later.

(3) The second row, multiplied by 3, was added to the third row.

(4) The sign of the first row was changed.

As a result of elementary transformations, an equivalent system was obtained:

The algorithm works exactly the same as for inhomogeneous systems. The variables "sitting on the steps" are the basic ones; the variable that did not get a "step" is free.

Let's express the basic variables through a free variable:

Answer: general solution:

The trivial solution is included in the general formula, and it is unnecessary to write it down separately.

The check is also carried out according to the usual scheme: the general solution obtained must be substituted into the left-hand side of each equation of the system, and a legitimate zero must be obtained for all substitutions.
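A sketch of that check in Python (sympy). The system and its general solution here are stand-ins, since the matrices of Example 3 are not reproduced in the text; substituting the general solution must turn every left-hand side into zero.

```python
import sympy as sp

x1, x2, x3, t = sp.symbols('x1 x2 x3 t')
equations = [x1 + x2 - 2*x3,        # hypothetical homogeneous system (left-hand sides)
             2*x1 - x2 - x3]

general = {x1: t, x2: t, x3: t}     # hypothetical general solution with free variable t

for lhs in equations:
    print(sp.simplify(lhs.subs(general)))   # prints 0 for every equation
```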

We could finish here quietly and peacefully, but the solution of a homogeneous system of equations often needs to be represented in vector form using a fundamental system of solutions. Please forget about analytical geometry for now, since we will be talking about vectors in the general algebraic sense, which I touched on a little in the article about matrix rank. There is no need to be put off by the terminology; everything is quite simple.

A homogeneous system of linear equations AX = 0 is always consistent. It has non-trivial (non-zero) solutions if r = rank A < n.

For homogeneous systems, the basic variables (the coefficients of which form the basic minor) are expressed through free variables by relations of the form:

Then the n - r linearly independent vector solutions will be:

and any other solution is a linear combination of them. These vector solutions form a normal fundamental system of solutions.

In a linear space, the set of solutions of a homogeneous system of linear equations forms a subspace of dimension n - r; the fundamental system of solutions is a basis of this subspace.

A system of m linear equations with n unknowns (or a linear system) is a system of equations of the form

Here x1, x2, …, xn are the unknowns to be determined; a11, a12, …, amn are the coefficients of the system and b1, b2, …, bm are the free terms, which are assumed to be known. The indices of a coefficient aij denote the number of the equation (i) and of the unknown (j) at which this coefficient stands.

System (1) is called homogeneous if all its free terms are equal to zero (b1 = b2 = … = bm = 0), otherwise it is called inhomogeneous.

System (1) is called square if the number m of equations is equal to the number n of unknowns.

A solution of system (1) is a set of n numbers c1, c2, …, cn such that substituting each ci for xi in system (1) turns all its equations into identities.

System (1) is called consistent if it has at least one solution, and inconsistent if it has no solutions.

Solutions c1(1), c2(1), …, cn(1) and c1(2), c2(2), …, cn(2) of a consistent system of the form (1) are called distinct if at least one of the equalities

c1(1) = c1(2), c2(1) = c2(2), …, cn(1) = cn(2)

is violated.

A consistent system of the form (1) is called determinate if it has a unique solution and indeterminate if the solution is not unique. If there are more equations than unknowns, it is called overdetermined.

Solving systems of linear equations

Solving matrix equations. The Gauss method.

Methods for solving systems of linear equations are divided into two groups:

1. exact methods, which are finite algorithms for computing the roots of a system (solution of systems via the inverse matrix, Cramer's rule, the Gauss method, etc.),

2. iterative methods, which make it possible to obtain a solution to the system with a given accuracy through convergent iterative processes (iteration method, Seidel method, etc.).

Due to inevitable rounding, the results of even exact methods are approximate. When using iterative methods, in addition, the error of the method is added.

The effective use of iterative methods significantly depends on the successful choice of the initial approximation and the speed of convergence of the process.

Solving matrix equations

Consider a system of n linear algebraic equations in n unknowns x1, x2, …, xn:

. (15)

The matrix A, whose columns contain the coefficients of the corresponding unknowns and whose rows contain the coefficients of the unknowns in the corresponding equation, is called the matrix of the system; the column matrix b, whose elements are the right-hand sides of the equations of the system, is called the right-hand side matrix or simply the right-hand side of the system. The column matrix x, whose elements are the unknowns to be found, is called the solution of the system.

If the matrix A is non-singular, that is, det A is not equal to 0, then system (13), or the matrix equation (14) equivalent to it, has a unique solution.

Indeed, provided det A is not equal to 0, the inverse matrix A⁻¹ exists. Multiplying both sides of equation (14) on the left by the matrix A⁻¹, we get:

(16)

Formula (16) gives the solution of equation (14), and this solution is unique.
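A small numerical illustration of formula (16) in Python (numpy); the numbers are arbitrary. In practice the solution is usually computed without explicitly forming A⁻¹, which the second line shows.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_inverse = np.linalg.inv(A) @ b     # x = A^(-1) * b, as in (16)
x_direct  = np.linalg.solve(A, b)    # same result without forming the inverse
print(x_inverse, np.allclose(x_inverse, x_direct))   # [2. 3.] True
```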

It is convenient to solve systems of linear equations using the function lsolve.

lsolve( A, b)

A solution vector x is returned such that Ax = b.

Arguments:

A is a square, non-singular matrix.

b is a vector having the same number of rows as the matrix A.

Figure 8 shows the solution to a system of three linear equations in three unknowns.

Gauss method

The Gaussian method, also called the method of Gaussian elimination, consists in reducing system (13), by successive elimination of unknowns, to an equivalent system with a triangular matrix:

In matrix notation this means that first (the forward pass of the Gaussian method), by elementary row operations, the extended matrix of the system is reduced to echelon form:

and then (the reverse pass of the Gaussian method) this echelon matrix is transformed so that in the first n columns an identity matrix is obtained:

.

The last, (n + 1)-th, column of this matrix contains the solution of system (13).

In Mathcad, the forward and reverse passes of the Gaussian method are performed by the function rref(A).

Figure 9 shows the solution of a system of linear equations using the Gaussian method, which uses the following functions:

rref(A)

Returns the echelon (reduced) form of the matrix A.

augment(A, B)

Returns an array formed by placing A and B side by side. The arrays A and B must have the same number of rows.

submatrix(A, ir, jr, ic, jc)

Returns the submatrix consisting of all elements in rows ir through jr and columns ic through jc. Make sure that ir ≤ jr and ic ≤ jc, otherwise the order of the rows and/or columns will be reversed.
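For readers without Mathcad, here is a rough Python (sympy) analogue of the workflow just described: augment the matrix of the system with the right-hand side, reduce it to reduced echelon form, and read the solution from the last column. The system is an arbitrary example with a non-singular matrix.

```python
import sympy as sp

A = sp.Matrix([[2, 1, -1],
               [1, 3,  2],
               [3, 1,  1]])
b = sp.Matrix([1, 13, 8])

augmented = A.row_join(b)             # the analogue of augment(A, b)
rref_form, pivots = augmented.rref()  # the analogue of rref(...)
solution = rref_form[:, -1]           # for a non-singular A the last column is the solution
print(solution.T)                     # Matrix([[1, 2, 3]])
print(A * solution - b)               # zero column: the solution checks out
```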

Figure 9.

Cramer's rule. Description of the method

For a system of n linear equations with n unknowns (over an arbitrary field)

with the determinant of the system matrix Δ different from zero, the solution is written in the form

(the i-th column of the system matrix is replaced by the column of free terms).
In another form, Cramer's rule is formulated as follows: for any coefficients c1, c2, ..., cn the following equality holds:

In this form, Cramer's formula is valid without the assumption that Δ is different from zero; it is not even necessary that the coefficients of the system be elements of an integral domain (the determinant of the system may even be a zero divisor in the ring of coefficients). One can also assume that either the sets b1, b2, ..., bn and x1, x2, ..., xn, or the set c1, c2, ..., cn, consist not of elements of the ring of coefficients of the system but of some module over this ring. In this form, Cramer's formula is used, for example, in the proof of the formula for the Gram determinant and of Nakayama's lemma.
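A short Python (numpy) sketch of Cramer's rule in its basic form (non-zero determinant of the system); the numbers are arbitrary.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([4.0, 11.0])

delta = np.linalg.det(A)                 # determinant of the system, must be non-zero here
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b                        # replace the i-th column with the free terms
    x[i] = np.linalg.det(A_i) / delta

print(x, np.allclose(A @ x, b))          # [1. 2.] True
```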

35) Kronecker-Capelli theorem
In order for a system of m inhomogeneous linear equations in n unknowns to be consistent, it is necessary and sufficient that the rank of its main matrix be equal to the rank of its extended matrix.

Proof of necessity. Let system (1.13) be consistent, that is, let there exist numbers x1 = α1, x2 = α2, …, xn = αn such that the equalities (1.15) hold. Let us subtract from the last column of the extended matrix its first column multiplied by α1, the second multiplied by α2, …, the n-th multiplied by αn; that is, from the last column of matrix (1.14) we subtract the left-hand sides of the equalities (1.15). We then obtain a matrix whose rank is unchanged by these elementary transformations and whose last column consists of zeros. But the rank of such a matrix obviously equals the rank of the main matrix, and therefore the ranks coincide, which proves the necessity.

Proof of sufficiency. Let the ranks be equal, and for definiteness let a non-zero minor of order r be located in the upper left corner of the matrix. This means that the remaining rows of the matrix can be obtained as linear combinations of the first r rows, that is, the m - r remaining rows of the matrix can be represented as sums of the first r rows multiplied by some numbers. But then the first r equations of system (1.13) are independent, and the rest are their consequences; that is, a solution of the system of the first r equations is automatically a solution of the remaining equations.

There are two possible cases.

1. r = n. Then the system consisting of the first r equations has the same number of equations and unknowns and is consistent, and its solution is unique.

2. r < n. Then we arrive at the shortened system (1.16). The "free" unknowns x(r+1), x(r+2), …, xn can be given any values; the unknowns x1, x2, …, xr then receive the corresponding values. System (1.13) in this case is consistent but indeterminate.

Comment. For a nonzero minor of order r, where r < n, the unknowns x1, x2, …, xr are also called basic and the rest free. System (1.16) is called the shortened system. If the free unknowns are denoted x(r+1) = c1, x(r+2) = c2, …, xn = c(n-r), then the basic unknowns depend on them; that is, the solution of a system of m equations with n unknowns has the form

X = (x1(c1, …, c(n-r)), x2(c1, …, c(n-r)), …, xr(c1, …, c(n-r)), c1, c2, …, c(n-r))ᵀ,

where the symbol ᵀ denotes transposition. This solution of the system is called the general solution.

36) Determinacy and indeterminacy
A system of m linear equations with n unknowns (or a linear system) in linear algebra is a system of equations of the form

Here x1, x2, …, xn are the unknowns to be determined; a11, a12, …, amn are the coefficients of the system and b1, b2, …, bm are the free terms, which are assumed to be known. The indices of a coefficient (aij) of the system denote the numbers of the equation (i) and of the unknown (j) at which this coefficient stands, respectively.

System (1) is called homogeneous if all its free terms are equal to zero (b1 = b2 = … = bm = 0); otherwise it is called inhomogeneous.

System (1) is called consistent if it has at least one solution, and inconsistent if it has no solutions.

A consistent system of the form (1) may have one or more solutions.

Solutions c1(1), c2(1), …, cn(1) and c1(2), c2(2), …, cn(2) of a consistent system of the form (1) are called distinct if at least one of the following equalities is violated:

c1(1) = c1(2), c2(1) = c2(2), …, cn(1) = cn(2).

A consistent system of the form (1) is called determinate if it has a unique solution; if it has at least two distinct solutions, it is called indeterminate.

37) Solving systems of linear equations using the Gauss method

Let the original system look like this

The matrix A is called the main matrix of the system, and b is the column of free terms.

Then, by the properties of elementary row transformations, the main matrix of this system can be reduced to echelon form (the same transformations must be applied to the column of free terms):

The variables standing at the leading (pivot) positions are then called the basic variables. All the others are called free.

Consistency condition

The above condition (for all i) can be formulated as a necessary and sufficient condition for consistency:

Recall that the rank of a consistent system is the rank of its main matrix (or of its extended matrix, since they are equal).

Algorithm

Description

The algorithm for solving SLAEs using the Gaussian method is divided into two stages.

§ At the first stage the so-called forward pass is carried out: by means of elementary row transformations the system is brought to echelon or triangular form, or it is established that the system is inconsistent. Namely, among the elements of the first column of the matrix a non-zero one is chosen and moved to the topmost position by swapping rows; then the first row obtained after the swap is subtracted from each of the remaining rows, multiplied by the ratio of the first element of that row to the first element of the first row, thereby zeroing out the column below it. After these transformations the first row and the first column are mentally crossed out, and the process is continued until a matrix of zero size remains. If at some iteration there is no non-zero element among the elements of the first column, one passes to the next column and performs a similar operation.

§ At the second stage the so-called reverse pass is carried out, the essence of which is to express all the resulting basic variables in terms of the non-basic ones and to construct a fundamental system of solutions, or, if all the variables are basic, to express numerically the unique solution of the system of linear equations. This procedure begins with the last equation, from which the corresponding basic variable is expressed (there is exactly one) and substituted into the preceding equations, and so on, moving up the "steps". Each row corresponds to exactly one basic variable, so at every step except the last (topmost) the situation exactly repeats the case of the last row.

The Gaussian method requires on the order of O(n³) operations.
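A compact Python (numpy) sketch of the two stages just described: the forward pass with row swaps, then the reverse pass (back substitution). It assumes a square system with a unique solution; a full implementation would also detect inconsistency and free variables.

```python
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # forward pass: reduce to triangular form
    for k in range(n):
        pivot = k + np.argmax(np.abs(A[k:, k]))   # choose a non-zero (largest) pivot
        A[[k, pivot]] = A[[pivot, k]]             # swap rows of the matrix
        b[[k, pivot]] = b[[pivot, k]]             # and of the right-hand side
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # reverse pass: back substitution from the last equation upwards
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))                 # [ 2.  3. -1.]
```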

This method relies on:

38) Kronecker-Capelli theorem.
A system is consistent if and only if the rank of its main matrix is equal to the rank of its extended matrix.