Solving a system of equations by the matrix method. Solving an SLAE (system of linear algebraic equations) using the inverse matrix method

The online calculator solves a system of linear equations by the matrix method and gives a very detailed solution. To solve a system of linear equations, select the number of variables, choose a method for calculating the inverse matrix, then enter the data in the cells and click the "Calculate" button.


Data entry instructions. Numbers are entered as integers (examples: 487, 5, -7623, etc.), decimals (e.g. 67., 102.54, etc.) or fractions. A fraction must be entered in the form a/b, where a and b are integers or decimal numbers, for example 45/5, 6.6/76.4, -7/6.7, etc.
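
For readers implementing such a calculator themselves, input in this format can be parsed, for example, with Python's fractions module; the helper parse_entry below is only an illustrative sketch, not the calculator's actual code.

```python
from fractions import Fraction

def parse_entry(text: str) -> Fraction:
    """Parse a cell entry such as '487', '102.54' or '6.6/76.4' into an exact fraction."""
    if "/" in text:
        num, den = text.split("/")
        # Fraction accepts integer and decimal strings such as '6.6'
        return Fraction(num) / Fraction(den)
    return Fraction(text)

print(parse_entry("-7/6.7"))   # -70/67
print(parse_entry("102.54"))   # 5127/50
```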

Matrix method for solving systems of linear equations

Consider the following system of linear equations:

By the definition of the inverse matrix, A -1 A = E, where E is the identity matrix. Therefore (4) can be written as follows:

Thus, to solve the system of linear equations (1) (or (2)), it is enough to multiply the inverse matrix A -1 by the constraint vector b.

Examples of solving a system of linear equations using the matrix method

Example 1. Solve the following system of linear equations using the matrix method:

Let's find the inverse of matrix A using the Jordan-Gauss method. To the right of matrix A we write the identity matrix:

Let's eliminate the elements of the 1st column of the matrix below the main diagonal. To do this, add row 1 multiplied by -1/3 and -1/3 to rows 2 and 3, respectively:

Let's eliminate the elements of the 2nd column of the matrix below the main diagonal. To do this, add row 2 multiplied by -24/51 to row 3:

Let's eliminate the elements of the 2nd column of the matrix above the main diagonal. To do this, add row 2 multiplied by -3/17 to row 1:

Separate the right half of the matrix. The resulting matrix is the inverse of A:

The matrix form of the system of linear equations is Ax = b, where

Let's calculate all cofactors (algebraic complements) of the matrix A:


The inverse matrix is calculated from the following expression:

Let there be a square matrix of order n:

Matrix A -1 is called the inverse of matrix A if A·A -1 = E, where E is the identity matrix of order n.

An identity matrix is a square matrix in which all the elements along the main diagonal (from the upper left corner to the lower right corner) are ones and the rest are zeros, for example:

An inverse matrix can exist only for square matrices, i.e. for matrices in which the number of rows and columns coincide.

Theorem for the existence condition of an inverse matrix

In order for a matrix to have an inverse matrix, it is necessary and sufficient that it be non-singular.

A matrix A = (A 1, A 2, ..., A n) is called non-degenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, for the inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

Algorithm for finding the inverse matrix

  1. Write matrix A into the table used for solving systems of equations by the Gaussian method and append matrix E to it on the right (in place of the right-hand sides of the equations).
  2. Using Jordan transformations, reduce matrix A to a matrix consisting of unit columns; the same transformations must be applied simultaneously to matrix E.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Write down the inverse matrix A -1, which is located in the last table under the matrix E of the original table.
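
As an illustrative sketch of this algorithm (not the calculator's own code), the inverse can be obtained by row-reducing the block [A | E]; the version below uses numpy and adds row swaps (partial pivoting) for robustness.

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Compute the inverse of A by Jordan-Gauss elimination on the augmented block [A | E]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])              # write the identity matrix to the right of A
    for col in range(n):
        pivot = np.argmax(np.abs(M[col:, col])) + col   # pick the largest pivot in the column
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular, no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]      # swap rows if necessary
        M[col] /= M[col, col]                  # make the pivot equal to 1
        for row in range(n):                   # zero out the rest of the column (above and below)
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                            # the right half is the inverse matrix

A = np.array([[2.0, 1.0], [5.0, 3.0]])
print(inverse_gauss_jordan(A))                 # approximately [[3, -1], [-5, 2]]
```
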
Example 1

For matrix A, find the inverse matrix A -1

Solution: We write down matrix A and append the identity matrix E on the right. Using Jordan transformations, we reduce matrix A to the identity matrix E. The calculations are given in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A and the inverse matrix A -1.

As a result of matrix multiplication, the identity matrix was obtained. Therefore, the calculations were made correctly.

Answer:

Solving matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are the specified matrices, X is the desired matrix.

Matrix equations are solved by multiplying both sides of the equation by the appropriate inverse matrices.

For example, to find the matrix X from the equation AX = B, you need to multiply this equation by A -1 on the left.

Therefore, to find the solution of the equation AX = B, you need to find the inverse matrix A -1 and multiply it by the matrix B on the right-hand side of the equation.

Other equations are solved similarly.
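
A short numpy sketch of these three cases; the matrices used here are arbitrary invertible examples, not those of the article.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
C = np.array([[3.0, 1.0], [2.0, 2.0]])

# AX = B   ->  X = A^(-1) B      (multiply by the inverse on the left)
X1 = np.linalg.inv(A) @ B
# XA = B   ->  X = B A^(-1)      (multiply by the inverse on the right)
X2 = B @ np.linalg.inv(A)
# AXB = C  ->  X = A^(-1) C B^(-1)
X3 = np.linalg.inv(A) @ C @ np.linalg.inv(B)

# verify that each X actually satisfies its equation
print(np.allclose(A @ X1, B), np.allclose(X2 @ A, B), np.allclose(A @ X3 @ B, C))
```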

Example 2

Solve the equation AX = B if

Solution: Since the inverse matrix A -1 was found in Example 1, the solution is X = A -1 B:

Matrix method in economic analysis

Along with other methods, matrix methods are also used in economic analysis. These methods are based on linear and vector-matrix algebra and are applied to the analysis of complex and multidimensional economic phenomena. Most often, these methods are used when a comparative assessment of the performance of organizations and their structural divisions is needed.

In the process of applying matrix analysis methods, several stages can be distinguished.

At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table whose rows correspond to the numbers of the compared systems (i = 1, 2, ..., n) and whose columns correspond to the numbers of the indicators (j = 1, 2, ..., m).

At the second stage, for each column the largest of the available indicator values is identified and taken as one.

After this, all values in this column are divided by the largest value, and a matrix of standardized coefficients is formed.

At the third stage, all components of the matrix are squared. If the indicators have different significance, each matrix indicator is assigned a weight coefficient k, whose value is determined by expert judgment.

At the last, fourth stage, the resulting rating values R j are ranked in ascending or descending order.

The matrix methods outlined above should be used, for example, in the comparative analysis of various investment projects, as well as in assessing other economic indicators of organizations.
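
A possible sketch of the four stages in Python. The data, the weights and the final formula (the rating of each organization taken as the square root of the weighted sum of its squared standardized coefficients) are assumptions for illustration, since the text does not fix them.

```python
import numpy as np

# Rows: organizations (i = 1..n); columns: economic indicators (j = 1..m).
# The data and weights below are made-up illustrations.
data = np.array([[3.0, 120.0, 0.8],
                 [5.0,  90.0, 0.9],
                 [4.0, 150.0, 0.7]])
weights = np.array([1.0, 2.0, 1.5])        # expert weight coefficients k (assumed)

# Stage 2: divide each column by its largest value -> standardized coefficients in [0, 1].
standardized = data / data.max(axis=0)

# Stage 3: square the standardized coefficients and apply the weights.
weighted_squares = weights * standardized ** 2

# Stage 4: one common convention is R_i = sqrt of the row sum; then rank the organizations.
ratings = np.sqrt(weighted_squares.sum(axis=1))
print(np.argsort(-ratings) + 1)            # organization numbers, best first: [3 2 1] here
```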

A system of m linear equations with n unknowns is a system of the form

where a ij and b i (i = 1, ..., m; j = 1, ..., n) are known numbers and x 1, ..., x n are the unknowns. In a coefficient a ij, the first index i denotes the number of the equation, and the second index j the number of the unknown that this coefficient multiplies.

We write the coefficients of the unknowns in the form of a matrix, which we call the matrix of the system.

The numbers on the right-hand sides of the equations, b 1, ..., b m, are called free terms.

A collection of n numbers c 1, ..., c n is called a solution of the system if each equation of the system becomes an equality after substituting the numbers c 1, ..., c n for the corresponding unknowns x 1, ..., x n.

Our task is to find solutions of the system. Three situations may arise: the system has no solutions, has exactly one solution, or has infinitely many solutions.

A system of linear equations that has at least one solution is called consistent. Otherwise, i.e. if the system has no solutions, it is called inconsistent.

Let's consider ways to find solutions to the system.


MATRIX METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

Matrices make it possible to write a system of linear equations compactly. Let a system of 3 equations with three unknowns be given:

Consider the matrix of the system and the column matrices of the unknowns and of the free terms:

Let's find the product

i.e. as a result of the product we obtain the left-hand sides of the equations of the system. Then, using the definition of matrix equality, this system can be written in the form

or, more briefly, AX = B.

Here the matrices A and B are known, and the matrix X is unknown. We need to find it, since its elements are the solution of the system. This equation is called a matrix equation.

Let the determinant of the matrix be different from zero, |A| ≠ 0. Then the matrix equation is solved as follows. Multiply both sides of the equation on the left by the matrix A -1, the inverse of the matrix A: A -1 (AX) = A -1 B. Since A -1 A = E and EX = X, we obtain the solution of the matrix equation in the form X = A -1 B.

Note that since the inverse matrix can only be found for square matrices, the matrix method can only solve systems in which the number of equations coincides with the number of unknowns. A matrix form of the system can also be written when the number of equations is not equal to the number of unknowns, but then the matrix A is not square and a solution of the form X = A -1 B cannot be found in this way.
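
A tiny numpy illustration of X = A -1 B for a square system with a nonzero determinant; the numbers are arbitrary, not taken from the examples below.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
B = np.array([[8.0], [7.0], [9.0]])

# the matrix method applies only to a square A with a nonzero determinant
if A.shape[0] == A.shape[1] and not np.isclose(np.linalg.det(A), 0.0):
    X = np.linalg.inv(A) @ B              # X = A^(-1) B
    print(X.ravel())                      # [1. 2. 3.]
    print(np.allclose(A @ X, B))          # True: the solution satisfies the system
```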

Examples. Solve systems of equations.

CRAMER'S RULE

Consider a system of 3 linear equations with three unknowns:

The third-order determinant corresponding to the matrix of the system, i.e. composed of the coefficients of the unknowns, is called the determinant of the system.

Let's compose three more determinants as follows: replace in turn the 1st, 2nd and 3rd columns of the determinant Δ with the column of free terms.

Then we can prove the following result.

Theorem (Cramer's rule). If the determinant of the system Δ ≠ 0, then the system under consideration has one and only one solution, and x 1 = Δ 1 /Δ, x 2 = Δ 2 /Δ, x 3 = Δ 3 /Δ.

Proof. Consider the system of 3 equations with three unknowns. Multiply the 1st equation of the system by the cofactor A 11 of the element a 11, the 2nd equation by A 21, and the 3rd by A 31:

Let's add these equations:

Let's look at each of the brackets and the right-hand side of this equation. By the theorem on the expansion of a determinant along the elements of the 1st column, the coefficient of x 1 equals Δ.

Similarly, it can be shown that the coefficients of x 2 and x 3 are equal to zero.

Finally, it is easy to notice that the right-hand side equals Δ 1.

Thus, we obtain the equality Δ·x 1 = Δ 1.

Hence, x 1 = Δ 1 /Δ.

The equalities x 2 = Δ 2 /Δ and x 3 = Δ 3 /Δ are derived similarly, from which the statement of the theorem follows.

Thus, if the determinant of the system Δ ≠ 0, then the system has a unique solution, and conversely. If the determinant of the system is equal to zero, then the system either has an infinite number of solutions or has no solutions, i.e. is inconsistent.
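
Cramer's rule translates almost literally into code; below is a small sketch with illustrative numbers.

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("determinant is zero: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace the i-th column with the free terms
        x[i] = np.linalg.det(Ai) / delta
    return x

print(cramer([[1, 2, 1], [2, 1, 1], [1, 1, 2]], [8, 7, 9]))   # [1. 2. 3.]
```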

Examples. Solve system of equations


GAUSS METHOD

The previously discussed methods can be used to solve only those systems in which the number of equations coincides with the number of unknowns and the determinant of the system is different from zero. The Gauss method is more universal and is suitable for systems with any number of equations. It consists in the successive elimination of unknowns from the equations of the system.

Consider again a system of three equations with three unknowns:


We leave the first equation unchanged, and from the 2nd and 3rd we eliminate the terms containing x 1. To do this, divide the second equation by a 21, multiply it by -a 11, and then add the first equation to it. Similarly, divide the third equation by a 31, multiply it by -a 11, and then add the first equation to it. As a result, the original system takes the form:

Now from the last equation we eliminate the term containing x 2. To do this, divide the third equation by its coefficient of x 2, multiply it by minus the coefficient of x 2 in the second equation, and then add the second equation to it. We obtain the system of equations:

From the last equation it is now easy to find x 3, then from the 2nd equation x 2, and finally from the 1st equation x 1.

When using the Gaussian method, the equations can be swapped if necessary.

Often, instead of writing out a new system of equations at each step, one writes out only the extended (augmented) matrix of the system:

and then bring it to a triangular or diagonal form using elementary transformations.

Elementary transformations of a matrix include the following:

  1. rearranging rows or columns;
  2. multiplying a row by a nonzero number;
  3. adding another row (possibly multiplied by a number) to a given row.
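
A compact sketch of the Gauss method: the extended matrix is reduced to triangular form using row swaps (transformation 1) and the elimination step described above, followed by back substitution. It covers only the case of a unique solution.

```python
import numpy as np

def gauss_solve(A, b):
    """Reduce the extended matrix [A | b] to triangular form, then back-substitute."""
    M = np.hstack([np.array(A, dtype=float), np.array(b, dtype=float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        pivot = np.argmax(np.abs(M[k:, k])) + k
        if np.isclose(M[pivot, k], 0.0):
            raise ValueError("no unique solution (zero pivot column)")
        M[[k, pivot]] = M[[pivot, k]]              # swap rows if necessary
        for i in range(k + 1, n):                  # eliminate x_k from the rows below
            M[i] -= (M[i, k] / M[k, k]) * M[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution: x_n, ..., x_1
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:n]) / M[i, i]
    return x

print(gauss_solve([[1, 2, 1], [2, 1, 1], [1, 1, 2]], [8, 7, 9]))   # [1. 2. 3.]
```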

Examples: Solve systems of equations using the Gauss method.


Thus, the system has an infinite number of solutions.

Consider a system of linear algebraic equations (SLAE) in n unknowns x 1, x 2, ..., x n:

This system in a “collapsed” form can be written as follows:

Σ (j = 1 to n) a ij x j = b i ,   i = 1, 2, ..., n.

In accordance with the matrix multiplication rule, this system of linear equations can be written in matrix form Ax = b, where


Matrix A, whose columns are the coefficients of the corresponding unknowns and whose rows are the coefficients of the unknowns in the corresponding equation, is called the matrix of the system. The column matrix b, whose elements are the right-hand sides of the equations of the system, is called the right-hand side matrix or simply the right side of the system. The column matrix x, whose elements are the unknowns, is called the solution of the system.

A system of linear algebraic equations written in the form Ax = b is a matrix equation.

If the matrix of the system is non-degenerate, then it has an inverse matrix, and the solution of the system Ax = b is given by the formula:

x=A -1 b.

Example. Solve the system by the matrix method.

Solution. Let's find the inverse of the coefficient matrix of the system.

Let's calculate the determinant by expanding along the first row:

Since Δ ≠ 0, A -1 exists.

The inverse matrix was found correctly.

Let's find a solution to the system

Hence, x 1 = 1, x 2 = 2, x 3 = 3 .

Check:

7. The Kronecker-Capelli theorem on the consistency of a system of linear algebraic equations.

A system of linear equations has the form:

a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1,

a 21 x 1 + a 22 x 2 +... + a 2n x n = b 2, (5.1)

... ... ... ... ... ...

a m1 x 1 + a m2 x 2 +... + a mn x n = b m.

Here a i j and b i (i = 1, ..., m; j = 1, ..., n) are given, and x j are unknown real numbers. Using the concept of the product of matrices, we can rewrite system (5.1) in the form:

where A = (a i j) is the matrix composed of the coefficients of the unknowns of system (5.1), called the matrix of the system, and X = (x 1, x 2, ..., x n) T, B = (b 1, b 2, ..., b m) T are column vectors composed, respectively, of the unknowns x j and the free terms b i.

An ordered collection of n real numbers (c 1, c 2, ..., c n) is called a solution of system (5.1) if, as a result of substituting these numbers for the corresponding variables x 1, x 2, ..., x n, each equation of the system turns into an arithmetic identity; in other words, if there exists a vector C = (c 1, c 2, ..., c n) T such that AC = B.

System (5.1) is called consistent, or solvable, if it has at least one solution. The system is called inconsistent, or unsolvable, if it has no solutions.

The matrix A̅, formed by appending the column of free terms to the right of the matrix A, is called the extended matrix of the system.

The question of the consistency of system (5.1) is settled by the following theorem.

Kronecker-Capelli theorem. A system of linear equations is consistent if and only if the rank of the matrix A coincides with the rank of the extended matrix A̅, i.e. r(A) = r(A̅) = r.

For the set M of solutions of system (5.1) there are three possibilities:

1) M = ∅ (in this case the system is inconsistent);

2) M consists of one element, i.e. the system has a unique solution (in this case the system is called definite);

3) M consists of more than one element (then the system is called indefinite). In the third case, system (5.1) has an infinite number of solutions.

The system has a unique solution only if r(A) = n. In this case the number of equations is not less than the number of unknowns (m ≥ n); if m > n, then m - n of the equations are consequences of the others. If r(A) = r(A̅) < n, the system is indefinite and has infinitely many solutions.
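
These three cases can be checked numerically by comparing matrix ranks; below is a small Python sketch (the systems in it are illustrative, not taken from the text).

```python
import numpy as np

def classify(A, b):
    """Apply the Kronecker-Capelli theorem: compare rank(A) with rank of the extended matrix."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b]))
    n = A.shape[1]                               # number of unknowns
    if rA != rAb:
        return "inconsistent (no solutions)"
    return "unique solution" if rA == n else "infinitely many solutions"

print(classify([[1, 1], [1, -1]], [2, 0]))       # unique solution
print(classify([[1, 1], [2, 2]], [2, 4]))        # infinitely many solutions
print(classify([[1, 1], [2, 2]], [2, 5]))        # inconsistent (no solutions)
```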

To solve an arbitrary system of linear equations, one needs to be able to solve systems in which the number of equations equals the number of unknowns, the so-called Cramer-type systems:

a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1,

a 21 x 1 + a 22 x 2 +... + a 2n x n = b 2, (5.3)

... ... ... ... ... ...

a n1 x 1 + a n2 x 2 +... + a nn x n = b n .

Systems (5.3) are solved in one of the following ways: 1) the Gauss method, or the method of eliminating unknowns; 2) according to Cramer's formulas; 3) matrix method.

Example 2.12. Investigate the system of equations and solve it if it is consistent:

5x 1 - x 2 + 2x 3 + x 4 = 7,

2x 1 + x 2 + 4x 3 - 2x 4 = 1,

x 1 - 3x 2 - 6x 3 + 5x 4 = 0.

Solution. We write out the extended matrix of the system:


Let's calculate the rank of the main matrix of the system. It is clear that, for example, the second-order minor in the upper left corner equals 7 ≠ 0; the third-order minors containing it are equal to zero:

Consequently, the rank of the main matrix of the system is 2, i.e. r(A) = 2. To calculate the rank of the extended matrix A̅, consider the bordering minor:

this means that the rank of the extended matrix is r(A̅) = 3. Since r(A) ≠ r(A̅), the system is inconsistent.
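
For this particular system, the rank computation can be verified numerically, for example with numpy:

```python
import numpy as np

A = np.array([[5, -1, 2, 1],
              [2, 1, 4, -2],
              [1, -3, -6, 5]], dtype=float)
b = np.array([[7], [1], [0]], dtype=float)

print(np.linalg.matrix_rank(A))                     # 2
print(np.linalg.matrix_rank(np.hstack([A, b])))     # 3 -> the ranks differ, so the system is inconsistent
```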

The matrix method for solving SLAEs is applied to systems in which the number of equations equals the number of unknowns. The method is best used for solving low-order systems. The matrix method for solving systems of linear equations is based on the properties of matrix multiplication.

This method, in other words the inverse matrix method, is so called because the solution reduces to an ordinary matrix equation, to solve which one needs to find the inverse matrix.

The matrix method for solving an SLAE whose determinant is greater or less than zero (i.e. nonzero) is as follows:

Suppose there is an SLE (system of linear equations) with n unknowns (over an arbitrary field):

This means that it can be easily converted into matrix form:

AX = B, where A is the main matrix of the system, and B and X are the columns of free terms and of solutions of the system, respectively:

Let's multiply this matrix equation on the left by A -1, the inverse of the matrix A: A -1 (AX) = A -1 B.

Since A -1 A = E, it follows that X = A -1 B. The right-hand side of this equation gives the solution column of the initial system. The condition for the applicability of the matrix method is the non-degeneracy of the matrix A. A necessary and sufficient condition for this is that the determinant of the matrix A be nonzero:

detA≠0.

For a homogeneous system of linear equations, i.e. when the vector B = 0, the opposite rule holds: the system AX = 0 has a non-trivial (i.e. nonzero) solution only when det A = 0. This connection between the solutions of homogeneous and inhomogeneous systems of linear equations is called the Fredholm alternative.
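
A quick numerical illustration of this alternative for a homogeneous system, using an arbitrary singular matrix (not taken from the article): a non-trivial solution of AX = 0 exists exactly when det A = 0, and one such solution can be read off the SVD.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],         # second row is a multiple of the first -> det A = 0
              [1.0, 0.0, 1.0]])

print(np.isclose(np.linalg.det(A), 0.0))        # True: only then does Ax = 0 have x != 0

# A non-trivial solution is any null-space vector; take the right singular vector
# corresponding to the (near-)zero singular value.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]
print(np.allclose(A @ x, 0.0))                  # True: x is a non-trivial solution of Ax = 0
```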

Thus, the solution of the SLAE by the matrix method is carried out according to the formula X = A -1 B; in other words, the solution of the SLAE is found using the inverse matrix A -1.

It is known that a square matrix A of order n by n has an inverse matrix A -1 only if its determinant is nonzero. Thus, a system of n linear algebraic equations with n unknowns can be solved by the matrix method only if the determinant of the main matrix of the system is not equal to zero.

Despite the limitations on the applicability of this method and the computational difficulties for large coefficient values and high-order systems, the method is easy to implement on a computer.

An example of solving a non-homogeneous SLAE.

First, let's check that the determinant of the coefficient matrix of the SLAE is not equal to zero.

Now we find the cofactor matrix, transpose it (obtaining the adjugate matrix) and substitute it into the formula for the inverse matrix.

Substitute the variables into the formula:

Now we find the unknowns by multiplying the inverse matrix and the column of free terms.

So, x=2; y=1; z=4.

When converting an SLAE from its usual form to matrix form, be careful with the order of the unknown variables in the equations of the system. For example:

It CANNOT be written as:

It is necessary first to order the unknown variables in each equation of the system and only then proceed to the matrix notation:

In addition, you need to be careful with the notation of the unknown variables: instead of x 1, x 2, …, x n there may be other letters. For example:

in matrix form we write it like this:

The matrix method is suitable for solving systems of linear equations in which the number of equations coincides with the number of unknown variables and the determinant of the main matrix of the system is not equal to zero. When there are more than 3 equations in a system, finding the inverse matrix requires more computational effort, so in this case it is advisable to use the Gaussian method instead.
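
In line with this remark, in practice one usually avoids forming A -1 explicitly: a library solver based on Gaussian elimination does the same job faster and more stably. A small illustrative comparison (the array size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))         # a random square matrix is almost surely non-singular
b = rng.standard_normal(n)

x_inv = np.linalg.inv(A) @ b            # matrix (inverse) method: explicit A^(-1)
x_solve = np.linalg.solve(A, b)         # Gaussian-elimination style solver, no explicit inverse

print(np.allclose(x_inv, x_solve))      # True up to rounding, but solve is faster and more stable
```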