Determinant
The title of this article is ambiguous. For other meanings, see Determinant (disambiguation).
In linear algebra, the determinant is a number (a scalar) that is assigned to a square matrix and can be calculated from its entries. It indicates how volumes change under the linear mapping described by the matrix and is a useful tool in solving systems of linear equations. More generally, a determinant can be assigned to every linear self-mapping (endomorphism). Common notations for the determinant of a square matrix A are det(A), det A, or |A|.
For example, the determinant of the 2×2 matrix A with rows (a, b) and (c, d) can be calculated with the formula

det A = a·d − b·c.
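This 2×2 formula can be transcribed directly in code (a minimal illustrative sketch; the function name det2 is not from the article):

```python
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix with rows (a, b) and (c, d): a*d - b*c
    return a * d - b * c

print(det2(1, 2, 3, 4))  # 1*4 - 2*3 = -2
```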
With the help of determinants one can, for example, determine whether a linear system of equations is uniquely solvable, and one can state the solution explicitly with the help of Cramer's rule. The system of equations is uniquely solvable if and only if the determinant of the coefficient matrix is nonzero. Accordingly, a square matrix with entries from a field is invertible if and only if its determinant is nonzero.
If one writes n vectors in R^n as the columns of a square matrix, the determinant of this matrix can be formed. If the vectors form a basis, the sign of the determinant can be used to define the orientation of Euclidean space. The absolute value of this determinant also equals the volume of the n-parallelotope (parallelepiped) spanned by these vectors.
If A is the linear mapping represented by the matrix and M is any measurable subset of R^n, then the n-dimensional volume of A(M) is given by

vol(A(M)) = |det A| · vol(M).
The concept of the determinant is of interest for n×n matrices with n ≥ 2. For n = 1 it degenerates to the trivial case det(a) = a: a linear system of equations consists, in the case n = 1, of a single equation a·x = b. The solvability criterion and solution strategy for this equation are well known: if a ≠ 0, set x = b/a.
The 2x2 determinant is equal to the oriented area of the parallelogram spanned by its column vectors
Definition
There are several ways to define the determinant (see below). The most common is the following recursive definition.
Expansion of the determinant along a column or row:

For n = 2: det A = a_11·a_22 − a_12·a_21.

For n = 3, expansion along the 1st column:

det A = a_11 · det(a_22 a_23; a_32 a_33) − a_21 · det(a_12 a_13; a_32 a_33) + a_31 · det(a_12 a_13; a_22 a_23)

(rows of the 2×2 submatrices separated by semicolons). Correspondingly for n = 4, ...
Laplace's expansion theorem (see below) says:

- A determinant may be expanded along any column or row, as long as the checkerboard sign pattern (+ − + ... in the first row, − + − ... in the second, and so on) is observed.

Formally, it can be written like this:

det A = Σ_{i=1}^{n} (−1)^{i+j} · a_ij · det A_ij   (expansion along the j-th column)

det A = Σ_{j=1}^{n} (−1)^{i+j} · a_ij · det A_ij   (expansion along the i-th row),

where A_ij is the submatrix of A obtained by deleting the i-th row and j-th column.
Example:
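The recursive definition above can be sketched as a short routine that expands along the first row; this is a minimal illustration (the function name laplace_det is hypothetical), not an efficient method:

```python
def laplace_det(M):
    # Determinant by Laplace (cofactor) expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * laplace_det(minor)
    return total

print(laplace_det([[1, 2], [3, 4]]))  # -2
```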
Properties (summary, see below)
- det E = 1 for the unit matrix E.
- det(A^T) = det A, where A^T is the transposed matrix of A.
- For square matrices A and B of the same size, the determinant product theorem applies: det(A·B) = det A · det B.
- det(c·A) = c^n · det A for an n×n matrix A and a number c.
- For a triangular matrix A, det A is the product of the main diagonal elements: det A = a_11 · a_22 · ... · a_nn.
- If a row or column consists of zeros, the determinant is 0.
- If two columns (rows) are equal, the determinant is 0.
- If you swap two columns (rows), a determinant changes its sign.
- If a_1, ..., a_n are the column vectors (row vectors) of a matrix and c is a number, then the following applies:
a1) det(a_1, ..., c·a_k, ..., a_n) = c · det(a_1, ..., a_k, ..., a_n)
a2) det(a_1, ..., a_k + b, ..., a_n) = det(a_1, ..., a_k, ..., a_n) + det(a_1, ..., b, ..., a_n),
correspondingly for the other column vectors (row vectors).
b) det(a_1, ..., a_n) is the (oriented) volume (area in the case n = 2) of the parallelotope (parallelogram) spanned by the vectors a_1, ..., a_n.
- Adding a multiple of one column (row) to another column (row) does not change a determinant. One can therefore transform a determinant into a triangular determinant with a simplified Gaussian algorithm and use property 6. Note property 9 and property 10 a2).
- Sarrus' rule applies only to 3×3 determinants:
Example, application of rules 11, 10, 8:
Rule of Sarrus
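Sarrus' rule for the 3×3 case adds the three "falling" diagonal products and subtracts the three "rising" ones; a minimal sketch (the function name sarrus is illustrative):

```python
def sarrus(M):
    # Sarrus' rule: valid only for 3x3 matrices.
    return (M[0][0] * M[1][1] * M[2][2]
            + M[0][1] * M[1][2] * M[2][0]
            + M[0][2] * M[1][0] * M[2][1]
            - M[0][2] * M[1][1] * M[2][0]
            - M[0][0] * M[1][2] * M[2][1]
            - M[0][1] * M[1][0] * M[2][2])

print(sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```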
Axiomatic description
A mapping det from the space of square matrices to the underlying field maps each matrix to its determinant if it satisfies the following three properties (axioms according to Karl Weierstrass), where a square matrix is written column-wise as A = (v_1, ..., v_n):
- It is multilinear, i.e. linear in each column:

For all v_1, ..., v_n, w the following holds:

det(v_1, ..., v_i + w, ..., v_n) = det(v_1, ..., v_i, ..., v_n) + det(v_1, ..., w, ..., v_n).

For all v_1, ..., v_n and all scalars c the following holds:

det(v_1, ..., c·v_i, ..., v_n) = c · det(v_1, ..., v_i, ..., v_n).

- It is alternating, i.e. if two columns contain the same argument, the determinant is equal to 0:

For all v_1, ..., v_n and all i ≠ j with v_i = v_j the following holds:

det(v_1, ..., v_i, ..., v_j, ..., v_n) = 0.

It follows that the sign changes when two columns are swapped:

For all v_1, ..., v_n and all i ≠ j the following holds:

det(v_1, ..., v_i, ..., v_j, ..., v_n) = −det(v_1, ..., v_j, ..., v_i, ..., v_n).

This consequence is often used to define "alternating". In general, however, it is not equivalent to the property above. If alternating is defined in the second way, there is no unique determinant form if the field over which the vector space is formed has an element x different from 0 with x + x = 0 (characteristic 2).

- It is normalised, i.e. the unit matrix has determinant 1: det E = 1.
It can be proved (and Karl Weierstrass did so in 1864, or even earlier) that there is one and only one such normalised alternating multilinear form on the algebra of n×n matrices over the underlying field, namely this determinant function (Weierstrass' characterisation of the determinant). The geometric interpretation already mentioned (volume property and orientation) also follows from it.
Leibniz formula
For an n×n matrix, the determinant was defined by Gottfried Wilhelm Leibniz by what is known today as the Leibniz formula:

det A = Σ_{σ ∈ S_n} sgn(σ) · a_{1,σ(1)} · a_{2,σ(2)} · ... · a_{n,σ(n)}

The sum runs over all permutations σ of the symmetric group S_n of degree n. sgn(σ) denotes the signum of the permutation σ (+1 if σ is an even permutation and −1 if it is odd), and σ(i) is the value of the permutation σ at the position i.

This formula contains n! summands and therefore becomes more unwieldy the larger n is. However, it is well suited for proving statements about determinants. For example, the continuity of the determinant function can be seen with its help.

An alternative notation of the Leibniz formula uses the Levi-Civita symbol and Einstein's summation convention:

det A = ε_{j_1 j_2 ... j_n} · a_{1 j_1} · a_{2 j_2} · ... · a_{n j_n}
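For small n, the Leibniz formula can be evaluated literally by summing over all n! permutations; a minimal sketch (the function names perm_sign and leibniz_det are illustrative):

```python
from itertools import permutations

def perm_sign(p):
    # Signum of a permutation given as a tuple: +1 if even, -1 if odd.
    p = list(p)
    s = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # each swap toggles the sign
            s = -s
    return s

def leibniz_det(M):
    # Leibniz formula: sum over all sigma of sgn(sigma) * prod_i M[i][sigma(i)].
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += perm_sign(sigma) * prod
    return total

print(leibniz_det([[1, 2], [3, 4]]))  # -2
```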
Determinant of an endomorphism
Since similar matrices have the same determinant, the definition of the determinant of square matrices can be transferred to the linear self-mappings (endomorphisms) represented by these matrices:
The determinant of a linear mapping f: V → V of a vector space V into itself is the determinant of a representation matrix of f with respect to a basis of V. It is independent of the choice of basis.

Here V can be any finite-dimensional vector space over any field K. More generally, one can also consider a commutative ring R with identity element and a free module of rank n over R.
The definition can be formulated without using matrices as follows: Let ω be a determinant function and let b_1, ..., b_n be a basis of V. Then det f is determined by f*ω = (det f) · ω, where f* is the pullback of multilinear forms by f, and the following holds:

det f = ω(f(b_1), ..., f(b_n)) / ω(b_1, ..., b_n)

This value is independent of the choice of ω and of the basis. Geometrically interpreted, the volume of the parallelepiped spanned by f(b_1), ..., f(b_n) is obtained from the volume of the parallelepiped spanned by b_1, ..., b_n by multiplying it by the factor det f.
An alternative definition is the following: Let n be the dimension of V and Λ^n V the n-th exterior power of V. Then there is a uniquely determined linear mapping Λ^n f: Λ^n V → Λ^n V which is given by

Λ^n f (v_1 ∧ ... ∧ v_n) = f(v_1) ∧ ... ∧ f(v_n).

(This mapping Λ^n f is obtained by universal construction as the continuation of f onto the exterior algebra ΛV, restricted to the component of degree n.)

Since the vector space Λ^n V is one-dimensional, Λ^n f is simply multiplication by a field element. This field element is det f. It therefore holds that

Λ^n f (v_1 ∧ ... ∧ v_n) = (det f) · v_1 ∧ ... ∧ v_n.
Further possibilities for calculation
Spat product
If a 3×3 matrix is present, its determinant can also be calculated via the scalar triple product of its column vectors.
Gaussian elimination method for determinant calculation
In general, determinants can be calculated with the Gaussian elimination method using the following rules:
- If A is a triangular matrix, then the product of the main diagonal elements is the determinant of A.
- If B results from A by swapping two rows or columns, then det B = −det A.
- If B results from A by adding a multiple of one row or column to another row or column, then det B = det A.
- If B results from A by multiplying a row or column by a factor c, then det B = c · det A.
Starting with any square matrix, use the last three of these four rules to transform the matrix into an upper triangular matrix, and then calculate the determinant as the product of the diagonal elements.
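The four rules above lead to the standard O(n^3) procedure; a minimal floating-point sketch with partial pivoting (the function name det_gauss is illustrative):

```python
def det_gauss(M):
    # Determinant via Gaussian elimination: track sign flips from row swaps,
    # then multiply the diagonal elements of the resulting triangular matrix.
    A = [row[:] for row in M]          # work on a copy
    n = len(A)
    d = 1.0
    for k in range(n):
        # partial pivoting: pick the largest pivot in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0                 # a zero column => determinant 0
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d                     # a row swap changes the sign
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]  # does not change the determinant
        d *= A[k][k]                   # product of diagonal elements
    return d

print(det_gauss([[2, 1, 0], [1, 2, 1], [0, 1, 2]]))  # 4.0
```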
The determinant calculation by means of the LU decomposition (LR decomposition) A = L·R is also based on this principle. Since both L and R are triangular matrices, their determinants are the products of their diagonal elements, which for L are all normalised to 1. According to the determinant product theorem, the determinant thus results from the relationship

det A = det L · det R = det R,

up to the sign contributed by any row permutations used during the decomposition.
Laplacian development theorem
With Laplace's expansion theorem, one can "expand" the determinant of an n×n matrix along a row or column. The two formulas are

det A = Σ_{i=1}^{n} (−1)^{i+j} · a_ij · det A_ij   (expansion along the j-th column)

det A = Σ_{j=1}^{n} (−1)^{i+j} · a_ij · det A_ij   (expansion along the i-th row),

where A_ij is the (n−1)×(n−1) submatrix of A obtained by deleting the i-th row and j-th column. The product (−1)^{i+j} · det A_ij is called a cofactor of a_ij.
Strictly speaking, the development theorem only gives a procedure for calculating the summands of the Leibniz formula in a certain order. In the process, the determinant is reduced by one dimension with each application. If desired, the procedure can be applied until a scalar results (see above).
Laplace's expansion theorem can be generalised in the following way. Instead of expanding along only one row or column, one can also expand along several rows or columns. The formula for this is

det A = (−1)^{σ(I)} · Σ_J (−1)^{σ(J)} · det A_{I,J} · det A_{I',J'}

with the following designations: I and J are subsets of {1, ..., n} of equal size, and A_{I,J} is the submatrix of A consisting of the rows with the indices from I and the columns with the indices from J; I' and J' denote the complements of I and J; σ(I) is the sum of the indices from I. For the expansion along the rows with indices from I, the sum runs over all J whose number of column indices equals the number of rows expanded along. For the expansion along the columns with indices from J, the sum runs over all I. The number of summands is given by the binomial coefficient C(n, k) with k = |I| = |J|.
Efficiency:

The computational cost of Laplace's expansion theorem for a matrix of dimension n×n is of the order O(n!), whereas the usual methods are only of the order O(n^3) and can sometimes be designed even better (see, for example, the Strassen algorithm). Nevertheless, Laplace's expansion theorem works well for small matrices and for matrices with many zeros.
Other properties
Determinant product theorem
The determinant is a multiplicative mapping in the sense that

det(A·B) = det A · det B   for all n×n matrices A and B.

This means that the mapping det is a group homomorphism from the general linear group GL(n, K) into the unit group K* of the field. The kernel of this mapping is the special linear group SL(n, K).
More generally, the Binet-Cauchy theorem applies to the determinant of a square matrix that is the product of two (not necessarily square) matrices. Even more generally, a formula for calculating a minor of order k of a product of two matrices follows as a direct consequence of the Binet-Cauchy theorem. If A is an m×n matrix and B is an n×p matrix, and if I ⊆ {1, ..., m} and J ⊆ {1, ..., p} with |I| = |J| = k, then, with the same notation as for the generalised expansion theorem,

det (A·B)_{I,J} = Σ_K det A_{I,K} · det B_{K,J},

where the sum runs over all k-element subsets K of {1, ..., n}. The case m = n = p yields Binet-Cauchy's theorem (which becomes the ordinary determinant product theorem for k = m), and the special case k = 1 yields the formula for ordinary matrix multiplication.
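The product theorem can be checked numerically for 2×2 matrices (a minimal sketch with illustrative helper names):

```python
def det2(M):
    # Determinant of a 2x2 matrix given as nested lists.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    # Product of two matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
# det(A*B) and det(A)*det(B) agree
print(det2(matmul(A, B)), det2(A) * det2(B))  # 10 10
```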
Existence of the inverse matrix
→ Main article: Regular matrix
A matrix A is invertible (i.e. regular) if and only if det A is a unit of the underlying ring (i.e. det A ≠ 0 for fields). If A is invertible, then the determinant of the inverse is det(A^{-1}) = 1 / det A.
Similar matrices
→ Main article: Similarity (matrix)
If A and B are similar, that is, if there exists an invertible matrix X such that B = X^{-1}·A·X, then their determinants coincide, because

det B = det(X^{-1}·A·X) = det X^{-1} · det A · det X = (det X)^{-1} · det A · det X = det A.

Therefore, independently of a coordinate representation, one can define the determinant of a linear self-mapping f: V → V (where V is a finite-dimensional vector space) by choosing a basis, describing the mapping f by a matrix relative to this basis, and taking the determinant of this matrix. The result is independent of the chosen basis.
There are matrices that have the same determinant but are not similar.
Block matrices
For the determinant of a block matrix

M = ( A  B ; C  D )

with square blocks A and D, one can, under certain conditions, give formulae that exploit the block structure. For B = 0 or C = 0, it follows from the generalised expansion theorem that

det M = det A · det D.

This formula is also called the box rule (Kästchensatz).

If A is invertible, it follows from the decomposition

M = ( A  0 ; C  E ) · ( E  A^{-1}·B ; 0  D − C·A^{-1}·B )

the formula

det M = det A · det(D − C·A^{-1}·B).

If D is invertible, then analogously it can be formulated:

det M = det D · det(A − B·D^{-1}·C).

In the special case that all four blocks have the same size and commute pairwise, this results, with the help of the determinant product theorem, in

det M = det(A·D − B·C).

Here, let R denote a commutative subring of the ring of all n×n matrices with entries from the field K such that A, B, C, D ∈ R (for example, the subring generated by these four matrices), and let det be the corresponding mapping that assigns its determinant to a square matrix with entries from R. This formula also holds if A is not invertible, and it generalises to matrices with entries from R.
Eigenvalues and characteristic polynomial
If

χ_A(λ) = det(λ·E − A)

is the characteristic polynomial of the n×n matrix A,

χ_A(λ) = λ^n + c_{n−1}·λ^{n−1} + ... + c_1·λ + c_0,

then det A = (−1)^n · c_0.

If the characteristic polynomial decomposes into linear factors (with not necessarily distinct α_i):

χ_A(λ) = (λ − α_1) · (λ − α_2) · ... · (λ − α_n),

then in particular

det A = α_1 · α_2 · ... · α_n.

If λ_1, ..., λ_m are the distinct eigenvalues of the matrix A with r_i-dimensional generalised eigenspaces, then

det A = λ_1^{r_1} · λ_2^{r_2} · ... · λ_m^{r_m}.
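For a 2×2 matrix the eigenvalues can be computed from the characteristic polynomial λ² − tr(A)·λ + det(A), and their product indeed reproduces the determinant (a minimal sketch; the sample matrix is an arbitrary illustration):

```python
import math

A = [[2, 1], [1, 2]]
tr = A[0][0] + A[1][1]                          # trace of A
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant of A
# roots of the characteristic polynomial lambda^2 - tr*lambda + det_A
disc = math.sqrt(tr * tr - 4 * det_A)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
# the product of the eigenvalues equals the determinant
print(lam1, lam2, lam1 * lam2)  # 3.0 1.0 3.0
```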
Continuity and differentiability
The determinant of real square matrices of fixed dimension n is a polynomial function det: R^{n×n} → R, which follows directly from the Leibniz formula. As such, it is continuous and differentiable everywhere. Its total differential at the point A can be represented by Jacobi's formula:

D(det)_A (X) = tr(A^# · X),

where A^# denotes the adjugate (complementary matrix) of A and tr(...) denotes the trace of a matrix. In particular, for invertible A it follows that

D(det)_A (X) = det A · tr(A^{-1}·X),

or, as an approximation formula,

det(A + X) ≈ det A + det A · tr(A^{-1}·X),

if the entries of the matrix X are sufficiently small. The special case where A is equal to the unit matrix E yields

det(E + X) ≈ 1 + tr(X).
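The approximation det(E + X) ≈ 1 + tr(X) can be checked numerically for a small 3×3 perturbation (a minimal sketch; det3 and the sample matrix X are illustrative):

```python
def det3(M):
    # 3x3 determinant by cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

eps = 1e-6
X = [[1.0, 2.0, 0.5], [0.0, 3.0, 1.0], [4.0, 0.0, 2.0]]
# M = E + eps*X, where E is the 3x3 unit matrix
M = [[(1.0 if i == j else 0.0) + eps * X[i][j] for j in range(3)] for i in range(3)]
trace_X = sum(X[i][i] for i in range(3))
# the first-order error should be of order eps^2
error = abs(det3(M) - (1 + eps * trace_X))
print(error < 1e-9)  # True
```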
Permanent
→ Main article: Permanent
The permanent is an "unsigned" analogue of the determinant, but is used much less frequently.
Generalisation
The determinant can also be defined on matrices with entries in a commutative ring with identity. This is done with the help of a certain antisymmetric multilinear mapping: If R is a commutative ring and M = R^n the n-dimensional free R-module, then let

det: M^n → R

be the uniquely determined mapping with the following properties:

- det is R-linear in each of the n arguments.
- det is antisymmetric, i.e. if two of the n arguments are equal, det returns zero.
- det(e_1, ..., e_n) = 1, where e_i is the element of M that has a 1 as its i-th coordinate and zeros otherwise.

A mapping with the first two properties is also called a determinant function, volume form, or alternating n-linear form. The determinant of a matrix is obtained by identifying M^n naturally with the space of square matrices R^{n×n}.
Special determinants
- Wronskian determinant (Wronskian)
- Pfaffian determinant (Pfaffian)
- Vandermonde determinant
- Gram's determinant
- Functional determinant (also called Jacobi determinant)
- Determinant (knot theory)
History
Historically, determinants (Latin determinare, "to delimit", "to determine") and matrices are very closely related, and they remain so in today's understanding. However, the term matrix was coined only more than 200 years after the first thoughts on determinants. Originally, a determinant was considered in connection with systems of linear equations. The determinant "determines" whether the system of equations has a unique solution (this is precisely the case when the determinant is nonzero). The first considerations of this kind for matrices were carried out by Gerolamo Cardano at the end of the 16th century. About a hundred years later, Gottfried Wilhelm Leibniz and Seki Takakazu independently studied determinants of larger systems of linear equations. Seki, who tried to use determinants to give schematic solution formulas for systems of equations, found a rule for the case of three unknowns that corresponded to the later rule of Sarrus.
In the 18th century, determinants became an integral part of the technique for solving systems of linear equations. In connection with his studies on intersections of two algebraic curves, Gabriel Cramer calculated the coefficients of a general conic section
which runs through five given points and established Cramer's rule, which is named after him today. For systems of equations with up to four unknowns, this formula was already used by Colin Maclaurin.
Several well-known mathematicians such as Étienne Bézout, Leonhard Euler, Joseph-Louis Lagrange and Pierre-Simon Laplace were now primarily concerned with the calculation of determinants. An important advance in theory was achieved by Alexandre-Théophile Vandermonde in a work on elimination theory, completed in 1771 and published in 1776. In it, he formulated some fundamental statements about determinants and is therefore considered a founder of the theory of determinants. These results included, for example, the statement that an even number of exchanges of two adjacent columns or rows does not change the sign of the determinant, whereas an odd number of exchanges of adjacent columns or rows changes the sign of the determinant.
During his investigations of binary and ternary quadratic forms, Gauss used the schematic notation of a matrix without yet calling this rectangular array of numbers a matrix. In the process, he defined today's matrix multiplication as a by-product of his investigations and showed the determinant product theorem for certain special cases. Augustin-Louis Cauchy further systematised the theory of the determinant. For example, he introduced the conjugate elements and clearly distinguished between the individual elements of the determinant and between the subdeterminants of different orders. He also formulated and proved theorems about determinants such as the determinant product theorem or its generalisation, the Binet-Cauchy formula. In addition, he made a significant contribution to establishing the term "determinant" for this quantity. Therefore, Augustin-Louis Cauchy can also be considered a founder of the theory of the determinant.
The axiomatic treatment of the determinant as a function of n×n independent variables was first given by Karl Weierstrass in his Berlin lectures (from 1864 at the latest, and possibly even earlier). Ferdinand Georg Frobenius followed up on this in his Berlin lectures of the summer semester of 1874 and was, among other things, probably the first to systematically derive Laplace's expansion theorem from this axiomatics.
Questions and Answers
Q: What is a determinant?
A: A determinant is a scalar (a number) that is assigned to a square matrix and indicates, for example, how the matrix scales volumes and whether it is invertible.
Q: How can the determinant of a matrix be calculated?
A: The determinant of the matrix can be calculated from the numbers in the matrix.
Q: How is the determinant of a matrix written?
A: The determinant of a matrix is written as det(A) or |A| in a formula.
Q: Are there other ways to write out the determinant of a matrix?
A: Yes; the parentheses around the matrix are often omitted, so instead of det([a b; c d]) and |[a b; c d]|, one can simply write det [a b; c d] and |a b; c d|.
Q: What does it mean when we say "scalar"?
A: A scalar is an individual number or quantity that has magnitude but no direction associated with it.
Q: What are square matrices?
A: Square matrices are matrices with an equal number of rows and columns, such as 2x2 or 3x3 matrices.