Linear independence
In linear algebra, a family of vectors of a vector space is called linearly independent if the only linear combination of these vectors that yields the zero vector is the one in which all coefficients are zero. Equivalently (unless the family consists only of the zero vector), none of the vectors can be represented as a linear combination of the other vectors in the family.
Otherwise, the vectors are called linearly dependent. In this case, at least one of the vectors (though not necessarily each of them) can be represented as a linear combination of the others.
For example, in the three-dimensional Euclidean space $\mathbb{R}^3$ the vectors $(1, 0, 0)$, $(0, 1, 0)$ and $(0, 0, 1)$ are linearly independent. The vectors $(2, -1, 1)$, $(1, 0, 1)$ and $(3, -1, 2)$, on the other hand, are linearly dependent, because the third vector is the sum of the first two, i.e. the difference between the sum of the first two and the third is the zero vector. The vectors $(1, 2, 3)$, $(2, 4, 6)$ and $(0, 0, 1)$ are also linearly dependent because of
$$2 \cdot (1, 2, 3) - 1 \cdot (2, 4, 6) + 0 \cdot (0, 0, 1) = (0, 0, 0),$$
but here the third vector cannot be represented as a linear combination of the other two.
Linearly independent vectors in ℝ³
Linearly dependent vectors in a plane in ℝ³
Definition
Let $V$ be a vector space over the field $K$ and $I$ an index set. A family $(v_i)_{i \in I}$ of vectors of $V$ indexed by $I$ is called linearly independent if every finite subfamily contained in it is linearly independent.
A finite family $v_{i_1}, \dots, v_{i_n}$ of vectors from $V$ is called linearly independent if the only possible representation of the zero vector as a linear combination
$$0 = a_1 v_{i_1} + a_2 v_{i_2} + \dots + a_n v_{i_n}$$
with coefficients $a_1, \dots, a_n$ from the ground field $K$ is the one in which all coefficients are equal to zero. If, on the other hand, the zero vector can also be generated non-trivially (with at least one coefficient not equal to zero), then the vectors are linearly dependent.
The family $(v_i)_{i \in I}$ is thus linearly dependent if and only if there exist a finite non-empty subset $J \subseteq I$ and coefficients $a_j \in K$ ($j \in J$), at least one of which is not equal to $0$, such that
$$\sum_{j \in J} a_j v_j = 0.$$
The zero vector $0$ here is an element of the vector space $V$; in contrast, the coefficient $0$ is an element of the field $K$.
The term is also used for subsets of a vector space: a subset $S$ of a vector space $V$ is called linearly independent if every finite linear combination of pairwise distinct vectors from $S$ can represent the zero vector only if all coefficients in this linear combination are zero. Note the following difference between families and sets: if, for example, $(v)$ is a linearly independent family, then $(v, v)$ is obviously a linearly dependent family. However, the set $\{v, v\} = \{v\}$ is then linearly independent.
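To see the definition at work computationally, here is a minimal sketch in Python with SymPy (the library choice and setup are ours, not part of the article); it recovers a non-trivial representation of the zero vector for the dependent triple from the introduction:

```python
import sympy as sp

# The dependent vectors (2,-1,1), (1,0,1), (3,-1,2) from the introduction,
# written as the columns of a matrix A.
A = sp.Matrix([[ 2, 1,  3],
               [-1, 0, -1],
               [ 1, 1,  2]])

# A non-zero vector (a_1, a_2, a_3) in the null space of A gives coefficients
# with a_1*v_1 + a_2*v_2 + a_3*v_3 = 0, i.e. a non-trivial combination.
print(A.nullspace())  # [Matrix([[-1], [-1], [1]])], i.e. -v_1 - v_2 + v_3 = 0
```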
Other characterisations and simple properties
- The vectors $v_1, \dots, v_n$ are (unless $n = 1$ and $v_1 = 0$) linearly independent exactly when none of them can be represented as a linear combination of the others.
This statement does not apply in the more general context of modules over rings.
- A variant of this statement is the dependence lemma: if $v_1, \dots, v_n$ are linearly independent and $v_1, \dots, v_n, w$ are linearly dependent, then $w$ can be represented as a linear combination of $v_1, \dots, v_n$.
- If a family of vectors is linearly independent, then each subfamily of this family is also linearly independent. If, on the other hand, a family is linearly dependent, then every family that contains this dependent family is also linearly dependent.
- Elementary transformations of the vectors do not change the linear dependence or the linear independence.
- If the zero vector is one of the vectors (say $v_j = 0$), then they are linearly dependent: the zero vector can be generated by setting all coefficients $a_i = 0$ except for $a_j$, which, as the coefficient of the zero vector $v_j$, may be arbitrary (i.e. in particular also non-zero).
- In a $d$-dimensional space, a family of more than $d$ vectors is always linearly dependent (a consequence of the Steinitz exchange lemma); see the numerical sketch after this list.
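The following sketch illustrates the last property in Python with NumPy (the specific vectors are an arbitrary example of ours): four vectors in $\mathbb{R}^3$ have rank at most 3, so they cannot be linearly independent.

```python
import numpy as np

# Four vectors in R^3: more vectors than the dimension of the space,
# so the family is necessarily linearly dependent.
vectors = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1],
                    [1, 2, 3]], dtype=float)

# The rank equals the maximal number of linearly independent rows;
# here it is at most 3 < 4.
print(np.linalg.matrix_rank(vectors))  # 3
```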
Determination by means of the determinant
If one is given $n$ vectors of an $n$-dimensional vector space as row or column vectors with respect to a fixed basis, one can check their linear independence by combining these $n$ row or column vectors into an $n \times n$ matrix and then calculating its determinant. The vectors are linearly independent if and only if the determinant is not equal to 0.
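A minimal sketch of this test in Python with NumPy (the function name and tolerance are our own choices; since floating-point determinants are rarely exactly zero, one compares against a small tolerance rather than against 0):

```python
import numpy as np

def linearly_independent(vectors, tol=1e-12):
    """Determinant test: n vectors of an n-dimensional space are
    linearly independent iff their matrix has non-zero determinant."""
    A = np.array(vectors, dtype=float)  # each vector becomes one row
    if A.shape[0] != A.shape[1]:
        raise ValueError("the test needs exactly n vectors of dimension n")
    return abs(np.linalg.det(A)) > tol

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(linearly_independent([(1, 2, 3), (2, 4, 6), (0, 0, 1)]))  # False
```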
Basis of a vector space
→ Main article: Basis (vector space)
The concept of linearly independent vectors plays an important role in the definition and handling of vector space bases. A basis of a vector space is a linearly independent generating system. Bases make it possible to calculate with coordinates, especially in finite-dimensional vector spaces.
Examples
Single vector
Let the vector $v$ be an element of the vector space $V$ over a field $K$. Then the single vector $v$ is linearly independent by itself exactly when it is not the zero vector.
This is because it follows from the definition of a vector space that if
$$a v = 0, \quad a \in K,\ v \in V,$$
then either $a = 0$ or $v = 0$.
Vectors in the plane
The vectors $u = (1, 1)$ and $v = (-1, 1)$ are linearly independent in $\mathbb{R}^2$.
Proof: For $a, b \in \mathbb{R}$, suppose that
$$a \cdot (1, 1) + b \cdot (-1, 1) = (0, 0),$$
i.e.
$$(a - b,\ a + b) = (0, 0).$$
Then
$$a - b = 0 \quad \text{and} \quad a + b = 0,$$
so
$$a = b \quad \text{and} \quad a = -b.$$
This system of equations is satisfied only by the solution $a = 0$, $b = 0$ (the so-called trivial solution); i.e. $u$ and $v$ are linearly independent.
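The same system can also be solved symbolically; a small sketch in Python with SymPy (our own illustration, not part of the proof):

```python
import sympy as sp

a, b = sp.symbols('a b')
# a*(1,1) + b*(-1,1) = (0,0) gives, componentwise, the system below.
solution = sp.solve([sp.Eq(a - b, 0), sp.Eq(a + b, 0)], [a, b])
print(solution)  # {a: 0, b: 0} -- only the trivial solution
```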
Standard basis in n-dimensional space
In the vector space $\mathbb{R}^n$, consider the following elements (the natural or standard basis of $\mathbb{R}^n$):
$$e_1 = (1, 0, 0, \dots, 0), \quad e_2 = (0, 1, 0, \dots, 0), \quad \dots, \quad e_n = (0, 0, \dots, 0, 1).$$
Then the vector family $(e_i)_{i \in \{1, \dots, n\}}$ is linearly independent.
Proof: For $a_1, \dots, a_n \in \mathbb{R}$, suppose that
$$a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0.$$
But then also
$$(a_1, a_2, \dots, a_n) = (0, 0, \dots, 0),$$
and it follows that $a_i = 0$ for all $i \in \{1, \dots, n\}$.
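For a concrete $n$, this can also be double-checked with the determinant criterion from above (a minimal sketch with NumPy; the choice $n = 4$ is arbitrary):

```python
import numpy as np

# The standard basis vectors of R^4 are the rows of the identity matrix;
# its determinant is 1 != 0, so the family (e_1, ..., e_4) is independent.
E = np.eye(4)
print(np.linalg.det(E))  # 1.0
```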
Functions as vectors
Let $V$ be the vector space of all functions $f \colon \mathbb{R} \to \mathbb{R}$. The two functions $\sin$ and $\cos$ in $V$ are linearly independent.
Proof: Let $a, b \in \mathbb{R}$ and suppose that
$$a \sin(x) + b \cos(x) = 0$$
holds for all $x \in \mathbb{R}$. If one differentiates this equation with respect to $x$, one obtains a second equation
$$a \cos(x) - b \sin(x) = 0.$$
By subtracting the first equation from the second, we obtain
$$a \cos(x) - b \sin(x) - a \sin(x) - b \cos(x) = 0.$$
Since this equation must hold for all $x$, and thus in particular for $x = 0$, substituting $x = 0$ yields $a - b = 0$, i.e. $a = b$. Substituting the value $a = b$ calculated in this way back into the first equation yields
$$a \left( \sin(x) + \cos(x) \right) = 0.$$
It follows again (by substituting, e.g., $x = \pi/2$) that $a$, and hence also $b$, must be $0$.
Since the first equation is thus only solvable for $a = 0$ and $b = 0$, the two functions $\sin$ and $\cos$ are linearly independent.
See also: Wronskian determinant
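This criterion can be checked symbolically; a quick sketch in Python with SymPy (our own illustration): the Wronskian of $\sin$ and $\cos$ is non-zero, confirming their linear independence.

```python
import sympy as sp

x = sp.symbols('x')
# Wronskian of sin and cos: det([[sin(x), cos(x)], [cos(x), -sin(x)]])
W = sp.wronskian([sp.sin(x), sp.cos(x)], x)
print(sp.simplify(W))  # -1: non-zero for every x, so sin and cos are independent
```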
Series
Let $V$ be the vector space of all real-valued continuous functions on the open unit interval $(0, 1)$. Then indeed
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n,$$
but nevertheless the functions $1, x, x^2, \dots$ are linearly independent. Linear combinations of powers of $x$ are in fact only polynomials and not general power series; in particular, they remain bounded near $1$, so that $\frac{1}{1 - x}$ cannot be represented as a linear combination of powers.
Rows and columns of a matrix
Another interesting question is whether the rows of a matrix are linearly independent or not. Here the rows are regarded as vectors. If the rows of a square matrix are linearly independent, the matrix is called regular; otherwise it is called singular. The columns of a square matrix are linearly independent exactly when the rows are linearly independent. An example of a sequence of regular matrices: the Hilbert matrices.
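These notions can be sketched numerically in Python with NumPy (the helper name and tolerance are our own choices; note that larger Hilbert matrices are notoriously ill-conditioned, so the floating-point test becomes unreliable as the size grows):

```python
import numpy as np

def is_regular(A, tol=1e-12):
    """A square matrix is regular iff its rows (equivalently, its
    columns) are linearly independent, i.e. iff it has full rank."""
    A = np.asarray(A, dtype=float)
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[0]

# Row rank always equals column rank:
A = np.array([[1., 2.],
              [2., 4.]])                 # second row = 2 * first row
print(is_regular(A), is_regular(A.T))    # False False -- singular

# 4x4 Hilbert matrix H[i][j] = 1 / (i + j + 1): regular
H = np.fromfunction(lambda i, j: 1.0 / (i + j + 1), (4, 4))
print(is_regular(H))                     # True
```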
Rational independence
Real numbers that are linearly independent over the rational numbers as coefficients are called rationally independent or incommensurable. The numbers $1$ and $\sqrt{2}$ are therefore rationally independent or incommensurable; the numbers $1$, $\sqrt{2}$ and $1 + \sqrt{2}$, on the other hand, are rationally dependent, since $1 \cdot 1 + 1 \cdot \sqrt{2} - 1 \cdot (1 + \sqrt{2}) = 0$.
Generalisations
The definition of linearly independent vectors can be applied analogously to elements of a module. In this context, linearly independent families are also called free (see also: free module).
The notion of linear independence can be further generalised to a consideration of independent sets, see Matroid.