Norm (mathematics)

This article is about norms on vector spaces. For norms in field theory, see norm (field extension); for the norm of an ideal, see ideal (ring theory).

In mathematics, a norm (from the Latin norma, "guideline") is a mapping that assigns a number to a mathematical object, for example a vector, a matrix, a sequence or a function, that is intended to describe the size of the object in a certain way. The specific meaning of "size" depends on the object under consideration and the norm used; for example, a norm may represent the length of a vector, the largest singular value of a matrix, the variation of a sequence, or the maximum of a function. A norm is denoted by two vertical bars \|\cdot \| to the left and right of the object.

Formally, a norm is a mapping that assigns a non-negative real number to each element of a vector space over the real or complex numbers and has the three properties of definiteness, absolute homogeneity, and subadditivity. A norm may (but need not) be derived from a scalar product. If a vector space is equipped with a norm, one obtains a normed space with important analytic properties, since every norm on a vector space also induces a metric and hence a topology. Two mutually equivalent norms induce the same topology, and on finite-dimensional vector spaces all norms are mutually equivalent.

Norms are studied in particular in linear algebra and functional analysis, but they also play an important role in numerical mathematics.

Sets of constant norm (norm spheres) of the maximum norm (cube surface) and the sum norm (octahedron surface) of vectors in three dimensions

Basic Terms

Definition

A norm is a mapping \|\cdot \| from a vector space V over the field \mathbb {K} of the real or complex numbers into the set of non-negative real numbers {\mathbb {R} }_{0}^{+},

\|\cdot \|\colon V\to {\mathbb {R} }_{0}^{+},\;x\mapsto \|x\|,

which satisfies the following three axioms for all vectors x,y\in V and all scalars \alpha \in \mathbb {K}:

(1) Definiteness:

\|x\|=0\;\Rightarrow \;x=0,

(2) absolute homogeneity:

\|\alpha \cdot x\|=|\alpha |\cdot \|x\|,

(3) Subadditivity or triangle inequality:

\|x+y\|\leq \|x\|+\|y\|.

Here |\cdot | denotes the absolute value of the scalar.

This axiomatic definition of the norm was established by Stefan Banach in his 1922 dissertation. The norm symbol used today was first used by Erhard Schmidt in 1908, as the distance \|x-y\| between vectors x and y.

Example

The standard example of a norm is the Euclidean norm of a vector (x,y) (based at the origin) in the plane \mathbb {R} ^{2},

\|(x,y)\|={\sqrt {x^{2}+y^{2}}},

which corresponds to the intuitive length of the vector. For example, the Euclidean norm of the vector (1,1) is equal to {\sqrt {2}}. Definiteness then means that if the length of a vector is zero, it must be the zero vector. Absolute homogeneity says that if each component of a vector is multiplied by a number, its length changes by a factor of the magnitude of that number. Finally, the triangle inequality states that the length of the sum of two vectors is at most as great as the sum of the two lengths.
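As an illustration, the Euclidean norm and the three axioms can be checked numerically. The following is a minimal Python sketch; the function name `euclidean_norm` is chosen here for illustration only.

```python
import math

def euclidean_norm(v):
    """Euclidean norm of a vector given as a sequence of numbers."""
    return math.sqrt(sum(c * c for c in v))

# Length of the vector (1, 1) is sqrt(2) = 1.414...
print(euclidean_norm((1, 1)))

# The three norm axioms, checked numerically for sample vectors:
x, y, a = (3.0, 4.0), (-1.0, 2.0), -2.5
assert euclidean_norm((0.0, 0.0)) == 0.0                    # converse of definiteness
assert math.isclose(euclidean_norm([a * c for c in x]),
                    abs(a) * euclidean_norm(x))             # absolute homogeneity
assert euclidean_norm([c + d for c, d in zip(x, y)]) <= \
       euclidean_norm(x) + euclidean_norm(y)                # triangle inequality
```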

Basic properties

From absolute homogeneity, setting \alpha =0 directly yields

x=0\;\Rightarrow \;\|x\|=0,

thus the converse direction of definiteness. Therefore, a vector x has norm zero if and only if it is the zero vector. Furthermore, setting \alpha =-1 in absolute homogeneity yields

\|{-x}\|=\|x\| and thus \|x-y\|=\|y-x\|,

thus symmetry with respect to sign reversal. Setting y=-x in the triangle inequality then shows that a norm is always non-negative, that is,

\|x\|\geq 0

holds. Thus, every vector different from the zero vector has a positive norm. Furthermore, the reverse triangle inequality holds for norms:

{\bigl |}\|x\|-\|y\|{\bigr |}\leq \|x-y\|,

which can be shown by applying the triangle inequality to x=(x-y)+y and using the symmetry above. Thus, every norm is a uniformly continuous mapping. Moreover, due to subadditivity and absolute homogeneity, a norm is a sublinear and thus convex mapping, that is, for all t\in [0,1],

\|tx+(1-t)y\|\leq t\|x\|+(1-t)\|y\|.
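These properties can be verified numerically. The following minimal Python sketch (with an illustrative helper `norm` for the Euclidean norm) checks the reverse triangle inequality and the convexity inequality for sample vectors:

```python
import math

def norm(v):
    """Euclidean norm, used here as a concrete example norm."""
    return math.sqrt(sum(c * c for c in v))

x, y = (3.0, -1.0), (0.5, 2.0)

# Reverse triangle inequality: | ||x|| - ||y|| | <= ||x - y||
diff = [a - b for a, b in zip(x, y)]
assert abs(norm(x) - norm(y)) <= norm(diff)

# Convexity: ||t x + (1-t) y|| <= t ||x|| + (1-t) ||y|| for t in [0, 1]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    z = [t * a + (1 - t) * b for a, b in zip(x, y)]
    assert norm(z) <= t * norm(x) + (1 - t) * norm(y) + 1e-12
```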

Norm balls

For a given vector x_{0}\in V and a real number r>0, the set

\{x\in V\colon \|x-x_{0}\|<r\} or \{x\in V\colon \|x-x_{0}\|\leq r\}

is called the open or closed norm ball, respectively, and the set

\{x\in V\colon \|x-x_{0}\|=r\}

is called the norm sphere around x_{0} with radius r. The terms "ball" and "sphere" are to be understood very generally here; for example, a norm ball can also have corners and edges, and it coincides with the ball known from geometry only in the special case of the Euclidean vector norm. If one chooses x_{0}=0 and r=1 in the definition, the resulting sets are called the unit ball and the unit sphere, respectively. Every norm ball or norm sphere arises from the corresponding unit ball or unit sphere by scaling by the factor r and translating by the vector x_{0}. A vector of the unit sphere is called a unit vector; for each vector x\neq 0, the corresponding unit vector is obtained by normalization as {\tfrac {x}{\|x\|}}.

In any case, a norm ball must be a convex set, since otherwise the corresponding mapping would not satisfy the triangle inequality. Furthermore, a norm ball must always be point-symmetric with respect to x_{0} due to absolute homogeneity. Conversely, on finite-dimensional vector spaces a norm can be defined via its unit ball, provided this set is convex, point-symmetric with respect to the origin, closed and bounded, and has the origin in its interior. The corresponding mapping is also called a Minkowski functional or gauge functional. Hermann Minkowski investigated such gauge functionals as early as 1896 in the context of number-theoretic problems.

Induced norms

Main article: Scalar product norm

A norm may, but need not, be derived from a scalar product \langle \cdot ,\cdot \rangle. The norm of a vector x\in V is then defined as

\|x\|={\sqrt {\langle x,x\rangle }},

i.e. the root of the scalar product of the vector with itself. In this case, one speaks of the norm induced by the scalar product or Hilbert norm. Every norm induced by a scalar product satisfies the Cauchy-Schwarz inequality

|\langle x,y\rangle |\leq \|x\|\cdot \|y\|

and is invariant under unitary transformations. According to the Jordan-von Neumann theorem, a norm is induced by a scalar product if and only if it satisfies the parallelogram equation. However, some important norms are not derived from a scalar product; in fact, historically, an essential step in the development of functional analysis was the introduction of norms not based on a scalar product. For every norm, however, there is an associated semi-inner product.
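The Jordan-von Neumann criterion can be illustrated numerically. In the following Python sketch (helper names are illustrative), the parallelogram identity \|x+y\|^{2}+\|x-y\|^{2}=2\|x\|^{2}+2\|y\|^{2} holds for the Euclidean norm, which is induced by the standard scalar product, but fails for the maximum norm:

```python
import math

def norm2(v):      # Euclidean norm, induced by the standard scalar product
    return math.sqrt(sum(c * c for c in v))

def norm_max(v):   # maximum norm, not induced by any scalar product
    return max(abs(c) for c in v)

def parallelogram_holds(norm, x, y, tol=1e-12):
    """Check ||x+y||^2 + ||x-y||^2 == 2 ||x||^2 + 2 ||y||^2."""
    s = [a + b for a, b in zip(x, y)]
    d = [a - b for a, b in zip(x, y)]
    return abs(norm(s)**2 + norm(d)**2 - 2*norm(x)**2 - 2*norm(y)**2) < tol

x, y = (1.0, 0.0), (0.0, 1.0)
print(parallelogram_holds(norm2, x, y))     # True
print(parallelogram_holds(norm_max, x, y))  # False
```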

According to the triangle inequality, the length of the sum of two vectors is at most as great as the sum of their lengths; equality holds exactly when the vectors x and y point in the same direction.

Unit sphere (red) and unit ball (blue) for the Euclidean norm in two dimensions

Norms on finite-dimensional vector spaces

Norms on numbers

Absolute value norm

Main article: Absolute value

The absolute value of a real number z\in \mathbb {R} is a simple example of a norm. The absolute value norm is obtained by omitting the sign of the number, i.e.

\|z\|=|z|={\sqrt {z^{2}}}={\begin{cases}\ \ \,z&{\text{for }}z\geq 0\\-z&{\text{for }}z<0.\end{cases}}

The absolute value of a complex number {\displaystyle z\in \mathbb {C} } is correspondingly given by

\|z\|=|z|={\sqrt {z{\bar {z}}}}={\sqrt {\left(\operatorname {Re} z\right)^{2}+\left(\operatorname {Im} z\right)^{2}}}

where {\bar {z}} is the complex conjugate of z and \operatorname {Re} and \operatorname {Im} denote the real and imaginary parts of the complex number, respectively. The absolute value of a complex number thus corresponds to the length of its vector in the Gaussian number plane.

The absolute value norm is induced by the standard scalar product of two real or complex numbers,

\langle w,z\rangle =w\cdot z for w,z\in \mathbb {R} or \langle w,z\rangle =w\cdot {\bar {z}} for {\displaystyle w,z\in \mathbb {C} }.

Vector norms

In the following, real or complex vectors x\in {\mathbb {K} }^{n} of finite dimension n\in \mathbb {N} are considered. A vector (in the narrower sense) is then a tuple x=(x_{1},\dotsc ,x_{n}) with entries x_{i}\in \mathbb {K} for i=1,\dotsc ,n. For the following definitions, it does not matter whether the vector is a row vector or a column vector. For n=1, all of the following norms reduce to the absolute value norm of the previous section.

Maximum norm

Main article: Maximum norm

The maximum norm, Chebyshev norm or ∞-norm (infinity norm) of a vector is defined as

\|x\|_{\infty }=\max _{i=1,\dotsc ,n}|x_{i}|

and corresponds to the magnitude of the largest component of the vector. The unit sphere of the real maximum norm has the shape of a square in two dimensions, the shape of a cube in three dimensions and the shape of a hypercube in general dimensions.

The maximum norm is not induced by a scalar product. The metric derived from it is called the maximum metric, Chebyshev metric, or, especially in two dimensions, the chessboard metric, since it measures the distance corresponding to the number of steps a king must take in chess to move from one square on the chessboard to another. For example, since the king can move diagonally, the distance between the centers of the two diagonally opposite corner squares of a chessboard in the maximum metric is equal to 7.
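The king-move distance described above can be sketched in a few lines of Python (the function name `chebyshev_distance` is chosen for this example):

```python
def chebyshev_distance(p, q):
    """Maximum metric: number of king moves between squares p and q."""
    return max(abs(a - b) for a, b in zip(p, q))

# Diagonally opposite corner squares of a chessboard,
# e.g. a1 = (0, 0) and h8 = (7, 7):
print(chebyshev_distance((0, 0), (7, 7)))  # 7
```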

The maximum norm is a special case of the product norm

\|x\|_{\infty }=\max _{i=1,\dotsc ,n}\|x_{i}\|_{i}

over the product space V=V_{1}\times \dotsb \times V_{n} of n normed vector spaces (V_{i},\|\cdot \|_{i}) with x=(x_{1},\dotsc ,x_{n}) and x_{i}\in V_{i}.

Euclidean norm

Main article: Euclidean norm

The Euclidean norm or 2-norm of a vector is defined as

\|x\|_{2}={\sqrt {\sum _{i=1}^{n}|x_{i}|^{2}}}

and corresponds to the square root of the sum of the squared absolute values of the components of the vector. For real vectors, the absolute value bars in the definition can be omitted, but not for complex vectors.

The unit sphere of the real Euclidean norm has the shape of a circle in two dimensions, the shape of a spherical surface in three dimensions and the shape of a sphere in general dimensions. In two and three dimensions, the Euclidean norm describes the descriptive length of a vector in the plane and in space, respectively. The Euclidean norm is the only vector norm that is invariant under unitary transformations, for example rotations of the vector around the zero point.

The Euclidean norm is induced by the standard scalar product of two real or complex vectors x,y,

\langle x,y\rangle _{2}=x_{1}y_{1}+x_{2}y_{2}+\dotsb +x_{n}y_{n} or \langle x,y\rangle _{2}=x_{1}{\bar {y}}_{1}+x_{2}{\bar {y}}_{2}+\dotsb +x_{n}{\bar {y}}_{n},

respectively. A vector space equipped with the Euclidean norm is called a Euclidean space. The metric derived from the Euclidean norm is called the Euclidean metric. For example, according to the Pythagorean theorem, the distance between the centers of the two diagonally opposite corner squares of a chessboard in the Euclidean metric is equal to {\sqrt {7^{2}+7^{2}}}=7{\sqrt {2}}\approx 9.9.

Sum norm

Main article: Sum norm

The sum norm, (more precisely) absolute value sum norm, or 1-norm (read: "one-norm") of a vector is defined as

\|x\|_{1}=\sum _{i=1}^{n}|x_{i}|

and corresponds to the sum of the magnitudes of the components of the vector. The unit sphere of the real sum norm has the shape of a square in two dimensions, an octahedron in three dimensions and a cross polytope in general dimensions.

The sum norm is not induced by a scalar product. The metric derived from the sum norm is also called the Manhattan metric or the taxi metric, especially in real two-dimensional space, because it measures the distance between two points like the driving distance on a grid-like city map on which one can only move in vertical and horizontal sections. For example, the distance between the centers of the two diagonally opposite corner squares of a checkerboard in the Manhattan metric is equal to 14.
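The grid-style distance described above can likewise be sketched in Python (the function name `manhattan_distance` is chosen for this example):

```python
def manhattan_distance(p, q):
    """Taxicab metric induced by the sum norm."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Diagonally opposite corner squares of a chessboard,
# a1 = (0, 0) to h8 = (7, 7):
print(manhattan_distance((0, 0), (7, 7)))  # 14
```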

p-norms

Main article: p-norm

In general, for real p with 1\leq p<\infty, the p-norm of a vector is defined as

\|x\|_{p}=\left(\sum _{i=1}^{n}|x_{i}|^{p}\right)^{1/p}.

Thus, for p=1 one obtains the sum norm, for p=2 the Euclidean norm, and in the limit p\to \infty the maximum norm. In the real case, the unit spheres of p-norms have the shape of superellipses (p>2) or subellipses (1\leq p<2) in two dimensions, and of superellipsoids or subellipsoids in three and higher dimensions.

All p-norms, including the maximum norm, satisfy the Minkowski inequality as well as the Hölder inequality. They are monotonically decreasing for increasing p and mutually equivalent. For 1\leq p\leq r\leq \infty, the bounds

\|x\|_{r}\leq \|x\|_{p}\leq n^{{\frac {1}{p}}-{\frac {1}{r}}}\|x\|_{r},

hold, where in the case of the maximum norm the exponent is set to {\tfrac {1}{\infty }}=0. Thus, the p-norms differ from one another by at most a factor of n. The mappings defined analogously to the p-norms for p<1 are not norms, since the resulting unit balls are no longer convex and thus the triangle inequality is violated.
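A general p-norm, its monotonicity in p, and the equivalence bounds above can be checked numerically with a short Python sketch (the function name `p_norm` is chosen for this example):

```python
import math

def p_norm(v, p):
    """p-norm for 1 <= p < inf; p = math.inf gives the maximum norm."""
    if p == math.inf:
        return max(abs(c) for c in v)
    return sum(abs(c)**p for c in v) ** (1.0 / p)

x = (3.0, -4.0, 1.0)
n = len(x)

# Monotonically decreasing for increasing p:
assert p_norm(x, 1) >= p_norm(x, 2) >= p_norm(x, 3) >= p_norm(x, math.inf)

# Equivalence bounds ||x||_r <= ||x||_p <= n^(1/p - 1/r) ||x||_r for p <= r:
p, r = 1, 2
assert p_norm(x, r) <= p_norm(x, p) <= n ** (1/p - 1/r) * p_norm(x, r) + 1e-12
```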

Matrix norms

Main article: Matrix norm

In the following, real or complex matrices A\in {\mathbb {K} }^{m\times n} with m rows and n columns are considered. For matrix norms, in addition to the three norm properties, sometimes the submultiplicativity

\|A\cdot B\|\leq \|A\|\cdot \|B\|

with B\in {\mathbb {K} }^{n\times l} is required as another defining property. If a matrix norm is submultiplicative, then the spectral radius of the matrix (the absolute value of its largest eigenvalue) is at most as large as the norm of the matrix. However, there are matrix norms with the usual norm properties that are not submultiplicative. In most cases, the definition of a matrix norm is based on a vector norm. A matrix norm is called compatible with a vector norm if

\|A\cdot x\|\leq \|A\|\cdot \|x\|

for all x\in {\mathbb {K} }^{n} holds.

Matrix norms over vector norms

By writing all entries of a matrix one below the other, a matrix can also be viewed as a correspondingly long vector in {\mathbb {K} }^{m\cdot n}. Matrix norms can thus be defined directly via vector norms, in particular via the p-norms through

\|A\|=\left(\sum _{i=1}^{m}\sum _{j=1}^{n}|a_{ij}|^{p}\right)^{1/p},

where a_{ij}\in \mathbb {K} are the entries of the matrix. Examples of matrix norms defined in this way are the total norm based on the maximum norm and the Frobenius norm based on the Euclidean norm, both of which are submultiplicative and compatible with the Euclidean norm.
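For the case p=2, this construction gives the Frobenius norm, sketched here in Python for a matrix stored as a list of rows (the function name `frobenius_norm` is chosen for this example):

```python
import math

def frobenius_norm(A):
    """Frobenius norm: Euclidean norm of the matrix read as one long vector."""
    return math.sqrt(sum(abs(a)**2 for row in A for a in row))

A = [[1, 2],
     [3, 4]]
print(frobenius_norm(A))  # sqrt(1 + 4 + 9 + 16) = sqrt(30)
```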

Matrix norms via operator norms

Main article: natural matrix norm

A matrix norm is called induced by a vector norm, or a natural matrix norm, if it is derived from it as an operator norm, that is, if

\|A\|=\max _{x\neq 0}{\frac {\|Ax\|}{\|x\|}}=\max _{\|x\|=1}\|Ax\|.

Descriptively, a matrix norm defined in this way corresponds to the largest possible stretching factor after applying the matrix to a vector. As operator norms, such matrix norms are always submultiplicative and compatible with the vector norm from which they were derived. In fact, among all matrix norms compatible with a vector norm, an operator norm is the one with the smallest value. Examples of matrix norms defined in this way are the row sum norm based on the maximum norm, the spectral norm based on the Euclidean norm, and the column sum norm based on the sum norm.
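For the maximum norm and the sum norm, the induced operator norms have simple closed forms, sketched here in Python for matrices stored as lists of rows (function names are chosen for this example):

```python
def row_sum_norm(A):
    """Operator norm induced by the maximum norm: largest absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

def column_sum_norm(A):
    """Operator norm induced by the sum norm: largest absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1, -2],
     [3,  4]]
print(row_sum_norm(A))     # max(3, 7) = 7
print(column_sum_norm(A))  # max(4, 6) = 6
```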

Matrix norms over singular values

Another way to derive matrix norms from vector norms is to consider a singular value decomposition A=U\Sigma V^{H} of a matrix into a unitary matrix U, a diagonal matrix \Sigma and the adjoint V^{H} of a unitary matrix. The non-negative real entries \sigma _{1},\ldots ,\sigma _{r} of \Sigma are then the singular values of A and equal to the square roots of the eigenvalues of A^{H}A. The singular values are combined into a vector \sigma =(\sigma _{1},\ldots ,\sigma _{r}), whose vector norm is then considered, i.e.

\|A\|=\|\sigma \|.

Examples of matrix norms defined in this way are the Schatten norms, defined via the p-norms of the vector of singular values, and the Ky Fan norms, based on the sum of the largest singular values.

Absolute value norm of a real number

Maximum norm \|\cdot \|_{\infty } in two dimensions

Euclidean norm \|\cdot \|_{2} in two dimensions

The spectral norm of a 2 × 2 matrix corresponds to the largest stretching of the unit circle by the matrix

Unit circles of different p-norms in two dimensions

Sum norm \|\cdot \|_{1} in two dimensions


AlegsaOnline.com - 2020 / 2023 - License CC3