Differential of a function

In calculus, a differential refers to the linear part of the increment of a variable or a function and describes an infinitely small section on the axis of a coordinate system. Historically, the term was at the core of the development of infinitesimal calculus in the 17th and 18th centuries. From the 19th century onwards, analysis was rebuilt in a mathematically rigorous way by Augustin Louis Cauchy and Karl Weierstrass on the basis of the concept of the limit, and the concept of the differential lost its significance for elementary differential and integral calculus.

If there is a functional dependence y = f(x) with a differentiable function f, then the basic relationship between the differential \mathrm dy of the dependent variable and the differential \mathrm dx of the independent variable is

\mathrm dy = f'(x) \mathrm dx,

where f'(x) denotes the derivative of f at the point x. Instead of \mathrm dy one also writes \mathrm df(x) or \mathrm df_x. This relationship can be generalised to functions of several variables using partial derivatives, where it leads to the notion of the total differential.

Differentials are used today in different applications with different meanings and with varying degrees of mathematical rigour. The differentials appearing in standard notations such as \textstyle \int_{a}^{b} f(x)\,\mathrm dx for integrals or \tfrac{\mathrm df}{\mathrm dx} for derivatives are nowadays usually regarded as mere notational components without independent meaning.

A rigorous definition is provided by the theory of differential forms used in differential geometry, where differentials are interpreted as exact 1-forms. A different kind of approach is provided by non-standard analysis, which revives the historical concept of infinitesimal numbers and makes it precise in the sense of modern mathematics.

Classification

In his "Lectures on Differential and Integral Calculus", first published in 1924, Richard Courant writes that the idea of the differential as an infinitely small quantity has no meaning and that it is therefore useless to define the derivative as the quotient of two such quantities, but that one can nevertheless try to define the expression \frac{\mathrm dy}{\mathrm dx} as an actual quotient of two quantities \mathrm dy and \mathrm dx. To do this, one first defines f'(x) as usual as f'(x) := \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} and then, for a fixed x, considers the increment h = \Delta x as an independent variable (denoted h = \mathrm dx). One then defines \mathrm dy = h f'(x), with which one tautologically obtains f'(x) = \frac{\mathrm dy}{\mathrm dx}.

In more modern terminology, the differential at x can be thought of as a linear mapping from the tangent space T_x\mathbb{R} \simeq \mathbb{R} into the real numbers. The "tangent vector" h \in \mathbb{R} \simeq T_x\mathbb{R} is assigned the real number h f'(x), and this linear mapping is by definition the differential \mathrm df(x). Thus \mathrm df(x)(h) = f'(x) h and in particular \mathrm dx(x)(h) = h, from which the relation f'(x) = \frac{\mathrm df(x)}{\mathrm dx(x)} results tautologically.

The differential as linearised increment

If f\colon \mathbb{R} \to \mathbb{R} is a real function of a real variable, a change of the argument by \Delta x from x to x + \Delta x causes a change of the function value from y = f(x) to y + \Delta y = f(x + \Delta x); the following therefore holds for the increment of the function value:

\Delta y = f(x+\Delta x) - f(x).

For example, if f is an (affine) linear function, that is, y = f(x) = m x + b, it follows that \Delta y = m \cdot \Delta x. That is, the increment of the function value in this simple case is directly proportional to the increment of the argument, and the ratio \tfrac{\Delta y}{\Delta x} just equals the constant slope m of f.

For functions whose slope is not constant, the situation is more complicated. If f is differentiable at the point x, then the slope there is given by the derivative f'(x), which is defined as the limit of the difference quotient:

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.

If one now considers, for \Delta x \neq 0, the difference between the difference quotient and the derivative,

\phi (\Delta x):={\frac {f(x+\Delta x)-f(x)}{\Delta x}}-f'(x),

then the increment of the function value satisfies

\Delta y = f'(x) \cdot \Delta x + \phi(\Delta x) \cdot \Delta x.

In this representation, \Delta y is decomposed into a part f'(x) \cdot \Delta x, which depends linearly on \Delta x, and a remainder \phi(\Delta x) \cdot \Delta x, which vanishes to higher than linear order, in the sense that \lim_{h \to 0} \phi(h) = 0. The linear part of the increment, which for small values of \Delta x is therefore generally a good approximation of \Delta y, is called the differential of f and is denoted by \mathrm dy.
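This decomposition can be checked numerically. The following sketch is an illustrative example (not from the text); f = sin is assumed because its derivative cos is known in closed form. It prints the exact increment \Delta y, the differential \mathrm dy = f'(x)\Delta x, and the residual \phi(\Delta x) for shrinking \Delta x, showing that \phi vanishes in the limit:

```python
import math

def f(x):
    return math.sin(x)

def fprime(x):
    return math.cos(x)

x = 1.0
for dx in (0.1, 0.01, 0.001):
    delta_y = f(x + dx) - f(x)      # exact increment Delta y
    dy = fprime(x) * dx             # differential: the linear part
    phi = delta_y / dx - fprime(x)  # residual phi(Delta x), -> 0 as dx -> 0
    print(f"dx={dx:<6} Delta y={delta_y:.8f} dy={dy:.8f} phi={phi:.8f}")
```

For dx = 0.001 the residual is already of order 10^{-4}, while the differential matches the increment to about six decimal places.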

The differential \textstyle \mathrm dy as the linear part of the increment \textstyle \Delta y

Definition

Let f\colon D \to \mathbb{R} be a function with domain D \subseteq \mathbb{R}. If f is differentiable at the point x \in D and h \in \mathbb{R}, then

{\displaystyle \mathrm {d} f(x)(h):=f'(x)\cdot h}

is called the differential of f at the point x for the argument increment h. Instead of h one often writes \mathrm dx. If y = f(x), one also writes \mathrm dy instead of \mathrm df(x).

For a fixed x, the differential \mathrm df(x) is thus a linear function which assigns to each argument h \in \mathbb{R} the value f'(x) h \in \mathbb{R}.

For example, for the identity function \mathrm{id} \colon \mathbb{R} \to \mathbb{R}, \mathrm{id}(x) = x, because \mathrm{id}'(x) = 1 one obtains \mathrm dx(h) = \mathrm d(\mathrm{id})(x)(h) = 1 \cdot h = h, and thus in this example \mathrm dy = \mathrm dx.
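The definition says that \mathrm df(x) is literally a function h \mapsto f'(x) \cdot h. A minimal sketch, assuming the example f(x) = x^2 with its derivative 2x supplied by hand (the helper name `differential` is hypothetical):

```python
def differential(fprime, x):
    """Return the linear map h |-> f'(x) * h, i.e. the differential df(x)."""
    def df_x(h):
        return fprime(x) * h
    return df_x

# f(x) = x**2 is an assumed example; f'(x) = 2x, so df(3)(h) = 6 * h
df_at_3 = differential(lambda t: 2 * t, 3.0)
print(df_at_3(0.5))   # 6 * 0.5 = 3.0

# For the identity, id'(x) = 1, so dx(x)(h) = h for every h:
dx_at_3 = differential(lambda t: 1.0, 3.0)
print(dx_at_3(0.5))   # 0.5
```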

Higher order differentials

If f\colon D \to \mathbb{R} is n-times differentiable at the point x \in D \subseteq \mathbb{R} (with n \in \mathbb{N}) and \mathrm dx = h \in \mathbb{R}, then

\mathrm d^n y := \mathrm d^n f(x) := f^{(n)}(x) \, \mathrm dx^n

is called the n-th order differential of f at the point x for the argument increment h. In this product, f^{(n)}(x) denotes the n-th derivative of f at the point x and \mathrm dx^n denotes the n-th power of the number \mathrm dx.

The meaning of this definition is explained by Courant as follows. If one thinks of h = \Delta x as fixed, i.e. holding the same value h for different x, then \mathrm dy = h f'(x) is a function of x, from which one can again form the differential \mathrm d^2 y = \mathrm d(h f'(x)) (see figure). The result is the second differential \mathrm d^2 y = \mathrm d^2 f(x); it is obtained from h\left\{f'(x+h) - f'(x)\right\}, the increment of h f'(x), by replacing the term in brackets by its linear part h f''(x), which yields \mathrm d^2 y = h^2 f''(x). In an analogous way one can motivate the definition of differentials of higher order. For example, \textstyle \mathrm d^3 y = h^3 f'''(x) and in general \textstyle \mathrm d^n y = h^n f^{(n)}(x).
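Courant's construction can be illustrated numerically. The sketch below assumes f = exp, so that every derivative is again exp (an illustrative choice, not from the text); it also checks that \mathrm d^2 y is the linear part of the increment of h f'(x):

```python
import math

def nth_differential(x, h, n):
    # d^n y = f^{(n)}(x) * h**n; for f = exp, f^{(n)}(x) = exp(x)
    return math.exp(x) * h ** n

x, h = 0.0, 0.1
for n in (1, 2, 3):
    print(n, nth_differential(x, h, n))

# d^2 y should be the linear part of the increment of h * f'(x):
increment = h * (math.exp(x + h) - math.exp(x))
print(increment, nth_differential(x, h, 2))
```

The increment h\{f'(x+h) - f'(x)\} and the second differential h^2 f''(x) agree up to a term of higher order in h.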

For a fixed x, the differential \mathrm d^n f(x) is again a function (nonlinear for n > 1) which assigns to each argument h \in \mathbb{R} the value f^{(n)}(x) h^n \in \mathbb{R}.

Calculation rules

Regardless of the definition used, the following calculation rules apply to differentials. In the following, x denotes the independent variable, u, v, y, z denote dependent variables or functions, and c denotes any real constant. The derivative of y with respect to x is written \tfrac{\mathrm dy}{\mathrm dx}. The following calculation rules then result from the relationship

\mathrm dy = \frac{\mathrm dy}{\mathrm dx} \mathrm dx

and the rules of differentiation. The following calculation rules for differentials of functions f\colon \mathbb{R} \to \mathbb{R} are to be understood in such a way that the functions obtained after inserting the argument \mathrm dx = h \in \mathbb{R} coincide in each case. For example, the rule \mathrm d(u + v) = \mathrm du + \mathrm dv states that for every x \in \mathbb{R} the identity \mathrm d(u+v)(x) = \mathrm du(x) + \mathrm dv(x) holds, and this means by definition that for all real numbers h the equation (u+v)'(x) \cdot h = u'(x) \cdot h + v'(x) \cdot h should hold.

Constant and constant factor

  • \mathrm d(c) = 0
  • \mathrm d(c y) = c \, \mathrm dy

Addition and subtraction

  • \mathrm d(u + v) = \mathrm du + \mathrm dv
  • \mathrm d(u - v) = \mathrm du - \mathrm dv

Multiplication

also called product rule:

  • \mathrm d(u v) = v\,\mathrm du + u\,\mathrm dv = (u v) \left(\frac {\mathrm du} {u} + \frac {\mathrm dv} {v}\right)

Division

  • {\displaystyle \mathrm {d} \left({\frac {u}{v}}\right)={\frac {v\,\mathrm {d} u-u\,\mathrm {d} v}{v^{2}}}=\left({\frac {u}{v}}\right)\left({\frac {\mathrm {d} u}{u}}-{\frac {\mathrm {d} v}{v}}\right)}

Chain rule

  • If z depends on y and y depends on x, so that \mathrm dz = \frac{\mathrm dz}{\mathrm dy}\,\mathrm dy and \mathrm dy = \frac{\mathrm dy}{\mathrm dx}\,\mathrm dx, then

\mathrm dz = \frac{\mathrm dz}{\mathrm dy} \cdot \frac{\mathrm dy}{\mathrm dx}\; \mathrm dx.

Examples

  • For u = x^2 and v = \sin(x) one has \mathrm du = 2x\,\mathrm dx and \mathrm dv = \cos(x)\,\mathrm dx. It follows that

\mathrm d (u v) = \mathrm d(x^2 \sin(x)) = x^2 \cos(x)\,\mathrm dx + \sin(x) 2x \, \mathrm dx.

  • For y = 1 + x^2 and z = \sqrt{y} one has \mathrm dy = 2x\,\mathrm dx and \mathrm dz = \frac{\mathrm dy}{2\sqrt{y}}, thus

\mathrm d(\sqrt{1+x^2}) = \mathrm dz =\frac{2 x\, \mathrm dx}{2 \sqrt{y}} = \frac{x}{z} \mathrm dx.
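Both worked examples can be verified numerically, using a central difference quotient as a stand-in for the derivative (an assumed check, not part of the original text; the step size is chosen small enough for the comparison):

```python
import math

def deriv(f, x, h=1e-6):
    # central difference quotient as a stand-in for f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7  # arbitrary test point

# d(x^2 sin x) = (x^2 cos x + 2 x sin x) dx
lhs = deriv(lambda t: t * t * math.sin(t), x)
rhs = x * x * math.cos(x) + 2 * x * math.sin(x)
print(lhs, rhs)

# d(sqrt(1 + x^2)) = (x / sqrt(1 + x^2)) dx
lhs2 = deriv(lambda t: math.sqrt(1 + t * t), x)
rhs2 = x / math.sqrt(1 + x * x)
print(lhs2, rhs2)
```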

Extension and variants

Instead of \mathrm d, the following symbols are also found to denote differentials:

  • \partial (introduced by Condorcet and Legendre, then used by Jacobi; it can be read as an old French cursive d, or as a variant of the cursive Cyrillic d) denotes a partial differential.
  • \delta (the lowercase Greek delta) denotes a virtual displacement, the variation of a position vector. It is thus related to the partial differentials with respect to the individual spatial dimensions of the position vector.
  • \delta also denotes an inexact differential.

Total differential

Main article: Total differential

The total differential or complete differential of a differentiable function f(x_1, \ldots, x_n) in n variables is defined by

\mathrm df = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,\mathrm dx_i.

This is again interpretable as the linear part of the increment. A change of the argument by \Delta x causes a change of the function value by \Delta y = f(x + \Delta x) - f(x), which can be decomposed as

\Delta y=\operatorname {grad}f(x)\cdot \Delta x+r(\Delta x),

where the first summand is the scalar product of the two n-component vectors \operatorname{grad} f(x) = (\tfrac{\partial f}{\partial x_1}(x), \ldots, \tfrac{\partial f}{\partial x_n}(x)) and \Delta x, and the remainder vanishes to higher order, i.e. \textstyle \lim_{\Delta x \to 0} \frac{r(\Delta x)}{\|\Delta x\|} = 0.
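This decomposition can be illustrated with a concrete two-variable example (f(x_1, x_2) = x_1^2 x_2 is an assumption chosen for simplicity; the gradient is supplied by hand):

```python
def f(x1, x2):
    return x1 * x1 * x2

def grad_f(x1, x2):
    return (2 * x1 * x2, x1 * x1)   # (df/dx1, df/dx2)

x = (1.0, 2.0)
dx = (0.01, -0.02)

delta_y = f(x[0] + dx[0], x[1] + dx[1]) - f(*x)          # exact increment
total_diff = sum(g * d for g, d in zip(grad_f(*x), dx))  # linear part
r = delta_y - total_diff                                 # higher-order remainder
print(delta_y, total_diff, r)
```

The remainder r is of order \|\Delta x\|^2, while the scalar product captures the linear part of the increment.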

Virtual shift

Main article: Virtual work

A virtual displacement \delta \mathbf{x}_i is a fictitious infinitesimal displacement of the i-th particle that is compatible with the constraints. The dependence on time is not considered. From the total differential \mathrm dg = \sum_{i=1}^{n} \frac{\partial g}{\partial q_i}\,\mathrm dq_i + \frac{\partial g}{\partial t}\,\mathrm dt of a function g(q_1, \dots, q_n, t), the sought virtual change \delta g = \sum_{i=1}^{n} \frac{\partial g}{\partial q_i}\,\delta q_i arises. The term "instantaneous" is thus mathematised.

The s holonomic constraints f_l(\mathbf{x}_1, \dots, \mathbf{x}_N, t) = 0, \quad l = 1, \dots, s, are satisfied by using n = 3N - s so-called generalised coordinates q_k:

\delta {\mathbf {x}}_{{i}}=\sum _{{k=1}}^{{n}}{\frac {\partial {\mathbf {x}}_{{i}}}{\partial q_{{k}}}}\delta q_{{k}}

The holonomic constraints are thus explicitly eliminated by selecting and correspondingly reducing the generalised coordinates.

Stochastic Analysis

In stochastic analysis, the differential notation is often used, for example, to write down stochastic differential equations; it is then always to be understood as a shorthand notation for a corresponding equation of Itō integrals. For example, if (H_t)_{t \geq 0} is a stochastic process that is Itō-integrable with respect to a Wiener process (W_t)_{t \geq 0}, then the process given by

X_t = X_0 + \int_0^t H_s \, \mathrm dW_s, \qquad t \geq 0

is written in differential form as \mathrm dX_t = H_t\,\mathrm dW_t. However, the above calculation rules for differentials have to be modified in the case of stochastic processes with non-vanishing quadratic variation, according to Itō's lemma.
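Reading the differential as shorthand for an Itō sum can be sketched by simulation. The example below is illustrative and assumes H_t = W_t, for which Itō's lemma gives \int_0^T W\,\mathrm dW = W_T^2/2 - T/2; the extra -T/2 is exactly the quadratic-variation correction that the ordinary rules above would miss (step count and seed chosen arbitrarily):

```python
import math
import random

random.seed(0)
T, n = 1.0, 200_000
dt = T / n

W, ito_sum = 0.0, 0.0
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))
    ito_sum += W * dW   # H evaluated at the left endpoint, as in the Ito sum
    W += dW

# Ito's lemma: integral_0^T W dW = W_T**2 / 2 - T / 2
print(ito_sum, W * W / 2 - T / 2)
```

The naive "classical" value W_T^2/2 would be off by about 1/2, while the Itō formula matches the simulated sum closely.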

Today's approach: differentials as 1-forms

Main article: Pfaff's form and differential form

The definition of the differential \mathrm df given above corresponds in today's terminology to the notion of the exact 1-form \mathrm df.

Let U be an open subset of \mathbb{R}^n. A 1-form or Pfaffian form \omega on U assigns to each point p \in U a linear form \omega_p \colon \mathrm T_p U \to \mathbb{R}. Such linear forms are called cotangent vectors; they are elements of the dual space \mathrm T^*_p U of the tangent space \mathrm T_p U. A Pfaffian form \omega is therefore a mapping

\omega\colon U\to\bigsqcup_{p\in U}\mathrm T^*_pU,\quad p\mapsto\omega_p\in\mathrm T^*_pU.

The total differential or exterior derivative \mathrm df of a differentiable function f\colon U \to \mathbb{R} is the Pfaffian form defined as follows: if X \in \mathrm T_p U is a tangent vector, then (\mathrm df)_p(X) = Xf, i.e. the directional derivative of f in the direction of X. Thus, if \gamma\colon (-\varepsilon, \varepsilon) \to U is a path with \gamma(0) = p and \dot\gamma(0) = X, then

(\mathrm df)_p(X)=\left.\frac{\mathrm d}{\mathrm dt}\right|_{t=0}f(\gamma(t)).

Using the gradient and the standard scalar product, the total differential of f can be represented as

(\mathrm df)_p(X) = \langle \operatorname{grad} f, X \rangle.

For n = 1 one obtains in particular the differential \mathrm df of functions f\colon \mathbb{R} \to \mathbb{R}.

Differentials in the integral calculus

Clear explanation

To calculate the area of a region bounded by the graph of a function f, the x-axis and two perpendicular lines x = a and x = b, the area is divided into rectangles of width \Delta x, which are made "infinitely narrow", and height f(x). Their respective area is the "product"

 f(x) \cdot \Delta x ,

and the total area is thus the sum

\int _{a}^{b}f(x)\cdot {\mathrm d}x

where here \mathrm dx is again a finite quantity corresponding to a subdivision of the interval [a, b]. See, more precisely, the mean value theorem of integral calculus: for a continuous function there is a fixed value \xi in the interval [a, b] whose function value, multiplied by the sum of the finite \mathrm dx over the interval [a, b], represents the value of the integral:

\int _{a}^{b}f(x)\cdot {\mathrm d}x=f(\xi )\cdot \int _{a}^{b}{\mathrm d}x

The interval [a, b] of the integral need not be evenly subdivided. The differentials at the various subdivision points can be chosen to be of different sizes, and the choice of subdivision often depends on the nature of the integration problem. Together with the function value within the "differential" interval (or with the maximum and minimum value therein, corresponding to the upper and lower sum), an area is formed; one then makes the limit transition in the sense that one chooses the subdivision of [a, b] finer and finer. The integral is thus a definition of the area bounded by a curve segment.
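The limit transition over ever finer subdivisions, and the mean value theorem \int_a^b f(x)\,\mathrm dx = f(\xi)(b - a), can be sketched numerically (f = sin on [0, \pi] is an assumed example with known integral 2; the midpoint rule stands in for the choice of function value within each "differential" interval):

```python
import math

def riemann_sum(f, a, b, n):
    # midpoint rule on an even subdivision of [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

a, b = 0.0, math.pi   # integral of sin over [0, pi] is exactly 2
for n in (10, 100, 1000):
    print(n, riemann_sum(math.sin, a, b, n))

# mean value theorem: f(xi) * (b - a) = 2, so xi = asin(2 / pi)
xi = math.asin(2.0 / (b - a))
print(xi, math.sin(xi) * (b - a))
```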

Formal explanation

Main article: "Integration of differential forms" in the article Differential form

Let f\colon \mathbb{R} \to \mathbb{R} be an integrable function with antiderivative F\colon \mathbb{R} \to \mathbb{R}. The differential

{\displaystyle \mathrm {d} F=f(x)\,\mathrm {d} x}

is a 1-form which can be integrated according to the rules of integration of differential forms. The result of the integration over an interval \left[a,b\right]is exactly the Lebesgue integral

{\displaystyle \int _{a}^{b}f(x)\,\mathrm {d} x}.

Historical

Gottfried Wilhelm Leibniz used the integral sign for the first time in 1675, in a manuscript of the treatise Analysis tetragonistica; there he does not yet write \textstyle \int f(x)\,\mathrm dx but \textstyle \int f(x). On 11 November 1675 Leibniz wrote an essay entitled "Examples of the inverse tangent method", and here \textstyle \int f(x)\,\mathrm dx appears for the first time alongside \textstyle \int f(x), as well as the notation \textstyle \mathrm dx instead of \tfrac{x}{d}.

In the modern version of this approach to integral calculus according to Bernhard Riemann, the "integral" is a limit value of the area contents of finitely many rectangles of finite width for ever finer subdivisions of the " x-range".

The first symbol in the integral is therefore a stylised S for "sum". "Utile erit scribi \textstyle \int pro omnia (It will be useful to write \textstyle \int instead of omnia), and \textstyle \int l to denote the sum of a totality of l ... Here a new genus of calculus is revealed; if, on the other hand, \textstyle \int l = ya is given, an opposite calculus is offered with the designation \textstyle l = \frac{ya}{d}; namely, as \int increases the dimensions, so d decreases them. \int, however, means the sum, d the difference," writes Leibniz on 29 October 1675 in an investigation in which he uses the Cavalieri totals. In the later transcript of 11 November 1675 he moves from the notation \textstyle \frac{x}{d} to \mathrm dx; he records in a footnote "dx is equal to \textstyle \frac{x}{d}", and in the same calculation the formula \textstyle \int y\,\mathrm dy = \frac{y^2}{2} also appears. Omnia stands for omnia l and is used in Bonaventura Cavalieri's geometrically oriented method of area calculation. The corresponding printed publication by Leibniz is De geometria recondita from 1686. Leibniz took pains with the notation "to make the calculation calculatingly simple and compelling."

Blaise Pascal's reflections on the quadrant arc: Quarts de Cercle

When Leibniz was a young man in Paris in 1673, he received a decisive stimulus from a reflection by Pascal in his 1659 paper Traité des sinus des quarts de cercle (Treatise on the sine of the quarter circle). He says he saw a light in it that the author had not noticed. It is the following (written in modern terminology, see illustration):

To determine the static moment

{\displaystyle \int \limits _{0}^{{\frac {1}{2}}a\pi }y\,\mathrm {d} s}

of the quadrant arc with respect to the x-axis, Pascal deduces from the similarity of the triangles with the sides

 (\Delta x ,\Delta y ,\Delta s)

and

(y, (a-x), a)\,,

that their aspect ratio is the same

\frac{\Delta s}{a} = \frac{\Delta x} {y}\,,

and thus

 y \cdot\Delta s = a \cdot\Delta x\,,

so that

{\displaystyle \int \limits _{0}^{{\frac {1}{2}}a\pi }y\,\mathrm {d} s=\int \limits _{0}^{a}a\,\mathrm {d} x=a^{2}}

applies. Leibniz now noticed - and this was the "light" he saw - that this procedure is not limited to the circle, but applies in general to any (smooth) curve, provided that the radius of the circle a is replaced by the length of the normal of the curve (the reciprocal curvature, the radius of the circle of curvature). The infinitesimal triangle

( \Delta x ,\Delta y ,\Delta s )

is called the characteristic triangle. (It is also found in Isaac Barrow's work on the determination of tangents.) It is remarkable that the later Leibnizian symbolism of the differential calculus (dx, dy, ds) corresponds precisely to the point of view of this "improved conception of indivisibles".
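Pascal's identity \int y\,\mathrm ds = \int a\,\mathrm dx = a^2 can be checked with a finite subdivision of the arc (an illustrative numeric sketch; the parametrisation x = a\cos t, y = a\sin t with \mathrm ds = a\,\mathrm dt and the radius a = 2 are assumptions):

```python
import math

a = 2.0                   # radius of the quarter circle (arbitrary)
n = 100_000
dt = (math.pi / 2) / n    # parameter step along the arc, ds = a * dt

# static moment: sum of y * ds with y = a * sin(t), midpoint rule
moment = sum(a * math.sin((i + 0.5) * dt) * a * dt for i in range(n))
print(moment, a * a)
```

The finite sum of y\,\Delta s approaches a^2 as the subdivision of the arc is refined, just as the similarity argument y\,\Delta s = a\,\Delta x predicts.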

Similarity

All triangles formed from a section \Delta s of the tangent together with the pieces \Delta x and \Delta y parallel to the respective axes form, with the triangle made up of the radius of curvature a, the subnormal and the ordinate y, similar triangles, and they retain their ratios, according to the slope of the tangent to the circle of curvature at this point, even when the limit transition is made. The ratio \tfrac{\Delta y}{\Delta x} is, after all, exactly the slope of \Delta s. Therefore, for each circle of curvature at a point of the curve, its (characteristic) proportions in the coordinate system can be transferred to the differentials there, especially if these are understood as infinitesimal quantities.

Nova methodus 1684

A new method for maxima, minima, and tangents, which is impeded neither by fractional nor by irrational quantities, and a peculiar kind of calculus for them. (Leibniz (G. G. L.), Acta eruditorum 1684)

Leibniz explains his method very briefly on four pages. He chooses an arbitrary independent fixed differential (here dx, see figure at the right above) and gives the calculation rules for the differentials, as below, describing how to form them.

Then he gives the chain rule:

"Thus it comes about that for every equation presented one can write down its differential equation. This is done by simply inserting for each member (i.e. each constituent which contributes to the production of the equation by mere addition or subtraction) the differential of the member, but for another quantity (which is not itself a member but contributes to the formation of a member) applying its differential in order to form the differential of the member itself, not without further ado, but according to the algorithm prescribed above."

This is unusual from today's point of view, because he considers independent and dependent differentials equally and individually, and not, as ultimately required, the differential quotient of the dependent and the independent quantity. Conversely, when he gives a solution, the formation of the differential quotient is possible. He covers the whole range of rational functions. There follows a formally complicated example, a dioptric one on the refraction of light (a minimum problem), an easily solvable geometric one with entangled distance relations, and one dealing with the logarithm.

From the perspective of the history of science, further connections arise from the context of earlier and later works on the subject, some of which are available only in manuscript or in letters and were not published. In Nova methodus 1684, for example, it is not stated that for the independent dx one has dx = const. and ddx = 0. In further contributions he treats the subject up to "roots" and quadratures of infinite series.

Leibniz describes the relationship between infinitesimal and known differential (= size):

"It is also clear that our method masters the transcendental lines which cannot be traced back to the algebraic calculus or are of no definite degree, and this applies quite generally, without any special, not always applicable, presuppositions. It is only necessary to state once and for all that to find a tangent is as much as to draw a straight line connecting two points of the curve at an infinitely small distance, or an extended side of the infinite-cornered polygon, which for us is synonymous with the curve. But that infinitely small distance can always be expressed by some known differential, such as dv, or by a relation to it, i.e. by a certain known tangent."

As evidence for the transcendental lines, the cycloid is adduced.

As an appendix, in 1684 he explains the solution of a problem which Florimond de Beaune had posed to Descartes and which the latter had not solved. The problem is to find a function (w, the curve WW in Plate XII) whose tangent (WC) always intersects the x-axis in such a way that the section between the intersection of the tangent with the x-axis and the foot of the associated abscissa x is constant; he calls this constant a, and there he chooses dx always equal to b. He compares this proportionality with the arithmetic and the geometric series and obtains the logarithms as abscissae and the numeri as ordinates. "Thus the ordinates w" (increase in value) "become proportional to the dw" (increase in slope) ", their increments or differences, ..." He gives the logarithm function as the solution: "... if the w are the numeri, then the x are the logarithms": w = \tfrac{a}{b}\,dw, or w\,dx = a\,dw. This is satisfied by

\textstyle \log w =\frac x a + \log c

or

\textstyle w = c e^{\frac x a}.
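The de Beaune property can be checked directly: for w = c\,e^{x/a}, the subtangent w / (dw/dx) is constantly equal to a (a numeric sketch with arbitrary constants; the derivative is approximated by a central difference quotient):

```python
import math

a, c = 2.0, 1.5   # arbitrary constants for the sketch

def w(x):
    return c * math.exp(x / a)

def dw_dx(x, h=1e-6):
    # central difference quotient standing in for the derivative
    return (w(x + h) - w(x - h)) / (2 * h)

for x in (0.0, 1.0, 3.0):
    print(x, w(x) / dw_dx(x))   # the subtangent; stays equal to a
```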

Cauchy's concept of the differential

In the 1980s, a debate took place in Germany about the extent to which Cauchy's foundation of analysis is logically sound. With the help of a historical reading of Cauchy, Detlef Laugwitz tries to make the concept of infinitely small quantities fruitful for his \Omega-numbers, but as a result finds inconsistencies in Cauchy. Detlef Spalt corrects this (first!) historical reading of Cauchy's work, demands the use of terms from Cauchy's time rather than today's to prove his theorems, and comes to the conclusion that Cauchy's foundation of analysis is logically sound, although questions about the treatment of infinitely small quantities remain open.

Cauchy's differentials are finite and constant: \mathrm dx = h, with h finite. The value of the constant is not specified.

For Cauchy, \Delta x is infinitely small and variable.

The relation between the two is \Delta x = i = \alpha h, where h is finite and \alpha is infinitesimal (infinitely small).

Their geometric ratio is determined as

\frac{\mathrm dy}{\mathrm dx} = \lim_{\alpha = 0} \frac{\Delta y}{\Delta x}.

Cauchy can thus transfer this ratio of infinitely small quantities, or more precisely the limit of geometric difference ratios of dependent numerical quantities, a quotient, to finite quantities.

Differentials are finite numerical quantities whose geometric ratios are strictly equal to the limits of the geometric ratios formed from the infinitely small increments of the independent variables presented, or of the functions of these variables. Cauchy considers it important to regard differentials as finite numerical quantities.

The person calculating makes use of the infinitesimals as mediators that must lead him to the knowledge of the relations holding between the finite numerical quantities; in Cauchy's opinion, the infinitesimals must never be admitted into the final equations, where their presence would be meaningless and useless. Moreover, if one regarded the differentials as constantly very small numerical quantities, one would give up the advantage that among the differentials of several variables one can take one as the unit. For, in order to form a clear conception of any numerical quantity, it is important to relate it to the unit of its genus. It is therefore important to select a unit from among the differentials.

In particular, the difficulty of defining higher differentials disappears for Cauchy, since he sets \mathrm dx = h after he has obtained the calculus rules for differentials by passage to the limit. And since the differential of a function of the variable x is another function of this variable, he can differentiate y several times and in this way obtains the differentials of different orders.

 \mathrm d y = h \cdot y' = y'\,\mathrm d x

 \mathrm {dd} y=\mathrm d^2 y = h\mathrm d y'=y''h^2

 \mathrm {ddd} y=\mathrm d^3 y = h^2\mathrm d y''=y'''h^3

First content page

Graphical illustration of the Beaune problem

Plate XII

The characteristic triangle

See also

  • Differential equation
