Statistical physics

Statistical physics is a branch of physics that uses methods from probability theory to describe physical systems. This allows statistical physics to make statements about the properties and behavior of a large composite system without tracking the behavior of each of its parts in detail. Typical statements of statistical physics have the character of probabilities, but these approach certainty ever more closely as the number of parts of the system increases.

Statistical physics is mainly concerned with explaining the physical behavior of many-particle systems such as solids, liquids, and gases from the properties of atoms and molecules. Its methods are also applied to many questions in other natural and engineering sciences such as biology, chemistry, neuroscience and process engineering, as well as in the social, economic and linguistic sciences (see sociophysics, econophysics, statistical linguistics).

Statistical physics is a fundamental physical theory. It starts from the simplest laws for the motion of individual particles and, with the help of a few additional physical hypotheses, can derive and justify, among other things, the laws of thermodynamics, but also the statistical fluctuations around a stationary state of equilibrium. Currently, open questions mainly concern irreversible processes, such as the calculation of transport coefficients from microscopic properties.

Statistical mechanics and statistical quantum mechanics are subfields of statistical physics.

Basics

General

Statistical relations can be formulated in physics wherever an observable physical quantity of an overall system depends on the instantaneous states of many of its subsystems, but these are not known in more detail. For example, 1 litre of water contains about 3.3·10^25 water molecules. To describe the flow of 1 litre of water in a pipe, it would be impractical to try to follow the paths of all 3.3·10^25 water molecules individually at the atomic level. It is sufficient to trace the behaviour of the system on a large scale.
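The molecule count quoted above follows directly from Avogadro's number; a quick back-of-the-envelope check (assuming 1 litre of water has a mass of about 1000 g and the molar mass of H2O is about 18 g/mol):

```python
# Back-of-the-envelope check of the molecule count in 1 litre of water.
# Assumptions: mass of 1 litre of water ≈ 1000 g, molar mass of H2O ≈ 18.015 g/mol.
AVOGADRO = 6.022e23     # molecules per mole
mass_g = 1000.0         # 1 litre of water, approximately
molar_mass = 18.015     # g/mol for H2O

n_moles = mass_g / molar_mass        # ≈ 55.5 mol
n_molecules = n_moles * AVOGADRO     # ≈ 3.3e25 molecules

print(f"{n_molecules:.2e}")
```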

The basic approach is that the subsystems can behave in any way within the framework of their individual possibilities. In principle, the overall system could then also attain a combination of macroscopic values that contradicts all previous observations; however, this proves to be so improbable that it can reasonably be ruled out. An example would be all the molecules in a litre of air spontaneously gathering in one half of the volume, which would show up once on average if one looked about 10^(10^22) times in succession.
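The order of magnitude of this improbability is easy to reproduce: if each of the N molecules is, independently, equally likely to be in either half of the volume, the probability that all of them are in one given half is (1/2)^N. The number is far too small for floating-point arithmetic, so one works with its logarithm (a simple illustrative sketch, with N taken as a rough value for 1 litre of air):

```python
import math

# Probability that all N molecules of a litre of air happen to be
# in one given half of the volume: p = (1/2)**N.
# p itself underflows any float, so compute its base-10 logarithm instead.
N = 3e22                       # rough number of molecules in 1 litre of air
log10_p = -N * math.log10(2)   # log10 of the probability

print(f"p = 10^({log10_p:.2e})")   # exponent of order -10^22
```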

In such systems, it is practically impossible to determine the current states of all subsystems in detail in order to draw conclusions about the values of the observable variables or the further behavior of the overall system, especially since these states also change much faster than the variables observable in the overall system. It turns out that knowledge of the details of all subsystems is often not needed at all if one wants to obtain practicable statements about the behavior of the overall system.

On the basis of a few basic assumptions, which cannot themselves be further proved, statistical physics provides concepts and methods with which statements about the system as a whole can be made from the known laws for the behaviour of the subsystems, down to individual particles or quanta.

Statistical reasoning of thermodynamics

The concepts and laws of classical thermodynamics were initially obtained in the 18th and 19th centuries by phenomenological means on macroscopic systems, primarily those in or near a state of equilibrium. Nowadays they can be traced back, with the help of statistical physics, to the properties and behaviour of the smallest particles of those systems (usually atoms or molecules). For each state of the system defined by macroscopic values - called a macrostate - there are always many ways of assigning the individual particles states that together produce just these macroscopic values of the system. The exact distribution of the particles over their individual states is called the microstate, and to each macrostate belongs a certain set of microstates. Since the particles are in motion and undergo interaction processes internal to the system, no microstate is in general conserved in time. It changes microscopically in a deterministic way, but its evolution can only be predicted with a probability. If the macrostate is to be an equilibrium state of the macroscopic system that is stable in time, this means that the microstate does not migrate out of the set of microstates belonging to that macrostate.

The thermodynamic equations of state, i.e. the laws governing the stable equilibrium state of a macroscopic system, can then be derived as follows: for a fictitiously assumed macrostate of the system, one determines the set of associated microstates. To find the equilibrium state, this set is determined for various macrostates, and among them the one is selected that, as a whole, does not change in time through the processes internal to the system, or changes only with the minimum possible probability. The selection criterion is very simple: one selects the largest set.
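The selection of the largest set of microstates can be made concrete with a standard toy model (an illustrative example, not taken from the article): two small "Einstein solids" A and B sharing a fixed total number of energy quanta. Each macrostate is labelled by the number of quanta q_A in solid A, and its set of microstates has size Ω_A(q_A)·Ω_B(q_total − q_A). The equilibrium macrostate is the one with the largest such product:

```python
from math import comb

# Number of microstates of one Einstein solid with N oscillators and q quanta:
def omega(N, q):
    return comb(q + N - 1, q)

N_A, N_B, q_total = 300, 200, 100   # toy sizes, chosen for illustration

# Size of the set of microstates for each macrostate q_A:
mult = {q_A: omega(N_A, q_A) * omega(N_B, q_total - q_A)
        for q_A in range(q_total + 1)}

# The equilibrium macrostate is the one with the largest set:
q_eq = max(mult, key=mult.get)
print(q_eq)   # quanta are shared in proportion to the oscillator numbers
```

As expected, the maximum lies where the quanta are shared in proportion to the number of oscillators, q_A ≈ q_total·N_A/(N_A + N_B) = 60; for large systems this maximum becomes overwhelmingly sharp.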

For any other macrostate that is not an equilibrium state, the changes in the microstate due to processes internal to the system lead to gradual changes in macroscopic quantities, i.e. also to other macrostates. In such a case, statistical physics can explain for many physical systems why this macroscopic change proceeds as a relaxation towards equilibrium and how fast it proceeds.

In addition, this statistical view shows that the state of thermodynamic equilibrium is stable only when viewed macroscopically, but must exhibit fluctuations over time when viewed microscopically. These fluctuations are real, but in relative terms they become less significant the larger the system under consideration. For typical macroscopic systems, they are many orders of magnitude smaller than the achievable measurement accuracy and are therefore irrelevant for most applications of thermodynamics. With such statements, statistical physics goes beyond classical thermodynamics and makes it possible to delimit its range of validity quantitatively. Fluctuations explain phenomena such as critical opalescence and Brownian motion, which had been known since the beginning of the 19th century. More precise measurements of such fluctuations were carried out on mesoscopic systems at the beginning of the 20th century. The fact that these measurement results also agreed quantitatively with the predictions of statistical physics contributed significantly to its breakthrough and thus to the acceptance of the atomic hypothesis. It was also the consideration of such fluctuations that led Max Planck to his radiation formula and Albert Einstein to the light quantum hypothesis, thus establishing quantum physics.

Basic assumptions of the statistical treatment

The starting point is the microstate of a large physical system. In the realm of classical physics, it is given by the instantaneous positions and momenta of all its particles, i.e. with microscopic precision; in the many-dimensional phase space of the system it occupies a single point. According to the general discussion in the previous section, a measure of the size of a subset of the phase space is needed. In classical physics, the points of the individual microstates form a continuum in phase space. Since the points in it cannot be counted, the most natural measure is given by the volume of the subset. For this purpose, one can think of the phase space as divided into small volume elements, each containing equal sets of very similar states. If a volume element is to contain only one state, it is called a phase space cell.

In the field of quantum physics, the microstate is given by a pure quantum mechanical state of the many-particle system, as defined, for example, by a projection operator onto a 1-dimensional subspace of the Hilbert space of the whole system, or represented by a normalized vector from it. The Hilbert space here is also the phase space. The dimension of the relevant subspace of the Hilbert space serves as a measure for a subset of states (if the basis is countable).

In the course of time, the point or the state vector indicating the momentary microstate of the system wanders around in phase space, for example because the positions and velocities of the particles vary constantly or individual particles change from one energy level to another. All macroscopic variables of the system (such as volume and energy, but also e.g. the position of the center of mass, its velocity, etc.) could be calculated from the data of the currently present microstate, if these were fully known. In a macrostate of the system - the starting point of macroscopic thermodynamics - only these macroscopic values are given. A macrostate - whether in equilibrium or not - is realized by a certain set of many different microstates. Which of them is present at a given time is treated as a matter of chance, because it is practically impossible to determine it beforehand. To be able to calculate the probability of this whole set of microstates according to the rules of probability theory, a basic assumption is needed about the a priori probability with which a certain single microstate occurs. It reads:

  • Basic assumption on a priori probability: In a closed system all reachable microstates have the same a priori probability.

If the microstates form a continuum, this assumption is not applied to a single point of the phase space, but to a volume element with microstates that belong sufficiently precisely to the same macrostate: The a priori probability is proportional to the size of the volume element. This basic assumption cannot be proved, but it can be made understandable by means of the ergodic hypothesis put forward by Boltzmann: It is assumed that for a closed system, the point of each microstate wanders in the phase space of the system in such a way that it reaches (or comes arbitrarily close to) each microstate with equal frequency. The choice of the volume element as a measure of probability means graphically that not only the microstates but also their trajectories fill the phase space with constant density.

Since the phase space comprises all microstates of the system that are possible at all, the microstates belonging to a given macrostate form a subset of it. The volume of this subset is the sought measure of the probability that the system is currently in this given macrostate. This volume is often called the "number of possible states" belonging to the given macrostate, although in classical physics it is not a pure number but a quantity whose dimension is a power of action that increases with the number of particles. Because the logarithm of this phase-space volume is needed in the statistical formulas for thermodynamic quantities, it must still be converted to a pure number by relating it to the phase space cell. If the entropy of an ideal gas is calculated in this way, fitting to the measured values shows that the phase space cell (per particle and per degree of freedom of its motion) is exactly as large as Planck's quantum of action h. The number indicating the probability typically takes very large values, which is why it is also called the thermodynamic probability, in contrast to the mathematical probability. In quantum statistics, the dimension of the relevant subspace of the Hilbert space takes the place of the volume. Even outside statistical physics, in some quantum mechanical calculations the approximation is used of first determining the phase-space volume classically by integration and then dividing the result by the corresponding power of the quantum of action.
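The relation sketched in this paragraph is Boltzmann's entropy formula; written out for the classical case (with k_B the Boltzmann constant and W the pure number obtained by dividing the phase-space volume of the macrostate by one factor of h per degree of freedom):

```latex
S = k_{\mathrm{B}} \ln W ,
\qquad
W = \frac{1}{h^{3N}} \int_{\text{macrostate}} \mathrm{d}^{3N}q \,\, \mathrm{d}^{3N}p
```

For N identical particles an additional factor 1/N! appears in W; this is the counting rule discussed in the section on the quantum statistics of indistinguishable particles.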

All macroscopic values of interest can be calculated as the average of the density distribution of microstates in phase space.

Stable state of equilibrium

A state of equilibrium that is stable even in microscopic terms cannot exist. The best approximation, for given macroscopic values of the system variables, is achieved by the macrostate that has the greatest possible probability. The success of statistical physics is essentially based on the fact that this criterion determines the macrostate with extraordinary sharpness if the system consists of a sufficiently large number of subsystems (cf. the law of large numbers). All other states lose so much probability even at small deviations that their occurrence can be neglected.

An example that illustrates this fact: what is the most probable spatial density distribution for the molecules of a classical gas? If N molecules are in the volume V, of which a small fraction rV (with 0 ≤ r ≪ 1) is considered, then there are (N choose n) · r^n · (1−r)^(N−n) ways to distribute the molecules so that n molecules are in the volume part rV and (N−n) in the remainder (1−r)V (binomial distribution). If the n molecules have the same distribution as the remaining (N−n) with respect to all other features of their states, this formula is already a measure of the number of states. The binomial distribution has the expected value ⟨n⟩ = rN and a maximum there with relative width σ_rel = √(1/⟨n⟩). For example, for V = 1 l of normal air, N = 3·10^22 and rV = 1 mm³, it follows that ⟨n⟩ = 3·10^16 and σ_rel = 0.6·10^−8. Hence, for the most probable macrostate, about 2/3 of the time the spatial density on the mm scale matches the average value to better than 8-digit precision. Larger relative deviations also occur, but deviations of more than ΔN/N = 3·10^−8 occur only about 10^−6 of the time (see normal distribution).
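These figures can be checked numerically; a minimal sketch using the normal approximation to the binomial distribution (the two-sided tail probability comes out at a few 10^−7, consistent with the order of magnitude quoted above):

```python
import math

# Fluctuations of the molecule number n in a 1 mm^3 part of 1 l of air.
N = 3e22      # molecules in 1 litre of air
r = 1e-6      # volume fraction: 1 mm^3 / 1 l

n_mean = r * N                        # expected number in the small volume: 3e16
sigma_rel = 1 / math.sqrt(n_mean)     # relative width: ≈ 0.6e-8

# Probability of a relative deviation larger than 3e-8 (about 5 sigma),
# in the normal approximation to the binomial distribution:
k = 3e-8 / sigma_rel                       # deviation measured in units of sigma
p_exceed = math.erfc(k / math.sqrt(2))     # two-sided tail probability, ~1e-7 range

print(f"{n_mean:.1e}  {sigma_rel:.1e}  {p_exceed:.1e}")
```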

Quantum statistics of indistinguishable particles

The statistical weight of a macrostate depends heavily on whether the associated microstates include all those that differ only by the interchange of two particles of the physically same kind. If they did, the formula for entropy in statistical mechanics would contain a summand that is not additive in the particle number (and is therefore incorrect). This problem became known as the Gibbs paradox. It can be eliminated by an additional rule supplementing Boltzmann's counting method: interchanges of identical particles are not to be counted. The deeper reason for this could only be supplied by quantum mechanics. According to it, a fundamental distinction must also be made for indistinguishable particles as to whether their spin is an integer (particle type boson) or a half-integer (particle type fermion). For fermions there is the additional law that the same one-particle state cannot be occupied by more than one particle, whereas for bosons this number can be arbitrarily large. If these rules are observed, the uniform classical (Boltzmann) statistics gives rise to the Fermi-Dirac statistics for identical fermions and the Bose-Einstein statistics for identical bosons. At low temperatures (for thermal radiation at any temperature, for the conduction electrons in a metal even at room temperature), both statistics show serious differences in the behavior of systems of several identical particles, both between each other and compared to classical statistics, and this for any number of particles.


AlegsaOnline.com - 2020 / 2023 - License CC3