The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.
The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements.
It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic[8] scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.
Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time.[9] A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber.
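To make this Fourier tradeoff concrete, the following short NumPy sketch (an illustration added here, not part of the article's sources) samples Gaussian pulses of two different widths and measures the root-mean-square widths of the pulses and of their discrete Fourier transforms; the narrower pulse in x produces the broader spectrum in k, and the product of the two widths stays fixed.

```python
import numpy as np

# Spatial grid and a helper for the RMS width of a sampled probability density.
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

def rms_width(grid, density):
    density = density / density.sum()            # normalize on the (uniform) grid
    mean = (grid * density).sum()
    return np.sqrt(((grid - mean) ** 2 * density).sum())

for sigma in (0.5, 2.0):
    psi = np.exp(-x**2 / (4 * sigma**2))         # pulse localized to width ~sigma
    phi = np.fft.fftshift(np.fft.fft(psi))       # its discrete Fourier transform
    k = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
    sx = rms_width(x, np.abs(psi) ** 2)
    sk = rms_width(k, np.abs(phi) ** 2)
    print(f"sigma_x = {sx:.3f}  sigma_k = {sk:.3f}  product = {sx * sk:.3f}")
# Narrowing the pulse in x broadens it in k; the product stays near 1/2.
```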
In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: if it is not, then the state does not have a unique associated measurement value for B, as the system is not in an eigenstate of that observable.[10]
The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension.
The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform.
Propagation of de Broglie waves in one dimension; the real part of the complex amplitude is blue, the imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature reverses sign, so the amplitude begins to decrease again, and vice versa; the result is an alternating amplitude: a wave.
According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules, and on up to planets and beyond, is subject to the uncertainty principle.
The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is[11] ψ(x) ∝ e^{ik0x} = e^{ip0x/ħ}.
In the case of the single-mode plane wave, |ψ(x)|² is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.
On the other hand, consider a wave function that is a sum of many waves, which we may write as ψ(x) ∝ Σn An e^{ipnx/ħ}, where An represents the relative contribution of the mode pn to the overall total. With the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes, ψ(x) = (1/√(2πħ)) ∫ φ(p)·e^{ipx/ħ} dp, with φ(p) representing the amplitude of these modes; φ(p) is called the wave function in momentum space. In mathematical terms, we say that φ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.[12]
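As an added illustration (a minimal sketch, not from the source, with ħ = 1 assumed), the snippet below builds ψ(x) as a discrete sum of plane-wave modes with Gaussian-weighted amplitudes An and shows that widening the range of contributing momenta shrinks the spread in position:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-120, 120, 4096)

def position_spread(psi):
    """RMS spread of |psi|^2 on the uniform grid x."""
    prob = np.abs(psi) ** 2
    prob /= prob.sum()
    mean = (x * prob).sum()
    return np.sqrt(((x - mean) ** 2 * prob).sum())

# Sum plane waves exp(i p x / hbar) weighted by Gaussian amplitudes A_n of width dp.
for dp in (0.05, 0.2, 0.8):
    p_modes = np.linspace(-5 * dp, 5 * dp, 201)
    amps = np.exp(-p_modes**2 / (4 * dp**2))          # relative contributions A_n
    psi = (amps[:, None] * np.exp(1j * p_modes[:, None] * x / hbar)).sum(axis=0)
    print(f"momentum spread {dp:.2f} -> position spread {position_spread(psi):.2f}")
# A broader mixture of momenta yields a more tightly localized wave packet.
```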
One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ(x)|² is a probability density function for position, we calculate its standard deviation.
The precision of the position is improved, i.e. σx is reduced, by using many plane waves, thereby weakening the precision of the momentum, i.e. σp is increased. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound, σx σp ≥ ħ/2.
Proof of the Kennard inequality using wave mechanics
We are interested in the variances of position and momentum, defined as σx² = ∫ x²·|ψ(x)|² dx − (∫ x·|ψ(x)|² dx)² and σp² = ∫ p²·|φ(p)|² dp − (∫ p·|φ(p)|² dp)².
Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form σx² = ∫ x²·|ψ(x)|² dx and σp² = ∫ p²·|φ(p)|² dp.
The function f(x) = x·ψ(x) can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space: ⟨u | v⟩ = ∫ u*(x)·v(x) dx, where the asterisk denotes the complex conjugate.
With this inner product defined, we note that the variance for position can be written as σx² = ∫ |x·ψ(x)|² dx = ⟨f | f⟩.
We can repeat this for momentum by interpreting the function p·φ(p) as a vector, but we can also take advantage of the fact that ψ(x) and φ(p) are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts, obtaining g(x) = (1/√(2πħ)) ∫ p·φ(p)·e^{ipx/ħ} dp = −iħ·dψ(x)/dx, where in the integration by parts the cancelled boundary term vanishes because the wave function vanishes at infinity, and the Dirac delta function appears on carrying out the remaining integral over p, which is valid because ψ(x) does not depend on p. By Parseval's theorem, the variance for momentum can then be written as σp² = ∫ |p·φ(p)|² dp = ∫ |g(x)|² dx = ⟨g | g⟩.
The Cauchy–Schwarz inequality then gives σx²·σp² = ⟨f | f⟩·⟨g | g⟩ ≥ |⟨f | g⟩|². The modulus squared of any complex number z can be expressed as |z|² = (Re(z))² + (Im(z))² ≥ (Im(z))² = ((z − z*)/(2i))². We let z = ⟨f | g⟩ and z* = ⟨g | f⟩ and substitute these into the equation above to get |⟨f | g⟩|² ≥ ((⟨f | g⟩ − ⟨g | f⟩)/(2i))².
All that remains is to evaluate these inner products: ⟨f | g⟩ − ⟨g | f⟩ = ∫ ψ*(x)·x·(−iħ d/dx)ψ(x) dx − ∫ ψ*(x)·(−iħ d/dx)(x·ψ(x)) dx = iħ ∫ |ψ(x)|² dx = iħ.
Plugging this into the above inequalities, we get σx²·σp² ≥ ((⟨f | g⟩ − ⟨g | f⟩)/(2i))² = (ħ/2)², or taking the square root, σx·σp ≥ ħ/2,
with equality if and only if f and g are proportional with a purely imaginary constant of proportionality, which is the case precisely for Gaussian wave functions. Note that the only physics involved in this proof was that ψ(x) and φ(p) are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables.
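As a consistency check on the equality case (an added sketch using SymPy, not part of the original proof), the following computes σx and σp for a normalized Gaussian wave function of width parameter a and confirms that the product equals ħ/2:

```python
import sympy as sp

x, a, hbar = sp.symbols('x a hbar', positive=True)

# Normalized Gaussian position-space wave function of width parameter a.
psi = (1 / (2 * sp.pi * a**2)) ** sp.Rational(1, 4) * sp.exp(-x**2 / (4 * a**2))

# Position variance <x^2> (the mean vanishes by symmetry).
var_x = sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo))

# Momentum variance <p^2> with the momentum operator -i*hbar d/dx acting twice.
var_p = sp.integrate(psi * (-hbar**2) * sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))

print(sp.simplify(sp.sqrt(var_x) * sp.sqrt(var_p)))   # prints hbar/2
```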
In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators.[12] When considering pairs of observables, an important quantity is the commutator. For a pair of operators  and , one defines their commutator as In the case of position and momentum, the commutator is the canonical commutation relation
The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |ψ⟩ be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that x̂|ψ⟩ = x0|ψ⟩. Applying the commutator to |ψ⟩ yields [x̂, p̂]|ψ⟩ = (x̂p̂ − p̂x̂)|ψ⟩ = (x̂ − x0Î)p̂|ψ⟩, where Î is the identity operator.
Suppose, for the sake of proof by contradiction, that |ψ⟩ is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write (x̂ − x0Î)p̂|ψ⟩ = (x̂ − x0Î)p0|ψ⟩ = p0(x̂ − x0Î)|ψ⟩ = 0. On the other hand, the above canonical commutation relation requires that [x̂, p̂]|ψ⟩ = iħ|ψ⟩ ≠ 0. This implies that no quantum state can simultaneously be both a position and a momentum eigenstate.
When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state becomes a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations σx = √(⟨x̂²⟩ − ⟨x̂⟩²) and σp = √(⟨p̂²⟩ − ⟨p̂⟩²).
As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.
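The projection argument can be illustrated numerically (an added sketch): on a finite grid, a state concentrated on a single grid point, the closest discrete analogue of a position eigenstate, expands into a perfectly flat distribution over the discrete momentum basis.

```python
import numpy as np

n = 256
psi = np.zeros(n, dtype=complex)
psi[n // 2] = 1.0                        # state concentrated at one grid point

phi = np.fft.fft(psi) / np.sqrt(n)       # coefficients in the discrete momentum basis
prob_p = np.abs(phi) ** 2

print(np.allclose(prob_p, 1 / n))        # True: every momentum value is equally likely
```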
Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators: x̂ = √(ħ/(2mω))·(a + a†) and p̂ = i√(mħω/2)·(a† − a).
Using the standard rules for creation and annihilation operators on the energy eigenstates, a†|n⟩ = √(n+1)·|n+1⟩ and a|n⟩ = √n·|n−1⟩, the variances may be computed directly: σx² = (ħ/(mω))·(n + 1/2) and σp² = mħω·(n + 1/2). The product of these standard deviations is then σx·σp = ħ·(n + 1/2) ≥ ħ/2.
In particular, the above Kennard bound[6] is saturated for the ground state n = 0, for which the probability density is just the normal distribution.
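A brief numerical sketch (added for illustration, assuming ħ = m = ω = 1 and a truncated Fock basis) reproduces σx·σp = (n + 1/2)·ħ for the lowest oscillator eigenstates, with the Kennard bound saturated only at n = 0:

```python
import numpy as np

hbar = m = omega = 1.0
N = 30                                    # truncated Fock-basis dimension

# Annihilation operator: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
p = 1j * np.sqrt(m * hbar * omega / 2) * (adag - a)

for n in range(4):
    ket = np.zeros(N, dtype=complex); ket[n] = 1.0        # energy eigenstate |n>
    var_x = (ket.conj() @ x @ x @ ket - (ket.conj() @ x @ ket) ** 2).real
    var_p = (ket.conj() @ p @ p @ ket - (ket.conj() @ p @ ket) ** 2).real
    print(n, np.sqrt(var_x * var_p), "expected", (n + 0.5) * hbar)
```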
Quantum harmonic oscillators with Gaussian initial condition
Position (blue) and momentum (red) probability densities for an initial Gaussian distribution. From top to bottom, the animations show the cases Ω = ω, Ω = 2ω, and Ω = ω/2. Note the tradeoff between the widths of the distributions.
In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as ψ(x) = (mΩ/(πħ))^{1/4}·exp(−mΩ(x − x0)²/(2ħ)), where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the full time-dependent solution. After many cancellations, the probability densities reduce to |Ψ(x, t)|² ~ N(x0·cos(ωt), (ħ/(2mΩ))·(cos²(ωt) + (Ω²/ω²)·sin²(ωt))) and |Φ(p, t)|² ~ N(−m·x0·ω·sin(ωt), (mħΩ/2)·(cos²(ωt) + (ω²/Ω²)·sin²(ωt))), where we have used the notation N(μ, σ²) to denote a normal distribution of mean μ and variance σ². Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as σx·σp = (ħ/2)·√((cos²(ωt) + (Ω²/ω²)·sin²(ωt))·(cos²(ωt) + (ω²/Ω²)·sin²(ωt))).
From the relations Ω²/ω² + ω²/Ω² ≥ 2 and cos⁴(ωt) + sin⁴(ωt) + 2·sin²(ωt)·cos²(ωt) = 1, we can conclude that σx·σp ≥ ħ/2, with equality at all times only when Ω = ω.
In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, σx² = ħ/(2mω) and σp² = mħω/2. Therefore, every coherent state saturates the Kennard bound, σx·σp = ħ/2, with position and momentum each contributing an equal share in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound, although the individual contributions of position and momentum need not be balanced in general.
For the stationary states of a particle in a one-dimensional box of length L, the product of the standard deviations is σx·σp = (ħ/2)·√(n²π²/3 − 2). For all n = 1, 2, 3, …, the quantity n²π²/3 − 2 is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1, in which case σx·σp = (ħ/2)·√(π²/3 − 2) ≈ 0.568·ħ > ħ/2.
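A quick numerical check of the stationary-state product σx·σp = (ħ/2)·√(n²π²/3 − 2) for a particle in a one-dimensional box (an added sketch, assuming the standard eigenfunctions ψn(x) = √(2/L)·sin(nπx/L) and ħ = L = 1):

```python
import numpy as np

hbar = L = 1.0
N = 20000
x = (np.arange(N) + 0.5) * (L / N)        # midpoint grid on (0, L)

for n in (1, 2, 3):
    psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)       # stationary state of the box
    prob = psi**2 * (L / N)
    sigma_x = np.sqrt((x**2 * prob).sum() - ((x * prob).sum()) ** 2)
    sigma_p = n * np.pi * hbar / L        # from <p> = 0 and <p^2> = (n*pi*hbar/L)^2
    exact = 0.5 * hbar * np.sqrt(n**2 * np.pi**2 / 3 - 2)
    print(n, sigma_x * sigma_p, "vs", exact)
```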
Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to φ(p, 0) ∝ exp(−x0²·(p − p0)²/(2ħ²)), where we have introduced a reference scale x0 = √(ħ/(mω0)), with ω0 > 0 describing the width of the distribution (cf. nondimensionalization). If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are Φ(p, t) = φ(p, 0)·e^{−ip²t/(2mħ)} and its Fourier transform Ψ(x, t), a Gaussian wave packet whose centre moves with velocity p0/m and whose width grows with time.
Since ⟨p(t)⟩ = p0 and σp(t) = ħ/(√2·x0), this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is σx(t) = (x0/√2)·√(1 + ω0²t²), such that the uncertainty product can only increase with time as σx(t)·σp(t) = (ħ/2)·√(1 + ω0²t²).
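A short numerical sketch of this spreading (added here, assuming ħ = m = x0 = 1 and p0 = 0): the packet is evolved freely by applying the phase e^{−ip²t/(2mħ)} in momentum space, and the measured uncertainty product follows (ħ/2)·√(1 + ω0²t²).

```python
import numpy as np

hbar = m = x0 = 1.0
omega0 = hbar / (m * x0**2)

n = 4096
x = np.linspace(-60, 60, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)            # wavenumbers, p = hbar * k

psi0 = np.exp(-x**2 / (2 * x0**2))                 # initial packet, sigma_x(0) = x0/sqrt(2)
phi0 = np.fft.fft(psi0)

def sigma_x(psi):
    prob = np.abs(psi) ** 2
    prob /= prob.sum()
    mean = (x * prob).sum()
    return np.sqrt(((x - mean) ** 2 * prob).sum())

sigma_p = hbar / (np.sqrt(2) * x0)                 # momentum spread is constant in free space
for t in (0.0, 2.0, 5.0):
    psi_t = np.fft.ifft(phi0 * np.exp(-1j * hbar * k**2 * t / (2 * m)))
    print(t, sigma_x(psi_t) * sigma_p, "vs", 0.5 * hbar * np.sqrt(1 + (omega0 * t) ** 2))
```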
Starting with Kennard's derivation of position–momentum uncertainty, Howard Percy Robertson developed[13][1] a formulation for arbitrary Hermitian operators Â, expressed in terms of their standard deviation σA = √(⟨Â²⟩ − ⟨Â⟩²), where the brackets ⟨·⟩ indicate an expectation value of the observable represented by operator Â. For a pair of operators Â and B̂, define their commutator as [Â, B̂] = ÂB̂ − B̂Â,
and the Robertson uncertainty relation is given by[14] σA·σB ≥ |(1/(2i))·⟨[Â, B̂]⟩| = (1/2)·|⟨[Â, B̂]⟩|.
Erwin Schrödinger[15] showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation,[16][1] σA²·σB² ≥ |(1/2)·⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩|² + |(1/(2i))·⟨[Â, B̂]⟩|², where the anticommutator {Â, B̂} = ÂB̂ + B̂Â has been introduced.
The derivation shown here incorporates and builds on those shown in Robertson,[13] Schrödinger[16] and standard textbooks such as Griffiths.[17]: 138 For any Hermitian operator Â, based upon the definition of variance, we have σA² = ⟨(Â − ⟨Â⟩)Ψ | (Â − ⟨Â⟩)Ψ⟩. We let |f⟩ = |(Â − ⟨Â⟩)Ψ⟩ and thus σA² = ⟨f | f⟩.
Similarly, for any other Hermitian operator B̂ in the same state, σB² = ⟨(B̂ − ⟨B̂⟩)Ψ | (B̂ − ⟨B̂⟩)Ψ⟩ = ⟨g | g⟩ for |g⟩ = |(B̂ − ⟨B̂⟩)Ψ⟩.
The product of the two deviations can thus be expressed as
σA²·σB² = ⟨f | f⟩·⟨g | g⟩.     (1)
In order to relate the two vectors |f⟩ and |g⟩, we use the Cauchy–Schwarz inequality,[18] which is defined as ⟨f | f⟩·⟨g | g⟩ ≥ |⟨f | g⟩|², and thus Equation (1) can be written as
σA²·σB² ≥ |⟨f | g⟩|².     (2)
Since ⟨f | g⟩ is in general a complex number, we use the fact that the modulus squared of any complex number z is defined as |z|² = z·z*, where z* is the complex conjugate of z. The modulus squared can also be expressed as
|z|² = (Re(z))² + (Im(z))² = ((z + z*)/2)² + ((z − z*)/(2i))².     (3)
We let z = ⟨f | g⟩ and z* = ⟨g | f⟩ and substitute these into the equation above to get
|⟨f | g⟩|² = ((⟨f | g⟩ + ⟨g | f⟩)/2)² + ((⟨f | g⟩ − ⟨g | f⟩)/(2i))².     (4)
The inner product ⟨f | g⟩ is written out explicitly as ⟨f | g⟩ = ⟨(Â − ⟨Â⟩)Ψ | (B̂ − ⟨B̂⟩)Ψ⟩, and using the fact that Â and B̂ are Hermitian operators, we find ⟨f | g⟩ = ⟨Ψ | (Â − ⟨Â⟩)(B̂ − ⟨B̂⟩)Ψ⟩ = ⟨ÂB̂⟩ − ⟨Â⟩⟨B̂⟩.
Similarly it can be shown that ⟨g | f⟩ = ⟨B̂Â⟩ − ⟨Â⟩⟨B̂⟩.
Thus, we have ⟨f | g⟩ − ⟨g | f⟩ = ⟨ÂB̂⟩ − ⟨B̂Â⟩ = ⟨[Â, B̂]⟩ and ⟨f | g⟩ + ⟨g | f⟩ = ⟨ÂB̂⟩ + ⟨B̂Â⟩ − 2⟨Â⟩⟨B̂⟩ = ⟨{Â, B̂}⟩ − 2⟨Â⟩⟨B̂⟩.
We now substitute the above two equations back into Eq. (4) and get |⟨f | g⟩|² = ((1/2)·⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩)² + ((1/(2i))·⟨[Â, B̂]⟩)².
Substituting the above into Equation (2) we get the Schrödinger uncertainty relation σA²·σB² ≥ |(1/2)·⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩|² + |(1/(2i))·⟨[Â, B̂]⟩|².
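The finished inequality is easy to stress-test numerically (an added sketch): draw random Hermitian matrices Â and B̂ and a random normalized state, then compare the variance product against the Schrödinger and Robertson bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

def random_hermitian(dim):
    m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (m + m.conj().T) / 2

A, B = random_hermitian(d), random_hermitian(d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

def ev(op):                                        # expectation value <psi|op|psi>
    return (psi.conj() @ op @ psi).real

var_A = ev(A @ A) - ev(A) ** 2
var_B = ev(B @ B) - ev(B) ** 2
comm = psi.conj() @ (A @ B - B @ A) @ psi          # <[A,B]>, purely imaginary
anti = ev(A @ B + B @ A)                           # <{A,B}>, real

robertson = (abs(comm) / 2) ** 2
schrodinger = (anti / 2 - ev(A) * ev(B)) ** 2 + (abs(comm) / 2) ** 2
print(var_A * var_B, ">=", schrodinger, ">=", robertson)
```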
This proof has an issue[19] related to the domains of the operators involved. For the proof to make sense, the vector B̂|Ψ⟩ has to be in the domain of the unbounded operator Â, which is not always the case. In fact, the Robertson uncertainty relation is false if Â is an angle variable and B̂ is the derivative with respect to this variable. In this example, the commutator is a nonzero constant, just as in the Heisenberg uncertainty relation, and yet there are states where the product of the uncertainties is zero.[20] (See the counterexample section below.) This issue can be overcome by using a variational method for the proof,[21][22] or by working with an exponentiated version of the canonical commutation relations.[20]
Note that in the general form of the Robertson–Schrödinger uncertainty relation, there is no need to assume that the operators Â and B̂ are self-adjoint operators. It suffices to assume that they are merely symmetric operators. (The distinction between these two notions is generally glossed over in the physics literature, where the term Hermitian is used for either or both classes of operators. See Chapter 9 of Hall's book[23] for a detailed discussion of this important but technical distinction.)
In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function W(x, p) with star product ★ and a function f, the following is generally true:[24] ⟨f*★f⟩ = ∫ (f*★f)·W(x, p) dx dp ≥ 0.
Choosing f = a + bx + cp, we arrive at ⟨f*★f⟩ = (a*, b*, c*)·M·(a, b, c)ᵀ ≥ 0, where M is the 3×3 matrix of star-moments with rows (1, ⟨x⟩, ⟨p⟩), (⟨x⟩, ⟨x★x⟩, ⟨x★p⟩) and (⟨p⟩, ⟨p★x⟩, ⟨p★p⟩).
Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative.
The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, det M ≥ 0, or, explicitly, after algebraic manipulation, σx²·σp² = (⟨x★x⟩ − ⟨x⟩²)·(⟨p★p⟩ − ⟨p⟩²) ≥ (⟨(x★p + p★x)/2⟩ − ⟨x⟩⟨p⟩)² + ħ²/4, which is the Robertson–Schrödinger relation for x and p.
Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.
Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation [x̂, p̂] = iħ implies the Kennard inequality from above: σx·σp ≥ ħ/2.
Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object: σ_{Ji}·σ_{Jj} ≥ (ħ/2)·|⟨Jk⟩|, where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, the choice Â = Jx, B̂ = Jy in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, ⟨Jx² + Jy² + Jz²⟩) from below and thus yields useful constraints such as j(j + 1) ≥ m(m + 1), and hence j ≥ m, among others.
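For the angular momentum relation, a small added check (assuming ħ = 1 and the standard spin-1 matrices) evaluates both sides in the multiplet state |j = 1, m = 1⟩, where the bound turns out to be saturated:

```python
import numpy as np

hbar = 1.0
s = hbar / np.sqrt(2)
# Spin-1 angular momentum matrices in the |m = +1, 0, -1> basis.
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

psi = np.array([1.0, 0.0, 0.0], dtype=complex)     # the state |j = 1, m = 1>

def ev(op):
    return (psi.conj() @ op @ psi).real

sigma = lambda J: np.sqrt(ev(J @ J) - ev(J) ** 2)
print(sigma(Jx) * sigma(Jy), ">=", 0.5 * hbar * abs(ev(Jz)))   # 0.5 >= 0.5 (saturated)
```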
The derivation of the Robertson inequality for operators Â and B̂ requires ÂB̂ψ and B̂Âψ to be defined. There are quantum systems where these conditions are not valid.[27] One example is a quantum particle on a ring, where the wave function depends on an angular variable θ in the interval [0, 2π]. Define "position" and "momentum" operators Â and B̂ by Âψ(θ) = θ·ψ(θ) and B̂ψ(θ) = −iħ·dψ/dθ, with periodic boundary conditions on B̂. The definition of Â depends on choosing θ to range from 0 to 2π. These operators satisfy the usual commutation relations for position and momentum operators, [Â, B̂] = iħ. More precisely, ÂB̂ψ − B̂Âψ = iħψ whenever both ÂB̂ψ and B̂Âψ are defined, and the space of such ψ is a dense subspace of the quantum Hilbert space.[28]
Now let ψ be any of the eigenstates of B̂, which are given by ψ(θ) = e^{inθ}. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator Â is bounded, since θ ranges over a bounded interval. Thus, in the state ψ, the uncertainty of B̂ is zero and the uncertainty of Â is finite, so that σA·σB = 0. The Robertson uncertainty principle does not apply in this case: ψ is not in the domain of the operator B̂Â, since multiplication by θ disrupts the periodic boundary conditions imposed on B̂.[20]
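This counterexample can also be checked on a grid (an added sketch with ħ = 1): for the ring eigenstate e^{inθ}, the uncertainty in the angular-derivative operator B̂ is zero while the uncertainty in θ is finite, so the product vanishes.

```python
import numpy as np

hbar = 1.0
M = 512
theta = np.linspace(0, 2 * np.pi, M, endpoint=False)
dtheta = 2 * np.pi / M

n = 3
psi = np.exp(1j * n * theta) / np.sqrt(2 * np.pi)      # eigenstate of B on the ring

prob = np.abs(psi) ** 2 * dtheta                       # uniform distribution in theta
sigma_A = np.sqrt((theta**2 * prob).sum() - ((theta * prob).sum()) ** 2)

# Apply B = -i*hbar d/dtheta spectrally, respecting the periodic boundary conditions.
k = np.fft.fftfreq(M, d=dtheta) * 2 * np.pi            # integer wavenumbers
B_psi = -1j * hbar * np.fft.ifft(1j * k * np.fft.fft(psi))
exp_B = ((psi.conj() * B_psi).sum() * dtheta).real
exp_B2 = (np.abs(B_psi) ** 2).sum() * dtheta
sigma_B = np.sqrt(max(exp_B2 - exp_B**2, 0.0))

print(sigma_A, sigma_B, sigma_A * sigma_B)             # finite, ~0, ~0  (< hbar/2)
```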
For the usual position and momentum operators x̂ and p̂ on the real line, no such counterexamples can occur. As long as σx and σp are defined in the state ψ, the Heisenberg uncertainty principle holds, even if ψ fails to be in the domain of x̂p̂ or of p̂x̂.