Multivariate normal distribution

Multivariate normal (distribution summary)
Probability density function: many sample points from a multivariate normal distribution, shown along with the 3-sigma ellipse, the two marginal distributions, and the two 1-d histograms.
Notation: X ~ N(μ, Σ)
Parameters: μ ∈ R^k (location), Σ ∈ R^(k×k) (covariance, a positive semi-definite matrix)
Support: x ∈ μ + span(Σ) ⊆ R^k
PDF: (2π)^(−k/2) det(Σ)^(−1/2) exp(−(1/2)(x − μ)^T Σ^(−1)(x − μ)), which exists only when Σ is positive-definite
Mean: μ
Mode: μ
Variance: Σ
Entropy: (1/2) ln det(2πeΣ)
MGF: exp(μ^T t + (1/2) t^T Σ t)
CF: exp(i μ^T t − (1/2) t^T Σ t)
Kullback–Leibler divergence: see § Kullback–Leibler divergence

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables, each of which clusters around a mean value.

Definitions


Notation and parametrization


The multivariate normal distribution of a k-dimensional random vector X = (X_1, …, X_k)^T can be written in the following notation:

    \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu,\, \boldsymbol\Sigma),

or to make it explicitly known that X is k-dimensional,

    \mathbf{X} \sim \mathcal{N}_k(\boldsymbol\mu,\, \boldsymbol\Sigma),

with k-dimensional mean vector

    \boldsymbol\mu = \operatorname{E}[\mathbf{X}] = (\operatorname{E}[X_1], \operatorname{E}[X_2], \ldots, \operatorname{E}[X_k])^{\mathsf T}

and k × k covariance matrix

    \Sigma_{i,j} = \operatorname{E}[(X_i - \mu_i)(X_j - \mu_j)] = \operatorname{Cov}[X_i, X_j]

such that 1 ≤ i ≤ k and 1 ≤ j ≤ k. The inverse of the covariance matrix is called the precision matrix, denoted by Q = Σ^(−1).

Standard normal random vector


A real random vector X = (X_1, …, X_k)^T is called a standard normal random vector if all of its components X_i are independent and each is a zero-mean unit-variance normally distributed random variable, i.e. if X_i ~ N(0, 1) for all i = 1, …, k.[1]: p. 454 

Centered normal random vector


A real random vector X = (X_1, …, X_k)^T is called a centered normal random vector if there exists a deterministic k × ℓ matrix A such that A Z has the same distribution as X, where Z is a standard normal random vector with ℓ components.[1]: p. 454 

Normal random vector


A real random vector X = (X_1, …, X_k)^T is called a normal random vector if there exists a random ℓ-vector Z, which is a standard normal random vector, a k-vector μ, and a k × ℓ matrix A, such that X = A Z + μ.[2]: p. 454 [1]: p. 455 

Formally:

    \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma) \quad \iff \quad \text{there exist } \boldsymbol\mu \in \mathbb{R}^k,\ \mathbf{A} \in \mathbb{R}^{k \times \ell} \text{ such that } \mathbf{X} = \mathbf{A}\mathbf{Z} + \boldsymbol\mu \text{ with } Z_n \sim \mathcal{N}(0, 1) \text{ i.i.d.}

Here the covariance matrix is Σ = A Aᵀ.

In the degenerate case where the covariance matrix is singular, the corresponding distribution has no density; see the section below for details. This case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary least squares regression. The X_i are in general not independent; they can be seen as the result of applying the matrix A to a collection of independent Gaussian variables Z.
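
The construction X = A Z + μ can be simulated directly. The following Python sketch (the matrix A, the mean vector mu and the sample size are arbitrary assumptions, not values from this article) draws standard normal vectors Z and checks that the sample covariance of X approaches A Aᵀ:

    import numpy as np

    rng = np.random.default_rng(0)
    k, ell, n = 3, 2, 200_000          # dimensions and sample size (assumed)
    A = np.array([[1.0, 0.0],
                  [0.5, 1.0],
                  [2.0, -1.0]])        # deterministic k x ell matrix (assumed)
    mu = np.array([1.0, -2.0, 0.5])    # mean vector (assumed)

    Z = rng.standard_normal((n, ell))  # rows are standard normal random vectors
    X = Z @ A.T + mu                   # X = A Z + mu, applied row-wise

    print(np.allclose(X.mean(axis=0), mu, atol=0.02))                # sample mean near mu
    print(np.allclose(np.cov(X, rowvar=False), A @ A.T, atol=0.1))   # sample covariance near A A^T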

Equivalent definitions


The following definitions are equivalent to the definition given above. A random vector X = (X_1, …, X_k)^T has a multivariate normal distribution if it satisfies one of the following equivalent conditions.

  • Every linear combination Y = a_1 X_1 + ⋯ + a_k X_k of its components is normally distributed. That is, for any constant vector a ∈ R^k, the random variable Y = aᵀX has a univariate normal distribution, where a univariate normal distribution with zero variance is a point mass on its mean.
  • There is a k-vector μ and a symmetric, positive semidefinite k × k matrix Σ, such that the characteristic function of X is

    \varphi_{\mathbf{X}}(\mathbf{u}) = \exp\left( i \mathbf{u}^{\mathsf T}\boldsymbol\mu - \tfrac{1}{2} \mathbf{u}^{\mathsf T}\boldsymbol\Sigma\mathbf{u} \right).

The spherical normal distribution can be characterised as the unique distribution where components are independent in any orthogonal coordinate system.[3][4]

Density function

Bivariate normal joint density

Non-degenerate case


The multivariate normal distribution is said to be "non-degenerate" when the symmetric covariance matrix Σ is positive definite. In this case the distribution has density[5]

    f_{\mathbf{X}}(x_1, \ldots, x_k) = \frac{\exp\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) \right)}{\sqrt{(2\pi)^k |\boldsymbol\Sigma|}}

where x is a real k-dimensional column vector and |Σ| ≡ det Σ is the determinant of Σ, also known as the generalized variance. The equation above reduces to that of the univariate normal distribution if Σ is a 1 × 1 matrix (i.e. a single real number).

The circularly symmetric version of the complex normal distribution has a slightly different form.

Each iso-density locus — the locus of points in k-dimensional space each of which gives the same particular value of the density — is an ellipse or its higher-dimensional generalization; hence the multivariate normal is a special case of the elliptical distributions.

The quantity \sqrt{(\mathbf{x} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu)} is known as the Mahalanobis distance, which represents the distance of the test point x from the mean μ. The squared Mahalanobis distance is decomposed into a sum of k terms, each term being a product of three meaningful components.[6] Note that in the case when k = 1, the distribution reduces to a univariate normal distribution and the Mahalanobis distance reduces to the absolute value of the standard score. See also Interval below.
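
For illustration, a minimal Python sketch (the mean vector, covariance matrix and test point below are arbitrary assumptions) evaluates this density directly through the Mahalanobis distance and compares the result with scipy.stats.multivariate_normal:

    import numpy as np
    from scipy.stats import multivariate_normal

    mu = np.array([0.0, 1.0])                    # assumed mean vector
    Sigma = np.array([[2.0, 0.3],
                      [0.3, 0.5]])               # assumed positive-definite covariance
    x = np.array([1.0, 2.0])                     # test point

    diff = x - mu
    maha2 = diff @ np.linalg.inv(Sigma) @ diff   # squared Mahalanobis distance
    k = len(mu)
    pdf_manual = np.exp(-0.5 * maha2) / np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))

    pdf_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
    print(pdf_manual, pdf_scipy)                 # the two values agree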

Bivariate case


In the 2-dimensional nonsingular case (k = rank(Σ) = 2), the probability density function of a vector [X Y]ᵀ is:

    f(x, y) = \frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}} \exp\left( -\frac{1}{2(1 - \rho^2)} \left[ \frac{(x - \mu_X)^2}{\sigma_X^2} - \frac{2\rho (x - \mu_X)(y - \mu_Y)}{\sigma_X \sigma_Y} + \frac{(y - \mu_Y)^2}{\sigma_Y^2} \right] \right)

where ρ is the correlation between X and Y and where σ_X > 0 and σ_Y > 0. In this case,

    \boldsymbol\mu = \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \qquad \boldsymbol\Sigma = \begin{pmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{pmatrix}.

In the bivariate case, the first equivalent condition for multivariate normality can be made less restrictive: it is sufficient to verify that a countably infinite set of distinct linear combinations of X and Y are normal in order to conclude that the vector [X Y]ᵀ is bivariate normal.[7]

The bivariate iso-density loci plotted in the x,y-plane are ellipses, whose principal axes are defined by the eigenvectors of the covariance matrix Σ (the major and minor semidiameters of the ellipse equal the square root of the ordered eigenvalues).

Figure: bivariate normal distribution centered at its mean, with a standard deviation of 3 in roughly one principal direction and of 1 in the orthogonal direction.

As the absolute value of the correlation parameter ρ increases, these loci are squeezed toward the following line:

    y(x) = \operatorname{sgn}(\rho) \frac{\sigma_Y}{\sigma_X} (x - \mu_X) + \mu_Y.

This is because this expression, with sgn(ρ) (where sgn is the sign function) replaced by ρ, is the best linear unbiased prediction of Y given a value of X.[8]

Degenerate case


If the covariance matrix Σ is not full rank, then the multivariate normal distribution is degenerate and does not have a density. More precisely, it does not have a density with respect to k-dimensional Lebesgue measure (which is the usual measure assumed in calculus-level probability courses). Only random vectors whose distributions are absolutely continuous with respect to a measure are said to have densities (with respect to that measure). To talk about densities but avoid dealing with measure-theoretic complications, it can be simpler to restrict attention to a subset of rank(Σ) of the coordinates of X such that the covariance matrix for this subset is positive definite; then the other coordinates may be thought of as an affine function of these selected coordinates.[9]

To talk about densities meaningfully in singular cases, then, we must select a different base measure. Using the disintegration theorem we can define a restriction of Lebesgue measure to the rank(Σ)-dimensional affine subspace of R^k where the Gaussian distribution is supported, i.e. {μ + Σ^(1/2) v : v ∈ R^k}. With respect to this measure the distribution has density:

    f(\mathbf{x}) = \frac{\exp\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{+} (\mathbf{x} - \boldsymbol\mu) \right)}{\sqrt{(2\pi)^{\operatorname{rank}(\boldsymbol\Sigma)} \det\nolimits^{*}(\boldsymbol\Sigma)}}

where Σ⁺ is the generalized inverse and det*(Σ) is the pseudo-determinant.[10]
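
For illustration, a hedged Python sketch (the rank-deficient covariance below is an assumed example) evaluates this density at a point of the support, using the Moore–Penrose pseudo-inverse and the pseudo-determinant, here computed as the product of the nonzero eigenvalues:

    import numpy as np

    mu = np.zeros(3)
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])                     # third coordinate equals the sum of the first two
    Sigma = A @ A.T                                # rank-2 covariance in R^3 (assumed example)

    x = mu + A @ np.array([0.5, -1.0])             # a point on the support of the distribution
    r = np.linalg.matrix_rank(Sigma)

    eigvals = np.linalg.eigvalsh(Sigma)
    pdet = np.prod(eigvals[eigvals > 1e-12])       # pseudo-determinant: product of nonzero eigenvalues
    maha2 = (x - mu) @ np.linalg.pinv(Sigma) @ (x - mu)   # uses the generalized (pseudo-)inverse

    density = np.exp(-0.5 * maha2) / np.sqrt((2 * np.pi) ** r * pdet)
    print(r, density)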

Cumulative distribution function


The notion of cumulative distribution function (cdf) in dimension 1 can be extended in two ways to the multidimensional case, based on rectangular and ellipsoidal regions.

The first way is to define the cdf F(x) of a random vector X as the probability that all components of X are less than or equal to the corresponding values in the vector x:[11]

    F(\mathbf{x}) = \mathbb{P}(X_1 \le x_1, \ldots, X_k \le x_k), \quad \text{where } \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma).

Though there is no closed form for F(x), there are a number of algorithms that estimate it numerically.[11][12]
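
SciPy ships one such numerical estimator. A short sketch (the parameters below are arbitrary assumptions) compares it against a brute-force Monte Carlo estimate of the same rectangular probability:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(1)
    mu = np.array([0.0, 0.0])
    Sigma = np.array([[1.0, 0.6],
                      [0.6, 2.0]])                 # assumed parameters
    x = np.array([0.5, 1.0])

    dist = multivariate_normal(mean=mu, cov=Sigma)
    cdf_numeric = dist.cdf(x)                      # numerical integration over the rectangular region

    samples = dist.rvs(size=500_000, random_state=rng)
    cdf_mc = np.mean(np.all(samples <= x, axis=1)) # Monte Carlo estimate of P(X1 <= 0.5, X2 <= 1.0)
    print(cdf_numeric, cdf_mc)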

Another way is to define the cdf F(r) as the probability that a sample lies inside the ellipsoid determined by its Mahalanobis distance r from the Gaussian, a direct generalization of the standard deviation.[13] In order to compute the values of this function, closed analytic formulae exist,[13] as follows:

    F(r) = \Pr\left( \sqrt{(\mathbf{X} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{-1} (\mathbf{X} - \boldsymbol\mu)} \le r \right) = \Pr\left( \chi_k^2 \le r^2 \right).

Interval


The interval for the multivariate normal distribution yields a region consisting of those vectors x satisfying

    (\mathbf{x} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) \le \chi_k^2(p).

Here x is a k-dimensional vector, μ is the known k-dimensional mean vector, Σ is the known covariance matrix and χ_k²(p) is the quantile function for probability p of the chi-squared distribution with k degrees of freedom.[14] When k = 2, the expression defines the interior of an ellipse and the chi-squared distribution simplifies to an exponential distribution with mean equal to two (rate equal to half).
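
For illustration, a Python sketch (the 95% level, the mean and the covariance below are arbitrary assumptions) checks that the fraction of samples whose squared Mahalanobis distance stays below the chi-squared quantile is close to the nominal probability:

    import numpy as np
    from scipy.stats import chi2, multivariate_normal

    rng = np.random.default_rng(2)
    mu = np.array([1.0, -1.0, 0.0])
    Sigma = np.array([[2.0, 0.4, 0.0],
                      [0.4, 1.0, 0.3],
                      [0.0, 0.3, 0.5]])            # assumed parameters
    k, p = len(mu), 0.95

    threshold = chi2.ppf(p, df=k)                  # quantile of the chi-squared distribution
    X = multivariate_normal(mu, Sigma).rvs(size=200_000, random_state=rng)

    diff = X - mu
    maha2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)  # squared Mahalanobis distances
    print(np.mean(maha2 <= threshold))             # close to 0.95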

Complementary cumulative distribution function (tail distribution)


The complementary cumulative distribution function (ccdf) or the tail distribution is defined as \overline{F}(\mathbf{x}) = 1 - \mathbb{P}(\mathbf{X} \le \mathbf{x}). When X ~ N(μ, Σ), the ccdf can be written as the probability that the maximum of dependent Gaussian variables is non-negative:[15]

    \overline{F}(\mathbf{x}) = \mathbb{P}\left( \bigcup_i \{X_i > x_i\} \right) = \mathbb{P}\left( \max_i Y_i \ge 0 \right), \quad \text{where } Y_i = X_i - x_i.

While no simple closed formula exists for computing the ccdf, the maximum of dependent Gaussian variables can be estimated accurately via the Monte Carlo method.[15][16]
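
For illustration, a Monte Carlo sketch in Python (the parameters are arbitrary assumptions) estimates the ccdf as the probability that the maximum of the shifted components is non-negative, and compares it with one minus the numerically integrated cdf:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(3)
    mu = np.array([0.0, 0.0, 0.0])
    Sigma = np.array([[1.0, 0.5, 0.2],
                      [0.5, 1.0, 0.5],
                      [0.2, 0.5, 1.0]])            # assumed correlation structure
    x = np.array([1.0, 1.5, 2.0])                  # threshold vector

    X = multivariate_normal(mu, Sigma).rvs(size=1_000_000, random_state=rng)
    Y = X - x
    ccdf_mc = np.mean(Y.max(axis=1) >= 0)                    # P(max_i (X_i - x_i) >= 0)
    ccdf_check = 1 - multivariate_normal(mu, Sigma).cdf(x)   # 1 - P(all X_i <= x_i)
    print(ccdf_mc, ccdf_check)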

Properties


Probability in different domains

Top: the probability of a bivariate normal in a given domain (blue regions). Middle: the probability of a trivariate normal in a toroidal domain. Bottom: converging Monte-Carlo integral of the probability of a 4-variate normal in a 4d regular polyhedral domain. These are all computed by the numerical method of ray-tracing.[17]

The probability content of the multivariate normal in a quadratic domain defined by q(x) = xᵀ Q₂ x + q₁ᵀ x + q₀ > 0 (where Q₂ is a matrix, q₁ is a vector, and q₀ is a scalar), which is relevant for Bayesian classification/decision theory using Gaussian discriminant analysis, is given by the generalized chi-squared distribution.[17] The probability content within any general domain defined by f(x) > 0 (where f(x) is a general scalar-valued function) can be computed using the numerical method of ray-tracing[17] (Matlab code).

Higher moments


The kth-order moments of x are given by

    \mu_{1,\ldots,N}(\mathbf{x}) \;\stackrel{\text{def}}{=}\; \mu_{r_1,\ldots,r_N}(\mathbf{x}) \;\stackrel{\text{def}}{=}\; \operatorname{E}\left[ \prod_{j=1}^{N} X_j^{r_j} \right]

where r1 + r2 + ⋯ + rN = k.

The kth-order central moments are as follows

  1. If k is odd, μ_{1,…,N}(x − μ) = 0.
  2. If k is even with k = 2λ, then

    \mu_{1,\ldots,2\lambda}(\mathbf{x} - \boldsymbol\mu) = \sum \left( \sigma_{ij} \sigma_{k\ell} \cdots \sigma_{XZ} \right)

where the sum is taken over all allocations of the set {1, …, 2λ} into λ (unordered) pairs. That is, for a sixth-order (k = 2λ = 6) central moment, one sums the products of λ = 3 covariances (the expected value μ is taken to be 0 in the interests of parsimony):

    \operatorname{E}[x_1 x_2 x_3 x_4 x_5 x_6] = \operatorname{E}[x_1 x_2]\operatorname{E}[x_3 x_4]\operatorname{E}[x_5 x_6] + \operatorname{E}[x_1 x_2]\operatorname{E}[x_3 x_5]\operatorname{E}[x_4 x_6] + \cdots \quad (15 \text{ terms in total}).

This yields (2λ − 1)!! = 1 × 3 × 5 × ⋯ × (2λ − 1) terms in the sum (15 in the above case), each being the product of λ (in this case 3) covariances. For fourth-order moments (four variables) there are three terms. For sixth-order moments there are 3 × 5 = 15 terms, and for eighth-order moments there are 3 × 5 × 7 = 105 terms.

The covariances are then determined by replacing the terms of the list [1, …, 2λ] by the corresponding terms of the list consisting of r1 ones, then r2 twos, etc. To illustrate this, examine the following 4th-order central moment case:

    \operatorname{E}\left[ x_i^4 \right] = 3\sigma_{ii}^2
    \operatorname{E}\left[ x_i^3 x_j \right] = 3\sigma_{ii}\sigma_{ij}
    \operatorname{E}\left[ x_i^2 x_j^2 \right] = \sigma_{ii}\sigma_{jj} + 2\sigma_{ij}^2
    \operatorname{E}\left[ x_i^2 x_j x_k \right] = \sigma_{ii}\sigma_{jk} + 2\sigma_{ij}\sigma_{ik}
    \operatorname{E}\left[ x_i x_j x_k x_n \right] = \sigma_{ij}\sigma_{kn} + \sigma_{ik}\sigma_{jn} + \sigma_{in}\sigma_{jk}

where σ_{ij} is the covariance of X_i and X_j. With the above method one first finds the general case for a kth moment with k different X variables, E[x_i x_j x_k x_n], and then one simplifies this accordingly. For example, for E[x_i² x_k x_n], one lets x_i = x_j and one uses the fact that σ_{ii} = σ_i².
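
For illustration, a Monte Carlo sanity check in Python of the fourth-order identity E[x_i x_j x_k x_n] = σ_ij σ_kn + σ_ik σ_jn + σ_in σ_jk for a zero-mean vector (the covariance matrix and the chosen indices are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(4)
    Sigma = np.array([[1.0, 0.5, 0.2],
                      [0.5, 2.0, 0.3],
                      [0.2, 0.3, 1.5]])            # assumed covariance, zero mean
    X = rng.multivariate_normal(np.zeros(3), Sigma, size=2_000_000)

    i, j, k, n = 0, 1, 2, 1
    empirical = np.mean(X[:, i] * X[:, j] * X[:, k] * X[:, n])
    isserlis = (Sigma[i, j] * Sigma[k, n] + Sigma[i, k] * Sigma[j, n]
                + Sigma[i, n] * Sigma[j, k])       # sum over the three pairings
    print(empirical, isserlis)                     # the two values agree up to Monte Carlo error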

Functions of a normal vector

a: Probability density of a function of a single normal variable. b: Probability density of a function of a normal vector. c: Heat map of the joint probability density of two functions of a normal vector. d: Probability density of a function of 4 iid standard normal variables. These are computed by the numerical method of ray-tracing.[17]

A quadratic form of a normal vector x, q(x) = xᵀ Q₂ x + q₁ᵀ x + q₀ (where Q₂ is a matrix, q₁ is a vector, and q₀ is a scalar), is a generalized chi-squared variable.[17] The direction of a normal vector follows a projected normal distribution.[18]

If f(x) is a general scalar-valued function of a normal vector, its probability density function, cumulative distribution function, and inverse cumulative distribution function can be computed with the numerical method of ray-tracing (Matlab code).[17]

Likelihood function


If the mean and covariance matrix are known, the log likelihood of an observed vector x is simply the log of the probability density function:

    \ln L(\mathbf{x}) = -\frac{1}{2} \left[ \ln |\boldsymbol\Sigma| + (\mathbf{x} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) + k \ln(2\pi) \right],

The circularly symmetric version of the noncentral complex case, where z is a vector of complex numbers, would be

    \ln L(\mathbf{z}) = -\ln |\boldsymbol\Sigma| - (\mathbf{z} - \boldsymbol\mu)^{\dagger} \boldsymbol\Sigma^{-1} (\mathbf{z} - \boldsymbol\mu) - k \ln(\pi)

i.e. with the conjugate transpose (indicated by †) replacing the normal transpose (indicated by ᵀ). This is slightly different than in the real case, because the circularly symmetric version of the complex normal distribution has a slightly different form for the normalization constant.

A similar notation is used for multiple linear regression.[19]

Since the log likelihood of a normal vector is a quadratic form of the normal vector, it is distributed as a generalized chi-squared variable.[17]

Differential entropy


The differential entropy of the multivariate normal distribution is[20]

    h(f) = -\int f(\mathbf{x}) \ln f(\mathbf{x}) \, d\mathbf{x} = \frac{1}{2} \ln \left| 2\pi e \boldsymbol\Sigma \right| = \frac{k}{2} + \frac{k}{2} \ln(2\pi) + \frac{1}{2} \ln |\boldsymbol\Sigma|,

where the bars denote the matrix determinant, k is the dimensionality of the vector space, and the result has units of nats.
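
For illustration, a short Python check of this entropy formula against SciPy's built-in value (the covariance below is an arbitrary assumption; both results are in nats):

    import numpy as np
    from scipy.stats import multivariate_normal

    Sigma = np.array([[2.0, 0.3],
                      [0.3, 1.0]])                 # assumed covariance
    k = Sigma.shape[0]

    sign, logdet = np.linalg.slogdet(Sigma)
    h_formula = 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)   # (k/2) ln(2 pi e) + (1/2) ln|Sigma|
    h_scipy = multivariate_normal(mean=np.zeros(k), cov=Sigma).entropy()
    print(h_formula, h_scipy)                      # the two values agree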

Kullback–Leibler divergence


The Kullback–Leibler divergence from N_0(μ_0, Σ_0) to N_1(μ_1, Σ_1), for non-singular matrices Σ1 and Σ0, is:[21]

    D_{\text{KL}}(\mathcal{N}_0 \,\|\, \mathcal{N}_1) = \frac{1}{2} \left\{ \operatorname{tr}\left( \boldsymbol\Sigma_1^{-1} \boldsymbol\Sigma_0 \right) + (\boldsymbol\mu_1 - \boldsymbol\mu_0)^{\mathsf T} \boldsymbol\Sigma_1^{-1} (\boldsymbol\mu_1 - \boldsymbol\mu_0) - k + \ln \frac{|\boldsymbol\Sigma_1|}{|\boldsymbol\Sigma_0|} \right\},

where |·| denotes the matrix determinant, tr(·) is the trace, ln(·) is the natural logarithm and k is the dimension of the vector space.

The logarithm must be taken to base e since the two terms following the logarithm are themselves base-e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by loge 2 yields the divergence in bits.

When μ_1 = μ_0, the divergence simplifies to

    D_{\text{KL}}(\mathcal{N}_0 \,\|\, \mathcal{N}_1) = \frac{1}{2} \left\{ \operatorname{tr}\left( \boldsymbol\Sigma_1^{-1} \boldsymbol\Sigma_0 \right) - k + \ln \frac{|\boldsymbol\Sigma_1|}{|\boldsymbol\Sigma_0|} \right\}.
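
For illustration, a Python sketch of the divergence formula (the parameters are arbitrary assumptions), cross-checked by a Monte Carlo estimate of E_0[ln f_0(X) − ln f_1(X)]:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(5)
    mu0, Sigma0 = np.array([0.0, 0.0]), np.array([[1.0, 0.2], [0.2, 1.0]])
    mu1, Sigma1 = np.array([0.5, -0.5]), np.array([[2.0, 0.0], [0.0, 0.5]])   # assumed parameters
    k = len(mu0)

    inv1 = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    kl_formula = 0.5 * (np.trace(inv1 @ Sigma0) + diff @ inv1 @ diff - k
                        + np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0)))

    X = multivariate_normal(mu0, Sigma0).rvs(size=500_000, random_state=rng)
    kl_mc = np.mean(multivariate_normal(mu0, Sigma0).logpdf(X)
                    - multivariate_normal(mu1, Sigma1).logpdf(X))
    print(kl_formula, kl_mc)                       # both in nats; they agree closely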

Mutual information


The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which P is the full multivariate distribution and Q is the product of the 1-dimensional marginal distributions. In the notation of the Kullback–Leibler divergence section of this article, Σ_1 is a diagonal matrix with the diagonal entries of Σ_0, and μ_1 = μ_0. The resulting formula for mutual information is:

    I(\mathbf{X}) = -\frac{1}{2} \ln |\boldsymbol\rho_0|,

where ρ_0 is the correlation matrix constructed from Σ_0.[22]

In the bivariate case the expression for the mutual information is:

    I(X; Y) = -\frac{1}{2} \ln\left( 1 - \rho^2 \right).
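
For illustration, a small numerical check in Python (ρ = 0.8 is an arbitrary assumption) showing that the general determinant formula and the bivariate special case agree:

    import numpy as np

    rho = 0.8                                      # assumed correlation
    corr = np.array([[1.0, rho],
                     [rho, 1.0]])                  # correlation matrix

    mi_general = -0.5 * np.log(np.linalg.det(corr))   # -1/2 ln|rho_0|
    mi_bivariate = -0.5 * np.log(1 - rho ** 2)        # bivariate special case
    print(mi_general, mi_bivariate)                   # identical, in nats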

Joint normality


Normally distributed and independent


If X and Y are normally distributed and independent, this implies they are "jointly normally distributed", i.e., the pair (X, Y) must have a multivariate normal distribution. However, a pair of jointly normally distributed variables need not be independent (they would only be so if uncorrelated, ρ = 0).

Two normally distributed random variables need not be jointly bivariate normal


The fact that two random variables X and Y both have a normal distribution does not imply that the pair (X, Y) has a joint normal distribution. A simple example is one in which X has a normal distribution with expected value 0 and variance 1, and Y = X if |X| > c and Y = −X if |X| ≤ c, where c is a suitably chosen positive number. There are similar counterexamples for more than two random variables. In general, such variables sum to a mixture model.[citation needed]
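
A Monte Carlo sketch in Python of this counterexample (the cutoff c = 1.54 is an assumed value, close to the one that makes X and Y uncorrelated): Y is marginally standard normal, yet X + Y equals zero with positive probability, so the sum is not normal and the pair cannot be jointly normal.

    import numpy as np
    from scipy.stats import norm, kstest

    rng = np.random.default_rng(6)
    c = 1.54                                       # assumed cutoff (roughly the value giving corr(X, Y) = 0)
    X = rng.standard_normal(1_000_000)
    Y = np.where(np.abs(X) > c, X, -X)             # Y = X if |X| > c, else Y = -X

    print(kstest(Y, norm.cdf).pvalue)              # Y is marginally standard normal (large p-value)
    print(np.corrcoef(X, Y)[0, 1])                 # near zero for this cutoff
    print(np.mean(X + Y == 0))                     # X + Y equals 0 with positive probability,
                                                   # so X + Y is not normal and (X, Y) is not jointly normal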

Correlations and independence


In general, random variables may be uncorrelated but statistically dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent. But, as pointed out just above, it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent.

Conditional distributions


If N-dimensional x is partitioned as follows

    \mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix}, \qquad \mathbf{x}_1 \in \mathbb{R}^q, \quad \mathbf{x}_2 \in \mathbb{R}^{N-q},

and accordingly μ and Σ are partitioned as follows

    \boldsymbol\mu = \begin{bmatrix} \boldsymbol\mu_1 \\ \boldsymbol\mu_2 \end{bmatrix}, \qquad \boldsymbol\Sigma = \begin{bmatrix} \boldsymbol\Sigma_{11} & \boldsymbol\Sigma_{12} \\ \boldsymbol\Sigma_{21} & \boldsymbol\Sigma_{22} \end{bmatrix},

then the distribution of x1 conditional on x2 = a is multivariate normal[23] (x1 | x2 = a) ~ N(μ̄, Σ̄) with mean

    \bar{\boldsymbol\mu} = \boldsymbol\mu_1 + \boldsymbol\Sigma_{12} \boldsymbol\Sigma_{22}^{-1} (\mathbf{a} - \boldsymbol\mu_2)

and covariance matrix

    \bar{\boldsymbol\Sigma} = \boldsymbol\Sigma_{11} - \boldsymbol\Sigma_{12} \boldsymbol\Sigma_{22}^{-1} \boldsymbol\Sigma_{21}.[24]

Here Σ_{22}^{-1} is the generalized inverse of Σ_22. The matrix Σ̄ is the Schur complement of Σ22 in Σ. That is, the equation above is equivalent to inverting the overall covariance matrix, dropping the rows and columns corresponding to the variables being conditioned upon, and inverting back to get the conditional covariance matrix.

Note that knowing that x2 = a alters the variance, though the new variance does not depend on the specific value of a; perhaps more surprisingly, the mean is shifted by Σ_{12} Σ_{22}^{-1} (a − μ_2); compare this with the situation of not knowing the value of a, in which case x1 would have distribution N_q(μ_1, Σ_{11}).

An interesting fact derived in order to prove this result is that the random vectors x_2 and y_1 = x_1 − Σ_{12} Σ_{22}^{-1} x_2 are independent.

The matrix Σ_{12} Σ_{22}^{-1} is known as the matrix of regression coefficients.
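
For illustration, a Python sketch of these conditioning formulas (the partition and the numerical values are arbitrary assumptions); it also verifies the equivalent "invert, drop, invert back" route described above:

    import numpy as np

    # Assumed 3-d example, partitioned as x1 = (first 2 coordinates), x2 = (last coordinate).
    mu = np.array([1.0, 2.0, 3.0])
    Sigma = np.array([[4.0, 1.0, 0.5],
                      [1.0, 3.0, 1.2],
                      [0.5, 1.2, 2.0]])
    q = 2
    a = np.array([2.5])                            # conditioning value for x2

    S11, S12 = Sigma[:q, :q], Sigma[:q, q:]
    S21, S22 = Sigma[q:, :q], Sigma[q:, q:]

    mu_bar = mu[:q] + S12 @ np.linalg.inv(S22) @ (a - mu[q:])      # conditional mean
    Sigma_bar = S11 - S12 @ np.linalg.inv(S22) @ S21               # Schur complement of Sigma_22

    # Equivalent route: invert the full covariance, drop the conditioned rows/columns, invert back.
    Lambda = np.linalg.inv(Sigma)
    Sigma_bar_alt = np.linalg.inv(Lambda[:q, :q])
    print(mu_bar)
    print(np.allclose(Sigma_bar, Sigma_bar_alt))   # True: the two constructions agree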

Bivariate case


In the bivariate case where x is partitioned into X_1 and X_2, the conditional distribution of X_1 given X_2 is[25]

    X_1 \mid X_2 = x_2 \;\sim\; \mathcal{N}\left( \mu_1 + \frac{\sigma_1}{\sigma_2} \rho (x_2 - \mu_2),\; (1 - \rho^2) \sigma_1^2 \right)

where ρ is the correlation coefficient between X_1 and X_2.

Bivariate conditional expectation

In the general case

The conditional expectation of X1 given X2 is:

    \operatorname{E}(X_1 \mid X_2 = x_2) = \mu_1 + \rho \frac{\sigma_1}{\sigma_2} (x_2 - \mu_2).

Proof: the result is obtained by taking the expectation of the conditional distribution above.

In the centered case with unit variances

The conditional expectation of X1 given X2 is

    \operatorname{E}(X_1 \mid X_2 = x_2) = \rho x_2

and the conditional variance is

    \operatorname{var}(X_1 \mid X_2 = x_2) = 1 - \rho^2;

thus the conditional variance does not depend on x2.

The conditional expectation of X1 given that X2 is smaller/bigger than z is:[26]: 367 

    \operatorname{E}(X_1 \mid X_2 < z) = -\rho \frac{\varphi(z)}{\Phi(z)}, \qquad \operatorname{E}(X_1 \mid X_2 > z) = \rho \frac{\varphi(z)}{1 - \Phi(z)},

where the final ratio here is called the inverse Mills ratio.

Proof: the last two results are obtained using the result E(X_1 | X_2 = x_2) = ρ x_2, so that

    \operatorname{E}(X_1 \mid X_2 < z) = \rho\, \operatorname{E}(X_2 \mid X_2 < z)

and then using the properties of the expectation of a truncated normal distribution.
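
For illustration, a Monte Carlo check in Python of these truncated conditional expectations for a centered bivariate normal with unit variances (ρ = 0.6 and z = 0.5 are arbitrary assumptions):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    rho, z = 0.6, 0.5                              # assumed correlation and threshold
    cov = np.array([[1.0, rho],
                    [rho, 1.0]])
    X = rng.multivariate_normal(np.zeros(2), cov, size=2_000_000)
    x1, x2 = X[:, 0], X[:, 1]

    mc_above = x1[x2 > z].mean()                   # E[X1 | X2 > z], estimated from samples
    formula_above = rho * norm.pdf(z) / (1 - norm.cdf(z))   # rho times the inverse Mills ratio
    mc_below = x1[x2 < z].mean()
    formula_below = -rho * norm.pdf(z) / norm.cdf(z)

    print(mc_above, formula_above)
    print(mc_below, formula_below)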

Marginal distributions


To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The proof for this follows from the definitions of multivariate normal distributions and linear algebra.[27]

Example

Let X = [X1, X2, X3] be multivariate normal random variables with mean vector μ = [μ1, μ2, μ3] and covariance matrix Σ (standard parametrization for multivariate normal distributions). Then the joint distribution of X′ = [X1, X3] is multivariate normal with mean vector μ′ = [μ1, μ3] and covariance matrix

    \boldsymbol\Sigma' = \begin{bmatrix} \Sigma_{11} & \Sigma_{13} \\ \Sigma_{31} & \Sigma_{33} \end{bmatrix}.

Affine transformation


If Y = c + BX is an affine transformation of X ~ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣBᵀ, i.e., Y ~ N(c + Bμ, BΣBᵀ). In particular, any subset of the Xi has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset (X1, X2, X4)ᵀ, use

    \mathbf{B} = \begin{bmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \end{bmatrix}

which extracts the desired elements directly.

Another corollary is that the distribution of Z = b · X, where b is a constant vector with the same number of elements as X and the dot indicates the dot product, is univariate Gaussian with Z ~ N(b · μ, bᵀΣb). This result follows by using

    \mathbf{B} = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix} = \mathbf{b}^{\mathsf T}.

Observe how the positive-definiteness of Σ implies that the variance of the dot product must be positive.

An affine transformation of X such as 2X is not the same as the sum of two independent realisations of X.
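
For illustration, a brief Python sketch of the affine-transformation property (c, B, μ and Σ below are arbitrary assumptions): samples of Y = c + BX have empirical mean and covariance close to c + Bμ and BΣBᵀ.

    import numpy as np

    rng = np.random.default_rng(8)
    mu = np.array([0.0, 1.0, -1.0])
    Sigma = np.array([[1.0, 0.2, 0.0],
                      [0.2, 2.0, 0.3],
                      [0.0, 0.3, 1.5]])            # assumed parameters of X
    c = np.array([5.0, -5.0])
    B = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, -1.0]])               # assumed 2 x 3 transformation

    X = rng.multivariate_normal(mu, Sigma, size=500_000)
    Y = c + X @ B.T                                # Y = c + B X, applied row-wise

    print(np.allclose(Y.mean(axis=0), c + B @ mu, atol=0.02))
    print(np.allclose(np.cov(Y, rowvar=False), B @ Sigma @ B.T, atol=0.1))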

Geometric interpretation


The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. affine transformations of hyperspheres) centered at the mean.[28] Hence the multivariate normal distribution is an example of the class of elliptical distributions. The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

If Σ = UΛUᵀ = UΛ^(1/2)(UΛ^(1/2))ᵀ is an eigendecomposition where the columns of U are unit eigenvectors and Λ is a diagonal matrix of the eigenvalues, then we have

    \mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma) \iff \mathbf{X} \sim \boldsymbol\mu + \mathbf{U}\boldsymbol\Lambda^{1/2}\mathcal{N}(0, \mathbf{I}) \iff \mathbf{X} \sim \boldsymbol\mu + \mathbf{U}\mathcal{N}(0, \boldsymbol\Lambda).

Moreover, U can be chosen to be a rotation matrix, as inverting an axis does not have any effect on N(0, Λ), but inverting a column changes the sign of U's determinant. The distribution N(μ, Σ) is in effect N(0, I) scaled by Λ^(1/2), rotated by U and translated by μ.

Conversely, any choice of μ, full rank matrix U, and positive diagonal entries Λi yields a non-singular multivariate normal distribution. If any Λi is zero and U is square, the resulting covariance matrix UΛUT is singular. Geometrically this means that every contour ellipsoid is infinitely thin and has zero volume in n-dimensional space, as at least one of the principal axes has length of zero; this is the degenerate case.
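
For illustration, a Python sketch of this geometric construction (μ and Σ are arbitrary assumptions): drawing N(0, I) samples, scaling by Λ^(1/2), rotating by U and translating by μ reproduces the target covariance.

    import numpy as np

    rng = np.random.default_rng(9)
    mu = np.array([1.0, -2.0])
    Sigma = np.array([[3.0, 1.0],
                      [1.0, 2.0]])                 # assumed covariance

    eigvals, U = np.linalg.eigh(Sigma)             # Sigma = U diag(eigvals) U^T
    Lam_sqrt = np.diag(np.sqrt(eigvals))

    Z = rng.standard_normal((300_000, 2))          # N(0, I) samples
    X = mu + Z @ (U @ Lam_sqrt).T                  # scale by Lambda^(1/2), rotate by U, translate by mu

    print(np.allclose(np.cov(X, rowvar=False), Sigma, atol=0.05))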

"The radius around the true mean in a bivariate normal random variable, re-written in polar coordinates (radius and angle), follows a Hoyt distribution."[29]

In one dimension the probability of finding a sample of the normal distribution in the interval μ ± σ is approximately 68.27%, but in higher dimensions the probability of finding a sample in the region of the standard deviation ellipse is lower.[30]

Dimensionality Probability
1 0.6827
2 0.3935
3 0.1987
4 0.0902
5 0.0374
6 0.0144
7 0.0052
8 0.0018
9 0.0006
10 0.0002
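
These values are the chi-squared cumulative distribution function evaluated at 1, with the dimensionality as the degrees of freedom, which can be checked with a few lines of Python:

    from scipy.stats import chi2

    for k in range(1, 11):
        # probability that a sample lies within one "standard deviation ellipse" in k dimensions
        print(k, round(chi2.cdf(1, df=k), 4))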

Statistical inference


Parameter estimation


The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is straightforward.

In short, the probability density function (pdf) of a multivariate normal is

    f(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^k |\boldsymbol\Sigma|}} \exp\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol\mu)^{\mathsf T} \boldsymbol\Sigma^{-1} (\mathbf{x} - \boldsymbol\mu) \right)

and the ML estimator of the covariance matrix from a sample of n observations is[31]

    \widehat{\boldsymbol\Sigma} = \frac{1}{n} \sum_{i=1}^{n} (\mathbf{x}_i - \overline{\mathbf{x}})(\mathbf{x}_i - \overline{\mathbf{x}})^{\mathsf T},

which is simply the sample covariance matrix. This is a biased estimator whose expectation is

    \operatorname{E}\left[ \widehat{\boldsymbol\Sigma} \right] = \frac{n-1}{n} \boldsymbol\Sigma.

An unbiased sample covariance is

    \widehat{\boldsymbol\Sigma} = \frac{1}{n-1} \sum_{i=1}^{n} (\mathbf{x}_i - \overline{\mathbf{x}})(\mathbf{x}_i - \overline{\mathbf{x}})^{\mathsf T} = \frac{1}{n-1} \mathbf{X}^{\mathsf T} \left( \mathbf{I} - \frac{1}{n} \mathbf{J} \right) \mathbf{X}

(matrix form; I is the n × n identity matrix, J is an n × n matrix of ones; the term in parentheses is thus the centering matrix).
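
For illustration, a short Python sketch comparing the biased maximum-likelihood estimator (divisor n) with the unbiased sample covariance (divisor n − 1) on simulated data (the true parameters and sample size are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(10)
    mu = np.array([0.0, 1.0])
    Sigma = np.array([[1.0, 0.4],
                      [0.4, 0.8]])                 # assumed true parameters
    n = 500
    X = rng.multivariate_normal(mu, Sigma, size=n)

    xbar = X.mean(axis=0)
    centered = X - xbar
    Sigma_ml = centered.T @ centered / n                # maximum-likelihood (biased) estimator
    Sigma_unbiased = centered.T @ centered / (n - 1)    # unbiased sample covariance

    print(Sigma_ml)
    print(Sigma_unbiased)
    print(np.allclose(Sigma_unbiased, np.cov(X, rowvar=False)))   # np.cov uses the n - 1 divisor by default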

The Fisher information matrix for estimating the parameters of a multivariate normal distribution has a closed form expression. This can be used, for example, to compute the Cramér–Rao bound for parameter estimation in this setting. See Fisher information for more details.

Bayesian inference


In Bayesian statistics, the conjugate prior of the mean vector is another multivariate normal distribution, and the conjugate prior of the covariance matrix is an inverse-Wishart distribution W⁻¹. Suppose then that n observations have been made

    \mathbf{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)

and that a conjugate prior has been assigned, where

    p(\boldsymbol\mu, \boldsymbol\Sigma) = p(\boldsymbol\mu \mid \boldsymbol\Sigma)\, p(\boldsymbol\Sigma),

where

    p(\boldsymbol\mu \mid \boldsymbol\Sigma) \sim \mathcal{N}\left( \boldsymbol\mu_0, \tfrac{1}{m}\boldsymbol\Sigma \right)

and

    p(\boldsymbol\Sigma) \sim \mathcal{W}^{-1}(\boldsymbol\Psi, n_0).

Then[31]

    p(\boldsymbol\mu \mid \boldsymbol\Sigma, \mathbf{X}) \sim \mathcal{N}\left( \frac{n\bar{\mathbf{x}} + m\boldsymbol\mu_0}{n + m}, \tfrac{1}{n + m}\boldsymbol\Sigma \right),
    p(\boldsymbol\Sigma \mid \mathbf{X}) \sim \mathcal{W}^{-1}\left( \boldsymbol\Psi + n\mathbf{S} + \frac{nm}{n + m}(\bar{\mathbf{x}} - \boldsymbol\mu_0)(\bar{\mathbf{x}} - \boldsymbol\mu_0)^{\mathsf T},\; n + n_0 \right),

where

    \bar{\mathbf{x}} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{x}_i, \qquad \mathbf{S} = \frac{1}{n}\sum_{i=1}^{n} (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^{\mathsf T}.
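
For illustration, a hedged Python sketch of this conjugate update under the parametrization above (the prior hyperparameters and the simulated data are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(11)
    # Simulated data (assumed "true" parameters)
    X = rng.multivariate_normal([1.0, -1.0], [[1.0, 0.3], [0.3, 0.5]], size=200)
    n, k = X.shape

    # Assumed prior hyperparameters: mu | Sigma ~ N(mu0, Sigma / m), Sigma ~ InvWishart(Psi, n0)
    mu0 = np.zeros(k)
    m = 1.0
    Psi = np.eye(k)
    n0 = k + 2

    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar) / n

    # Posterior hyperparameters
    mu_post = (n * xbar + m * mu0) / (n + m)       # posterior mean of mu (given Sigma)
    m_post = n + m                                 # posterior pseudo-count
    Psi_post = Psi + n * S + (n * m / (n + m)) * np.outer(xbar - mu0, xbar - mu0)
    n0_post = n + n0                               # posterior inverse-Wishart degrees of freedom

    print(mu_post)
    print(Psi_post / (n0_post - k - 1))            # posterior expectation of Sigma under the inverse-Wishart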

Multivariate normality tests


Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data set is similar to the normal distribution, therefore a sufficiently small p-value indicates non-normal data. Multivariate normality tests include the Cox–Small test[32] and Smith and Jain's adaptation[33] of the Friedman–Rafsky test created by Larry Rafsky and Jerome Friedman.[34]

Mardia's test[35] is based on multivariate extensions of skewness and kurtosis measures. For a sample {x1, ..., xn} of k-dimensional vectors we compute

    \widehat{\boldsymbol\Sigma} = \frac{1}{n} \sum_{j=1}^{n} (\mathbf{x}_j - \bar{\mathbf{x}})(\mathbf{x}_j - \bar{\mathbf{x}})^{\mathsf T}
    A = \frac{1}{6n} \sum_{i=1}^{n} \sum_{j=1}^{n} \left[ (\mathbf{x}_i - \bar{\mathbf{x}})^{\mathsf T} \widehat{\boldsymbol\Sigma}^{-1} (\mathbf{x}_j - \bar{\mathbf{x}}) \right]^3
    B = \sqrt{\frac{n}{8k(k+2)}} \left\{ \frac{1}{n} \sum_{i=1}^{n} \left[ (\mathbf{x}_i - \bar{\mathbf{x}})^{\mathsf T} \widehat{\boldsymbol\Sigma}^{-1} (\mathbf{x}_i - \bar{\mathbf{x}}) \right]^2 - k(k+2) \right\}

Under the null hypothesis of multivariate normality, the statistic A will have approximately a chi-squared distribution with 1/6k(k + 1)(k + 2) degrees of freedom, and B will be approximately standard normal N(0,1).
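
For illustration, a hedged Python sketch of these two statistics on a simulated normal sample (the sample size and dimension are arbitrary assumptions):

    import numpy as np
    from scipy.stats import chi2, norm

    rng = np.random.default_rng(12)
    n, k = 500, 3
    X = rng.multivariate_normal(np.zeros(k), np.eye(k), size=n)   # data that should pass the test

    xbar = X.mean(axis=0)
    D = X - xbar
    Sigma_hat = D.T @ D / n
    G = D @ np.linalg.inv(Sigma_hat) @ D.T         # G[i, j] = (x_i - xbar)^T Sigma^{-1} (x_j - xbar)

    A = (G ** 3).sum() / (6 * n)                   # skewness statistic
    b2 = (np.diag(G) ** 2).mean()                  # multivariate kurtosis
    B = np.sqrt(n / (8 * k * (k + 2))) * (b2 - k * (k + 2))   # kurtosis statistic

    df = k * (k + 1) * (k + 2) / 6
    print("skewness p-value:", 1 - chi2.cdf(A, df=df))        # approx. chi-squared under normality
    print("kurtosis p-value:", 2 * (1 - norm.cdf(abs(B))))    # approx. standard normal under normality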

Mardia's kurtosis statistic is skewed and converges very slowly to the limiting normal distribution. For medium size samples, the parameters of the asymptotic distribution of the kurtosis statistic are modified.[36] For small sample tests, empirical critical values are used. Tables of critical values for both statistics are given by Rencher[37] for k = 2, 3, 4.

Mardia's tests are affine invariant but not consistent. For example, the multivariate skewness test is not consistent against symmetric non-normal alternatives.[38]

The BHEP test[39] computes the norm of the difference between the empirical characteristic function and the theoretical characteristic function of the normal distribution. Calculation of the norm is performed in the L2(μ) space of square-integrable functions with respect to a Gaussian weighting function μ_β. The test statistic is