Probability of error

In statistics, the term "error" arises in two ways. Firstly, it arises in the context of decision making, where the probability of error is the probability of making a wrong decision, a probability that takes a different value for each type of error. Secondly, it arises in the context of statistical modelling (for example regression), where the model's predicted value may differ from the observed outcome and where the term probability of error may refer to the probabilities of various amounts of error occurring.

Hypothesis testing


In hypothesis testing in statistics, two types of error are distinguished.

  • Type I errors which consist of rejecting a null hypothesis that is true; this amounts to a false positive result.
  • Type II errors which consist of failing to reject a null hypothesis that is false; this amounts to a false negative result.[1]

The probability of error is similarly distinguished.

  • For a Type I error, it is denoted α (alpha) and is known as the size of the test; it equals 1 minus the specificity of the test. This quantity is also referred to as the level of significance (LOS) of the test, while 1 − α is known as the confidence level.
  • For a Type II error, it is shown as β (beta) and is 1 minus the power or 1 minus the sensitivity of the test.[citation needed]
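The relationship between α and β can be illustrated numerically. The sketch below, using Python's standard-library `statistics.NormalDist`, assumes a hypothetical one-sided z-test with illustrative values (H0: μ = 0, H1: μ = 1, known σ = 1, n = 25); these numbers are not from the article.

```python
from statistics import NormalDist

# Hypothetical one-sided z-test: H0: mu = 0 vs H1: mu = 1,
# with known sigma = 1 and sample size n = 25 (illustrative values).
alpha = 0.05           # chosen Type I error probability (size of the test)
sigma, n = 1.0, 25
se = sigma / n ** 0.5  # standard error of the sample mean

# Critical value: reject H0 when the sample mean exceeds this threshold.
critical = NormalDist(0, se).inv_cdf(1 - alpha)

# Type II error probability: the chance the sample mean stays below the
# threshold when H1 (mu = 1) is actually true.
beta = NormalDist(1, se).cdf(critical)
power = 1 - beta       # the power (sensitivity) of the test

print(f"alpha = {alpha}, beta = {beta:.6f}, power = {power:.6f}")
```

With these values the test is powerful: β is well under 0.001, so a true alternative of μ = 1 would almost always be detected.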

Statistical and econometric modelling


The fitting of many models in statistics and econometrics usually seeks to minimise the difference between observed and predicted or theoretical values. This difference is known as an error, though when observed it would be better described as a residual.

The error is taken to be a random variable and as such has a probability distribution. This distribution can be used to calculate the probability that an error falls within any given range.
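As a minimal sketch of this idea, suppose the error is assumed to be normally distributed with mean 0 and standard deviation 2 (hypothetical values, not from the article). The probability that the error lies in a given range follows directly from the cumulative distribution function:

```python
from statistics import NormalDist

# Assumed error distribution: normal with mean 0, standard deviation 2
# (hypothetical values chosen for illustration).
error = NormalDist(mu=0, sigma=2)

# Probability that the error lies within a given range, say [-3, 3]:
p = error.cdf(3) - error.cdf(-3)
print(f"P(-3 <= error <= 3) = {p:.4f}")
```

Any other assumed error distribution could be substituted in the same way; only the form of the CDF changes.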

References

  1. ^ "Type I Error and Type II Error - Experimental Errors in Research". explorable.com. Retrieved 2024-02-29.