## Expected Value

The expected value is a weighted average of all possible values in a data set.

### Learning Objectives

Compute the expected value and explain its applications and relationship to the law of large numbers

### Key Takeaways

#### Key Points

- The expected value refers, intuitively, to the value of a random variable one would “expect” to find if one could repeat the random variable process an infinite number of times and take the average of the values obtained.
- The intuitive explanation of the expected value above is a consequence of the law of large numbers: the expected value, when it exists, is almost surely the limit of the sample mean as the sample size grows to infinity.
- From a rigorous theoretical standpoint, the expected value of a continuous variable is the integral of the random variable with respect to its probability measure.

#### Key Terms

**random variable**: a quantity whose value is random and to which a probability distribution is assigned, such as the possible outcome of a roll of a die

**integral**: the limit of the sums computed in a process in which the domain of a function is divided into small subsets and a possibly nominal value of the function on each subset is multiplied by the measure of that subset, all these products then being summed

**weighted average**: an arithmetic mean of values biased according to agreed weightings

In probability theory, the expected value refers, intuitively, to the value of a random variable one would “expect” to find if one could repeat the random variable process an infinite number of times and take the average of the values obtained. More formally, the expected value is a weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its assigned weight, and the resulting products are then added together to find the expected value.

The weights used in computing this average are the probabilities in the case of a discrete random variable (that is, a random variable that can take on only a finite or countably infinite number of values, such as a roll of a pair of dice), or the values of a probability density function in the case of a continuous random variable (that is, a random variable that can assume a continuum of values, such as the height of a person).

From a rigorous theoretical standpoint, the expected value of a continuous variable is the integral of the random variable with respect to its probability measure. Since probability can never be negative (although it can be zero), one can intuitively understand this as the area under the curve of the graph of the values of a random variable multiplied by the probability of that value. Thus, for a continuous random variable the expected value is the limit of the weighted sum, i.e. the integral.
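For a discrete random variable, the weighted-average definition translates directly into code. The sketch below computes the expected value of a fair six-sided die roll; the helper name `expected_value` is illustrative, not from the text.

```python
# Expected value of a discrete random variable as a probability-weighted sum.

def expected_value(values, probabilities):
    """Return the probability-weighted average of the given values."""
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(v * p for v, p in zip(values, probabilities))

# A fair six-sided die: each face 1..6 has probability 1/6.
faces = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
print(expected_value(faces, probs))  # 3.5
```

Note that 3.5 is not a value the die can actually show; the expected value is a long-run average, not a typical outcome.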

### Simple Example

Suppose we have a random variable [latex]\text{X}[/latex], which represents the number of girls in a family of three children. Without too much effort, you can compute the following probabilities:

[latex]\text{P}[\text{X}=0] = 0.125 \quad \text{P}[\text{X}=1] = 0.375 \quad \text{P}[\text{X}=2] = 0.375 \quad \text{P}[\text{X}=3] = 0.125[/latex]

The expected value of [latex]\text{X}[/latex], [latex]\text{E}[\text{X}][/latex], is computed as:

[latex]\displaystyle \begin{align} \text{E}[\text{X}] &= \sum_{\text{x}=0}^3 \text{x}\,\text{P}[\text{X}=\text{x}] \\ &= 0\cdot 0.125 + 1\cdot 0.375 + 2\cdot 0.375 + 3\cdot 0.125 \\ &= 1.5 \end{align}[/latex]

This calculation can be easily generalized to more complicated situations. Suppose that a rich uncle plans to give you a bonus of $1,000, plus $500 for each girl in your family of three children. The formula for the bonus is:

[latex]\text{Y} = 1000 + 500\text{X}[/latex]

What is your expected bonus?

[latex]\displaystyle \begin{align} \text{E}[1000 + 500\text{X}] &= \sum_{\text{x}=0}^3 (1000 + 500\text{x})\,\text{P}[\text{X}=\text{x}] \\ &= 1000\cdot 0.125 + 1500\cdot 0.375 + 2000\cdot 0.375 + 2500\cdot 0.125 \\ &= 1750 \end{align}[/latex]

We could have calculated the same value by taking the expected number of children and plugging it into the equation:

[latex]\text{E}[1000+500\text{X}] = 1000 + 500\,\text{E}[\text{X}] = 1000 + 500 \cdot 1.5 = 1750[/latex]
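The agreement between the two calculations (summing over outcomes versus plugging in the expected number of girls) is an instance of linearity of expectation. A minimal check, using the probabilities from the text:

```python
# Verifying linearity of expectation, E[a + bX] = a + b*E[X], with the
# three-children example: X = number of girls.

probs = {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}  # P[X = x]

e_x = sum(x * p for x, p in probs.items())                        # E[X]
e_y_direct = sum((1000 + 500 * x) * p for x, p in probs.items())  # E[1000 + 500X]
e_y_linear = 1000 + 500 * e_x                                     # by linearity

print(e_x, e_y_direct, e_y_linear)  # 1.5 1750.0 1750.0
```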

### Expected Value and the Law of Large Numbers

The intuitive explanation of the expected value above is a consequence of the law of large numbers: the expected value, when it exists, is almost surely the limit of the sample mean as the sample size grows to infinity. More informally, it can be interpreted as the long-run average of the results of many independent repetitions of an experiment (e.g. a dice roll). The value may not be expected in the ordinary sense—the “expected value” itself may be unlikely or even impossible (such as having 2.5 children), as is also the case with the sample mean.
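The convergence of the sample mean to the expected value can be seen in a small simulation. The sketch below (with an arbitrary fixed seed for reproducibility) averages increasingly many fair-die rolls and watches the running mean approach 3.5:

```python
# Illustrating the law of large numbers: the sample mean of fair-die rolls
# approaches the expected value 3.5 as the number of rolls grows.
import random

random.seed(42)  # arbitrary fixed seed so the run is reproducible

def sample_mean_of_rolls(n):
    """Average of n simulated rolls of a fair six-sided die."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, sample_mean_of_rolls(n))
```

With only 10 rolls the mean can land far from 3.5; by 100,000 rolls it is typically within a few hundredths.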

### Uses and Applications

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods.

The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a “good” estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator—that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann-Morgenstern utility function.

## Standard Error

The standard error is the standard deviation of the sampling distribution of a statistic.

### Learning Objectives

Paraphrase the standard error, the standard error of the mean, corrections to the standard error, and the relative standard error.

### Key Takeaways

#### Key Points

- The standard error of the mean (SEM) is the standard deviation of the sample mean's estimate of a population mean.
- SEM is usually estimated by the sample estimate of the population standard deviation (sample standard deviation) divided by the square root of the sample size.
- The standard error and the standard deviation of small samples tend to systematically underestimate the population standard error and standard deviation.
- When the sampling fraction is large (approximately at 5% or more), the estimate of the error must be corrected by multiplying by a “finite population correction” to account for the added precision gained by sampling close to a larger percentage of the population.
- If values of the measured quantity [latex]\text{A}[/latex] are not statistically independent, an unbiased estimate of the true standard error of the mean may be obtained by multiplying the calculated standard error of the sample by the factor [latex]\text{f}[/latex].
- The relative standard error (RSE) is simply the standard error divided by the mean and expressed as a percentage.

#### Key Terms

**regression**: an analytic method to measure the association of one or more independent variables with a dependent variable

**correlation**: one of the several measures of the linear statistical relationship between two random variables, indicating both the strength and direction of the relationship

Quite simply, the standard error is the standard deviation of the sampling distribution of a statistic. The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate. For example, the sample mean is the usual estimator of a population mean. However, different samples drawn from that same population would in general have different values of the sample mean. The standard error of the mean (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples (of a given size) drawn from the population. Secondly, the standard error of the mean can refer to an estimate of that standard deviation, computed from the sample of data being analyzed at the time.

In regression analysis, the term “standard error” is also used in the phrase standard error of the regression to mean the ordinary least squares estimate of the standard deviation of the underlying errors.

### Standard Error of the Mean

As mentioned, the standard error of the mean (SEM) is the standard deviation of the sample-mean’s estimate of a population mean. It can also be viewed as the standard deviation of the error in the sample mean relative to the true mean, since the sample mean is an unbiased estimator. SEM is usually estimated by the sample estimate of the population standard deviation (sample standard deviation) divided by the square root of the sample size (assuming statistical independence of the values in the sample):

[latex]\displaystyle \text{SE}_{\bar{\text{x}}} = \frac{\text{s}}{\sqrt{\text{n}}}[/latex]

where:

- [latex]\text{s}[/latex] is the sample standard deviation (i.e., the sample-based estimate of the standard deviation of the population), and
- [latex]\text{n}[/latex] is the size (number of observations) of the sample.

This estimate may be compared with the formula for the true standard deviation of the sample mean:

[latex]\displaystyle \text{SD}_{\bar{\text{x}}} = \frac{\sigma}{\sqrt{\text{n}}}[/latex]

The standard error and the standard deviation of small samples tend to systematically underestimate the population standard error and standard deviation. This is because the sample standard deviation is a biased estimator of the population standard deviation, and the bias is largest for small samples. Note also the square-root dependence on sample size: decreasing the uncertainty in a mean value estimate by a factor of two requires acquiring four times as many observations in the sample, and decreasing the standard error by a factor of ten requires a hundred times as many observations.
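The SEM formula above is easy to compute directly. The sketch below uses a small made-up sample and the usual (n − 1)-denominator sample standard deviation:

```python
# Standard error of the mean, SE = s / sqrt(n), for a small hypothetical sample.
import math

def standard_error(sample):
    """Sample standard deviation (n - 1 denominator) divided by sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # sample SD
    return s / math.sqrt(n)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative data
print(standard_error(data))  # about 0.756
```

Doubling the precision of this estimate would require roughly four times as many observations, since the denominator grows only as the square root of n.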

### Standard Error Versus Standard Deviation

The standard error and standard deviation are often considered interchangeable. However, while the mean and standard deviation are descriptive statistics, the mean and standard error describe bounds on a random sampling process. Despite the small difference in equations for the standard deviation and the standard error, this small difference changes the meaning of what is being reported from a description of the variation in measurements to a probabilistic statement about how the number of samples will provide a better bound on estimates of the population mean. Put simply, standard error is an estimate of how close to the population mean your sample mean is likely to be, whereas standard deviation is the degree to which individuals within the sample differ from the sample mean.

### Correction for Finite Population

The formula given above for the standard error assumes that the sample size is much smaller than the population size, so that the population can be considered to be effectively infinite in size. When the sampling fraction is large (approximately at 5% or more), the estimate of the error must be corrected by multiplying by a “finite population correction” to account for the added precision gained by sampling close to a larger percentage of the population. The formula for the FPC is as follows:

[latex]\displaystyle \text{FPC} = \sqrt{\frac{\text{N}-\text{n}}{\text{N}-1}}[/latex]

The effect of the FPC is that the error becomes zero when the sample size [latex]\text{n}[/latex] is equal to the population size [latex]\text{N}[/latex].
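The FPC formula can be sketched directly; the example population and sample sizes below are illustrative, not from the text:

```python
# Finite population correction: the usual standard error is multiplied by
# FPC = sqrt((N - n) / (N - 1)) when the sampling fraction n/N is large.
import math

def fpc(population_size, sample_size):
    """Finite population correction factor for a sample of n from N units."""
    return math.sqrt((population_size - sample_size) / (population_size - 1))

print(fpc(200, 50))   # about 0.87 -- sampling a quarter of the population
print(fpc(200, 200))  # 0.0 -- no sampling error when the whole population is measured
```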

### Correction for Correlation In the Sample

If values of the measured quantity [latex]\text{A}[/latex] are not statistically independent but have been obtained from known locations in parameter space [latex]\text{x}[/latex], an unbiased estimate of the true standard error of the mean may be obtained by multiplying the calculated standard error of the sample by the factor [latex]\text{f}[/latex]:

[latex]\displaystyle \text{f}=\sqrt{\frac{1+\rho}{1-\rho}}[/latex]

where the sample bias coefficient [latex]\rho[/latex] is the widely used Prais–Winsten estimate of the autocorrelation coefficient (a quantity between [latex]-1[/latex] and [latex]1[/latex]) for all sample point pairs. This approximate formula is for moderate to large sample sizes and works for positive and negative [latex]\rho[/latex] alike.
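The direction of the correction is worth seeing numerically: positive autocorrelation inflates the true standard error (the sample contains less independent information than its size suggests), while negative autocorrelation deflates it. A small sketch, with illustrative values of the bias coefficient:

```python
# Correction factor f = sqrt((1 + rho) / (1 - rho)) for autocorrelated samples.
import math

def correlation_correction(rho):
    """Multiplier applied to the naive standard error of the sample."""
    assert -1 < rho < 1, "rho must lie strictly between -1 and 1"
    return math.sqrt((1 + rho) / (1 - rho))

print(correlation_correction(0.0))   # 1.0 -- independence leaves the SE unchanged
print(correlation_correction(0.5))   # about 1.73 -- positive correlation inflates the SE
print(correlation_correction(-0.5))  # about 0.58 -- negative correlation deflates it
```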

### Relative Standard Error

The relative standard error (RSE) is simply the standard error divided by the mean and expressed as a percentage. For example, consider two surveys of household income that both result in a sample mean of $50,000. If one survey has a standard error of $10,000 and the other has a standard error of $5,000, then the relative standard errors are 20% and 10% respectively. The survey with the lower relative standard error has a more precise measurement since there is less variance around the mean. In fact, data organizations often set reliability standards that their data must reach before publication. For example, the U.S. National Center for Health Statistics typically does not report an estimate if the relative standard error exceeds 30%.
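The two-survey comparison above reduces to a one-line calculation:

```python
# Relative standard error: RSE = (standard error / mean) * 100%,
# using the two household-income surveys from the text.

def relative_standard_error(se, mean):
    """Standard error expressed as a percentage of the mean."""
    return 100.0 * se / mean

print(relative_standard_error(10_000, 50_000))  # 20.0
print(relative_standard_error(5_000, 50_000))   # 10.0
```

The second survey's smaller RSE marks it as the more precise of the two, even though both report the same mean.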

Source: Statistics