## When to Use These Tests

“Ranking” refers to the data transformation in which numerical or ordinal values are replaced by their rank when the data are sorted.

### Learning Objectives

Indicate why and how data transformation is performed and how this relates to ranked data.

### Key Takeaways

#### Key Points

- Data transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.
- Guidance for how data should be transformed, or whether a transform should be applied at all, should come from the particular statistical analysis to be performed.
- When there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval.
- Data can also be transformed to make them easier to visualize.
- A final reason that data can be transformed is to improve interpretability, even if no formal statistical analysis or visualization is to be performed.

#### Key Terms

**confidence interval**: A type of interval estimate of a population parameter used to indicate the reliability of an estimate.

**data transformation**: The application of a deterministic mathematical function to each point in a data set.

**central limit theorem**: The theorem that states: If the sum of independent identically distributed random variables has a finite variance, then it will be (approximately) normally distributed.

In statistics, “ranking” refers to the data transformation in which numerical or ordinal values are replaced by their rank when the data are sorted. If, for example, the numerical data 3.4, 5.1, 2.6, 7.3 are observed, the ranks of these data items would be 2, 3, 1, and 4, respectively. In another example, the ordinal data hot, cold, warm would be replaced by 3, 1, 2. In these examples, the ranks are assigned to values in ascending order (in some other cases, descending ranks are used). Ranks are related to the indexed list of order statistics, which consists of the original dataset rearranged into ascending order.
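The ascending-rank transformation above can be sketched in a few lines of Python. This is a minimal version that assumes the values are distinct (tie handling is discussed below); the function name is illustrative:

```python
def ranks(data):
    """Replace each value by its 1-based rank in ascending order.

    Minimal sketch: assumes the values are all distinct (no ties).
    """
    sorted_vals = sorted(data)
    return [sorted_vals.index(x) + 1 for x in data]

# The example from the text: 3.4, 5.1, 2.6, 7.3 get ranks 2, 3, 1, 4.
print(ranks([3.4, 5.1, 2.6, 7.3]))  # [2, 3, 1, 4]
```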

Some kinds of statistical tests employ calculations based on ranks. Examples include:

- Friedman test
- Kruskal-Wallis test
- Rank products
- Spearman’s rank correlation coefficient
- Wilcoxon rank-sum test
- Wilcoxon signed-rank test

Some ranks can have non-integer values for tied data values. For example, when there is an even number of copies of the same data value, the fractional statistical rank described above ends in [latex]\frac{1}{2}[/latex].
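The fractional (tie-averaged) ranking can be sketched as follows; tied values share the average of the ranks they would otherwise span, so an even number of tied copies yields a rank ending in 1/2:

```python
def fractional_ranks(data):
    """Assign ascending ranks; tied values share the average of the
    ranks they would otherwise span."""
    s = sorted(data)
    result = []
    for x in data:
        first = s.index(x) + 1           # rank of the first tied copy
        last = first + s.count(x) - 1    # rank of the last tied copy
        result.append((first + last) / 2)
    return result

# Two copies of the value 2 span ranks 2 and 3, so each gets 2.5:
print(fractional_ranks([1, 2, 2, 3]))  # [1.0, 2.5, 2.5, 4.0]
```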

### Data Transformation

Data transformation refers to the application of a deterministic mathematical function to each point in a data set; that is, each data point [latex]\text{z}_\text{i}[/latex] is replaced with the transformed value [latex]\text{y}_\text{i} = \text{f}(\text{z}_\text{i})[/latex], where [latex]\text{f}[/latex] is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.

Nearly always, the function that is used to transform the data is invertible and, generally, is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on peoples’ incomes in some currency unit, it would be common to transform each person’s income value by the logarithm function.

### Reasons for Transforming Data

Guidance for how data should be transformed, or whether a transform should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large.

However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.
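The transform-then-back-transform recipe above can be sketched with a logarithmic transformation. This is a minimal illustration, not a substitute for a proper analysis: the data values and the function name are made up, and the factor 2 is the normal-approximation constant discussed in the text:

```python
import math
from statistics import mean, stdev

def approx_ci_via_log(data):
    """Sketch of the approach described above: log-transform skewed
    positive data, take mean +/- 2 standard errors on the log scale,
    then map the interval endpoints back with the inverse transform."""
    logs = [math.log(x) for x in data]
    m = mean(logs)
    se = stdev(logs) / math.sqrt(len(logs))
    return math.exp(m - 2 * se), math.exp(m + 2 * se)

# Illustrative right-skewed sample (a few large values, many small ones):
low, high = approx_ci_via_log([1.2, 0.8, 2.5, 1.1, 6.3, 0.9, 3.4, 1.7])
```

Note that back-transforming the endpoints gives an interval on the original scale that is centered on the geometric rather than the arithmetic mean; that is a consequence of the transformation, not an error.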

Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g., square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph’s area. Simply rescaling units (e.g., to thousands of square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.

A final reason that data can be transformed is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as “kilometers per liter” or “miles per gallon.” However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by the reciprocal function, yielding liters per kilometer, or gallons per mile.
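A small worked example makes the point about the reciprocal transform concrete. The annual distance and fuel-economy figures below are purely illustrative:

```python
# Reciprocal transform of fuel economy: km/L -> L/km.
# On the transformed scale, the extra fuel used per year is just
# distance times the difference in L/km (figures are illustrative).
annual_km = 15000
car_a = 20.0  # km per liter (more economical car)
car_b = 12.0  # km per liter (less economical car)

extra_liters = annual_km * (1 / car_b - 1 / car_a)
print(extra_liters)  # 500.0 extra liters per year for car B
```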

## Mann-Whitney U-Test

The Mann–Whitney [latex]\text{U}[/latex]-test is a non-parametric test of the null hypothesis that two populations are the same against an alternative hypothesis.

### Learning Objectives

Compare the Mann-Whitney [latex]\text{U}[/latex]-test to Student’s [latex]\text{t}[/latex]-test

### Key Takeaways

#### Key Points

- Mann-Whitney has greater efficiency than the [latex]\text{t}[/latex]-test on non-normal distributions, such as a mixture of normal distributions, and it is nearly as efficient as the [latex]\text{t}[/latex]-test on normal distributions.
- The test involves the calculation of a statistic, usually called [latex]\text{U}[/latex], whose distribution under the null hypothesis is known.
- The first method to calculate [latex]\text{U}[/latex] involves choosing the sample which has the smaller ranks, then counting the number of observations in the other sample that have smaller ranks, then summing these counts.
- The second method involves adding up the ranks for the observations which came from sample 1. The sum of ranks in sample 2 is now determinate, since the sum of all the ranks equals [latex]\frac{\text{N}(\text{N}+1)}{2}[/latex], where [latex]\text{N}[/latex] is the total number of observations.

#### Key Terms

**tie**: One or more equal values or sets of equal values in the data set.

**ordinal data**: A statistical data type consisting of numerical scores that exist on an ordinal scale, i.e. an arbitrary numerical scale where the exact numerical quantity of a particular value has no significance beyond its ability to establish a ranking over a set of data points.

The Mann–Whitney [latex]\text{U}[/latex]-test is a non-parametric test of the null hypothesis that two populations are the same against an alternative hypothesis, especially that a particular population tends to have larger values than the other. It has greater efficiency than the [latex]\text{t}[/latex]-test on non-normal distributions, such as a mixture of normal distributions, and it is nearly as efficient as the [latex]\text{t}[/latex]-test on normal distributions.

### Assumptions and Formal Statement of Hypotheses

Although Mann and Whitney developed the test under the assumption of continuous responses with the alternative hypothesis being that one distribution is stochastically greater than the other, there are many other ways to formulate the null and alternative hypotheses such that the test will give a valid test. A very general formulation is to assume that:

- All the observations from both groups are independent of each other.
- The responses are ordinal (i.e., one can at least say of any two observations which is the greater).
- The distributions of both groups are equal under the null hypothesis, so that the probability of an observation from one population ([latex]\text{X}[/latex]) exceeding an observation from the second population ([latex]\text{Y}[/latex]) equals the probability of an observation from [latex]\text{Y}[/latex] exceeding an observation from [latex]\text{X}[/latex]. That is, there is a symmetry between populations with respect to the probability of randomly drawing a larger observation.
- Under the alternative hypothesis, the probability of an observation from one population ([latex]\text{X}[/latex]) exceeding an observation from the second population ([latex]\text{Y}[/latex]) (after exclusion of ties) is not equal to [latex]0.5[/latex]. The alternative may also be stated in terms of a one-sided test, for example: [latex]\text{P}(\text{X} > \text{Y}) + 0.5 \cdot \text{P}(\text{X} = \text{Y}) > 0.5[/latex].

### Calculations

The test involves the calculation of a statistic, usually called [latex]\text{U}[/latex], whose distribution under the null hypothesis is known. In the case of small samples, the distribution is tabulated, but for sample sizes above about 20, approximation using the normal distribution is fairly good.

There are two ways of calculating [latex]\text{U}[/latex] by hand. For either method, we must first arrange all the observations into a single ranked series. That is, rank all the observations without regard to which sample they are in.

### Method One

For small samples a direct method is recommended. It is very quick, and gives an insight into the meaning of the [latex]\text{U}[/latex] statistic.

- Choose the sample for which the ranks seem to be smaller (the only reason to do this is to make computation easier). Call this “sample 1,” and call the other sample “sample 2.”
- For each observation in sample 1, count the number of observations in sample 2 that have a smaller rank (count a half for any that are equal to it). The sum of these counts is [latex]\text{U}[/latex].
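The direct-count method can be sketched as follows (the function name is illustrative):

```python
def u_direct(sample1, sample2):
    """Method one: for each observation in sample 1, count the
    observations in sample 2 with a smaller value (a half for each
    tie), then sum the counts."""
    u = 0.0
    for x in sample1:
        for y in sample2:
            if y < x:
                u += 1.0
            elif y == x:
                u += 0.5
    return u

print(u_direct([1, 2, 3], [4, 5, 6]))  # 0.0 -- no sample-2 value is ever beaten
print(u_direct([4, 5, 6], [1, 2, 3]))  # 9.0 -- every pairwise comparison is won
```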

### Method Two

For larger samples, a formula can be used.

First, add up the ranks for the observations that came from sample 1. The sum of ranks in sample 2 is now determinate, since the sum of all the ranks equals:

[latex]\dfrac{\text{N}(\text{N} + 1)}{2}[/latex]

where [latex]\text{N}[/latex] is the total number of observations. [latex]\text{U}[/latex] is then given by:

[latex]\text{U}_1=\text{R}_1 - \dfrac{\text{n}_1(\text{n}_1+1)}{2}[/latex]

where [latex]\text{n}_1[/latex] is the sample size for sample 1, and [latex]\text{R}_1[/latex] is the sum of the ranks in sample 1. Note that it doesn’t matter which of the two samples is considered sample 1: [latex]\text{U}_2[/latex] is computed in the same way from sample 2, and the two statistics satisfy [latex]\text{U}_1+\text{U}_2=\text{n}_1\text{n}_2[/latex]. The smaller of [latex]\text{U}_1[/latex] and [latex]\text{U}_2[/latex] is the one used when consulting significance tables.
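The rank-sum method can be sketched as follows; as a sanity check, the two statistics computed with either sample as "sample 1" sum to the product of the sample sizes:

```python
def u_from_ranks(sample1, sample2):
    """Method two: pool and rank all observations (tied values get the
    average rank), then U1 = R1 - n1(n1 + 1)/2."""
    pooled = sorted(sample1 + sample2)

    def frac_rank(x):
        first = pooled.index(x) + 1               # rank of first tied copy
        return (2 * first + pooled.count(x) - 1) / 2  # average over the ties

    r1 = sum(frac_rank(x) for x in sample1)
    n1 = len(sample1)
    return r1 - n1 * (n1 + 1) / 2

a, b = [4, 5, 6], [1, 2, 3]
u1, u2 = u_from_ranks(a, b), u_from_ranks(b, a)
print(u1, u2, u1 + u2 == len(a) * len(b))  # 9.0 0.0 True
```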

### Example of Statement Results

In reporting the results of a Mann–Whitney test, it is important to state:

- a measure of the central tendencies of the two groups (means or medians; since the Mann–Whitney is an ordinal test, medians are usually recommended)
- the value of [latex]\text{U}[/latex]
- the sample sizes
- the significance level

In practice some of this information may already have been supplied and common sense should be used in deciding whether to repeat it. A typical report might run:

“Median latencies in groups [latex]\text{E}[/latex] and [latex]\text{C}[/latex] were [latex]153[/latex] and [latex]247[/latex] ms; the distributions in the two groups differed significantly (Mann–Whitney [latex]\text{U}=10.5[/latex], [latex]\text{n}_1=\text{n}_2=8[/latex], [latex]\text{P} < 0.05\text{, two-tailed}[/latex]).”

### Comparison to Student’s [latex]\text{t}[/latex]-Test

The [latex]\text{U}[/latex]-test is more widely applicable than the independent-samples Student’s [latex]\text{t}[/latex]-test, and the question arises of which should be preferred.

### Ordinal Data

[latex]\text{U}[/latex] remains the logical choice when the data are ordinal but not interval scaled, so that the spacing between adjacent values cannot be assumed to be constant.

### Robustness

As it compares the sums of ranks, the Mann–Whitney test is less likely than the [latex]\text{t}[/latex]-test to spuriously indicate significance because of the presence of outliers (i.e., Mann–Whitney is more robust).

### Efficiency

For distributions sufficiently far from normal and for sufficiently large sample sizes, the Mann-Whitney test is considerably more efficient than the [latex]\text{t}[/latex]-test. Overall, this robustness makes Mann-Whitney more widely applicable than the [latex]\text{t}[/latex]-test. For large samples from the normal distribution, the efficiency loss compared to the [latex]\text{t}[/latex]-test is only 5%, so one can recommend Mann-Whitney as the default test for comparing interval or ordinal measurements with similar distributions.

## Wilcoxon t-Test

The Wilcoxon [latex]\text{t}[/latex]-test assesses whether population mean ranks differ for two related samples, matched samples, or repeated measurements on a single sample.

### Learning Objectives

Break down the procedure for the Wilcoxon signed-rank t-test.

### Key Takeaways

#### Key Points

- The Wilcoxon [latex]\text{t}[/latex]-test can be used as an alternative to the paired Student’s [latex]\text{t}[/latex]-test, [latex]\text{t}[/latex]-test for matched pairs, or the [latex]\text{t}[/latex]-test for dependent samples when the population cannot be assumed to be normally distributed.
- The test is named for Frank Wilcoxon who (in a single paper) proposed both the rank [latex]\text{t}[/latex]-test and the rank-sum test for two independent samples.
- The test assumes that the data are paired and come from the same population, each pair is chosen randomly and independently, and the data are measured at least on an ordinal scale, but need not be normal.

#### Key Terms

**Wilcoxon t-test**: A non-parametric statistical hypothesis test used when comparing two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e., it is a paired-difference test).

**tie**: One or more equal values or sets of equal values in the data set.

The Wilcoxon signed-rank t-test is a non-parametric statistical hypothesis test used when comparing two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e., it is a paired difference test). It can be used as an alternative to the paired Student’s [latex]\text{t}[/latex]-test, [latex]\text{t}[/latex]-test for matched pairs, or the [latex]\text{t}[/latex]-test for dependent samples when the population cannot be assumed to be normally distributed.

The test is named for Frank Wilcoxon who (in a single paper) proposed both the rank [latex]\text{t}[/latex]-test and the rank-sum test for two independent samples. The test was popularized by Siegel in his influential textbook on non-parametric statistics. Siegel used the symbol [latex]\text{T}[/latex] for the value defined below as [latex]\text{W}[/latex]. In consequence, the test is sometimes referred to as the Wilcoxon [latex]\text{T}[/latex]-test, and the test statistic is reported as a value of [latex]\text{T}[/latex]. Other names may include the “[latex]\text{t}[/latex]-test for matched pairs” or the “[latex]\text{t}[/latex]-test for dependent samples.”

### Assumptions

- Data are paired and come from the same population.
- Each pair is chosen randomly and independently.
- The data are measured at least on an ordinal scale, but need not be normal.

### Test Procedure

Let [latex]\text{N}[/latex] be the sample size, the number of pairs. Thus, there are a total of [latex]2\text{N}[/latex] data points. For [latex]\text{i}=1,\cdots,\text{N}[/latex], let [latex]\text{x}_{1,\text{i}}[/latex] and [latex]\text{x}_{2,\text{i}}[/latex] denote the measurements.

[latex]\text{H}_0[/latex]: The median difference between the pairs is zero.

[latex]\text{H}_1[/latex]: The median difference is not zero.

1. For [latex]\text{i}=1,\cdots,\text{N}[/latex], calculate [latex]\left| \text{x}_{2,\text{i}}-\text{x}_{1,\text{i}} \right|[/latex] and [latex]\text{sgn}\left( \text{x}_{2,\text{i}}-\text{x}_{1,\text{i}} \right)[/latex], where [latex]\text{sgn}[/latex] is the sign function.

2. Exclude pairs with [latex]\left| \text{x}_{2,\text{i}}-\text{x}_{1,\text{i}} \right|=0[/latex]. Let [latex]\text{N}_\text{r}[/latex] be the reduced sample size.

3. Order the remaining pairs from smallest absolute difference to largest absolute difference, [latex]\left| \text{x}_{2,\text{i}}-\text{x}_{1,\text{i}} \right|[/latex].

4. Rank the pairs, starting with the smallest as 1. Ties receive a rank equal to the average of the ranks they span. Let [latex]\text{R}_\text{i}[/latex] denote the rank.

5. Calculate the test statistic [latex]\text{W}[/latex], the absolute value of the sum of the signed ranks:

[latex]\text{W}= \left| \sum \left(\text{sgn}(\text{x}_{2,\text{i}}-\text{x}_{1,\text{i}}) \cdot \text{R}_\text{i} \right) \right|[/latex]

6. As [latex]\text{N}_\text{r}[/latex] increases, the sampling distribution of [latex]\text{W}[/latex] converges to a normal distribution. Thus, for [latex]\text{N}_\text{r} \geq 10[/latex], a [latex]\text{z}[/latex]-score can be calculated as follows:

[latex]\text{z}=\dfrac{\text{W}-0.5}{\sigma_\text{W}}[/latex]

where

[latex]\displaystyle{\sigma_\text{W} = \sqrt{\frac{\text{N}_\text{r}(\text{N}_\text{r}+1)(2\text{N}_\text{r}+1)}{6}}}[/latex]

If [latex]\text{z} > \text{z}_{\text{critical}}[/latex] then reject [latex]\text{H}_0[/latex].

For [latex]\text{N}_\text{r} < 10[/latex], [latex]\text{W}[/latex] is compared to a critical value from a reference table. If [latex]\text{W}\ge \text{W}_{\text{critical},\text{N}_\text{r}}[/latex] then reject [latex]\text{H}_0[/latex].

Alternatively, a [latex]\text{p}[/latex]-value can be calculated from enumeration of all possible combinations of [latex]\text{W}[/latex] given [latex]\text{N}_\text{r}[/latex].
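Steps 1 through 5 of the procedure can be sketched as follows (the function name is illustrative):

```python
def wilcoxon_w(x1, x2):
    """Steps 1-5 above: drop zero differences, rank the absolute
    differences (ties share the average of the ranks they span),
    reattach the signs, and return |sum of signed ranks|."""
    diffs = [b - a for a, b in zip(x1, x2) if b != a]   # step 2: exclude zeros
    abs_sorted = sorted(abs(d) for d in diffs)          # step 3: order by |diff|

    def frac_rank(v):
        first = abs_sorted.index(v) + 1                 # first tied position
        return (2 * first + abs_sorted.count(v) - 1) / 2

    # step 5: absolute value of the signed-rank sum
    return abs(sum((1 if d > 0 else -1) * frac_rank(abs(d)) for d in diffs))

# The (4, 4) pair is excluded; the tied |differences| of 1 share rank 1.5:
print(wilcoxon_w([1, 2, 3, 4], [2, 1, 5, 4]))  # 3.0
```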

## Kruskal-Wallis H-Test

The Kruskal–Wallis one-way analysis of variance by ranks is a non-parametric method for testing whether samples originate from the same distribution.

### Learning Objectives

Summarize the Kruskal-Wallis one-way analysis of variance and outline its methodology

### Key Takeaways

#### Key Points

- The Kruskal-Wallis test is used for comparing more than two samples that are independent, or not related.
- When the Kruskal-Wallis test leads to significant results, then at least one of the samples is different from the other samples.
- The test does not identify where the differences occur or how many differences actually occur.
- Since it is a non-parametric method, the Kruskal–Wallis test does not assume a normal distribution, unlike the analogous one-way analysis of variance.
- The test does assume an identically shaped and scaled distribution for each group, except for any difference in medians.
- Kruskal–Wallis is also used when the examined groups are of unequal size (different number of participants).

#### Key Terms

**chi-squared distribution**: A distribution with [latex]\text{k}[/latex] degrees of freedom is the distribution of a sum of the squares of [latex]\text{k}[/latex] independent standard normal random variables.

**Kruskal-Wallis test**: A non-parametric method for testing whether samples originate from the same distribution.

**Type I error**: An error occurring when the null hypothesis ([latex]\text{H}_0[/latex]) is true, but is rejected.

The Kruskal–Wallis one-way analysis of variance by ranks (named after William Kruskal and W. Allen Wallis) is a non-parametric method for testing whether samples originate from the same distribution. It is used for comparing more than two samples that are independent, or not related. The parametric equivalent of the Kruskal-Wallis test is the one-way analysis of variance (ANOVA). When the Kruskal-Wallis test leads to significant results, then at least one of the samples is different from the other samples. The test does not identify where the differences occur, nor how many differences actually occur. It is an extension of the Mann–Whitney [latex]\text{U}[/latex]-test to three or more groups. Mann–Whitney tests on individual sample pairs can then be used to determine which specific pairs differ significantly.

Since it is a non-parametric method, the Kruskal–Wallis test does not assume a normal distribution, unlike the analogous one-way analysis of variance. However, the test does assume an identically shaped and scaled distribution for each group, except for any difference in medians.

Kruskal–Wallis is also used when the examined groups are of unequal size (different number of participants).

### Method

1. Rank all data from all groups together; i.e., rank the data from [latex]1[/latex] to [latex]\text{N}[/latex] ignoring group membership. Assign any tied values the average of the ranks they would have received had they not been tied.

2. The test statistic is given by:

[latex]\displaystyle{\text{K}=(\text{N}-1) \frac{\displaystyle{\sum_{\text{i}=1}^\text{g}\text{n}_\text{i}(\bar{\text{r}}_{\text{i}\cdot} - \bar{\text{r}})^2}}{\displaystyle{\sum_{\text{i}=1}^\text{g} \sum_{\text{j}=1}^{\text{n}_\text{i}} (\text{r}_{\text{ij}}-\bar{\text{r}})^2}}}[/latex]

where

[latex]\displaystyle{\bar{\text{r}}_{\text{i}\cdot}= \frac{\sum_{\text{j}=1}^{\text{n}_\text{i}}\text{r}_{\text{ij}}}{\text{n}_\text{i}}}[/latex]

and where [latex]\bar{\text{r}} = \frac{1}{2}(\text{N}+1)[/latex] is the average of all values of [latex]\text{r}_{\text{ij}}[/latex], [latex]\text{n}_\text{i}[/latex] is the number of observations in group [latex]\text{i}[/latex], [latex]\text{r}_{\text{ij}}[/latex] is the rank (among all observations) of observation [latex]\text{j}[/latex] from group [latex]\text{i}[/latex], and [latex]\text{N}[/latex] is the total number of observations across all groups.

3. If the data contain no ties, the denominator of the expression for [latex]\text{K}[/latex] is exactly

[latex]\dfrac{(\text{N}-1)\text{N}(\text{N}+1)}{12}[/latex]

and

[latex]\bar{\text{r}}=\dfrac{\text{N}+1}{2}[/latex]

Therefore:

[latex]\begin{align} \text{K} &= \frac{12}{\text{N}(\text{N}+1)} \cdot \sum_{\text{i}=1}^\text{g} \text{n}_\text{i} \left( \bar{\text{r}}_{\text{i}\cdot} - \dfrac{\text{N}+1}{2}\right)^2 \\ &= \frac{12}{\text{N}(\text{N}+1)} \cdot \sum_{\text{i}=1}^\text{g} \text{n}_\text{i} \bar{\text{r}}_{\text{i}\cdot}^2 - 3(\text{N}+1) \end{align}[/latex]

Note that the second line contains only the squares of the average ranks.

4. A correction for ties, if using the shortcut formula described in the previous point, can be made by dividing [latex]\text{K}[/latex] by the following:

[latex]1-\frac{\displaystyle{\sum_{\text{i}=1}^\text{G} (\text{t}_\text{i}^3 - \text{t}_\text{i})}}{\displaystyle{\text{N}^3-\text{N}}}[/latex]

where [latex]\text{G}[/latex] is the number of groupings of different tied ranks, and [latex]\text{t}_\text{i}[/latex] is the number of tied values within tie group [latex]\text{i}[/latex] that are tied at a particular value. This correction usually makes little difference in the value of [latex]\text{K}[/latex] unless there are a large number of ties.

5. Finally, the p-value is approximated by:

[latex]\text{Pr}\left( \chi_{\text{g}-1}^2 \ge \text{K} \right)[/latex]

If some [latex]\text{n}_\text{i}[/latex] values are small (i.e., less than 5), the probability distribution of [latex]\text{K}[/latex] can be quite different from this chi-squared distribution. If a table of the chi-squared probability distribution is available, the critical value of chi-squared, [latex]\chi_{\alpha,\text{g}-1}^2[/latex], can be found by entering the table at [latex]\text{g}-1[/latex] degrees of freedom and looking under the desired significance or alpha level. The null hypothesis of equal population medians would then be rejected if [latex]\text{K}\ge \chi_{\alpha,\text{g}-1}^2[/latex]. Appropriate multiple comparisons would then be performed on the group medians.

6. If the statistic is not significant, then there is no evidence of differences between the samples. However, if the test is significant then a difference exists between at least two of the samples. Therefore, a researcher might use sample contrasts between individual sample pairs, or post hoc tests, to determine which of the sample pairs are significantly different. When performing multiple sample contrasts, the type I error rate tends to become inflated.

Source: Statistics