
6/20/2018 A Gentle Introduction to Statistical Hypothesis Tests

A Gentle Introduction to Statistical Hypothesis Tests


by Jason Brownlee on May 14, 2018 in Statistics

Data must be interpreted in order to add meaning.

We can interpret data by assuming a specific structure for our outcome and use statistical methods to confirm or reject the assumption. The assumption is called a hypothesis and the statistical tests used for this purpose are called statistical hypothesis tests.

Whenever we want to make claims about the distribution of data or whether one set of results is different from another set of results in applied machine learning, we must rely on statistical hypothesis tests.

In this tutorial, you will discover statistical hypothesis testing and how to interpret and carefully state the
results from statistical tests.

After completing this tutorial, you will know:

Statistical hypothesis tests are important for quantifying answers to questions about samples of data.
The interpretation of a statistical hypothesis test requires a correct understanding of p-values and critical
values.
Regardless of the significance level, the finding of hypothesis tests may still contain errors.

Let’s get started.

Update May/2018: Added note about “reject” vs “failure to reject”, improved language on this issue.

A Gentle Introduction to Statistical Hypothesis Tests


Photo by Kevin Verbeem, some rights reserved.

https://machinelearningmastery.com/statistical-hypothesis-tests/ 1/11

Tutorial Overview
This tutorial is divided into 3 parts; they are:

1. Statistical Hypothesis Testing


2. Statistical Test Interpretation
3. Errors in Statistical Tests

Need help with Statistics for Machine Learning?


Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

Download Your FREE Mini-Course

Statistical Hypothesis Testing


Data alone is not interesting. It is the interpretation of the data that we are really interested in.

In statistics, when we wish to start asking questions about the data and interpret the results, we use
statistical methods that provide a confidence or likelihood about the answers. In general, this class of
methods is called statistical hypothesis testing, or significance tests.

The term “hypothesis” may make you think about science, where we investigate a hypothesis. This is along
the right track.

In statistics, a hypothesis test calculates some quantity under a given assumption. The result of the test
allows us to interpret whether the assumption holds or whether the assumption has been violated.

Two concrete examples that we will use a lot in machine learning are:

A test that assumes that data has a normal distribution.


A test that assumes that two samples were drawn from the same underlying population distribution.

The assumption of a statistical test is called the null hypothesis, or hypothesis 0 (H0 for short). It is often
called the default assumption, or the assumption that nothing has changed.

A violation of the test’s assumption is often called the first hypothesis, hypothesis 1, or H1 for short. H1 is really shorthand for “some other hypothesis,” as all we know is that the evidence suggests that H0 can be rejected.

Hypothesis 0 (H0): The assumption of the test holds and fails to be rejected at some level of significance.
Hypothesis 1 (H1): The assumption of the test does not hold and is rejected at some level of significance.

Before we can reject or fail to reject the null hypothesis, we must interpret the result of the test.

Statistical Test Interpretation


The results of a statistical hypothesis test must be interpreted for us to start making claims.

This is a point that may cause a lot of confusion for beginners and experienced practitioners alike.

There are two common forms that a result from a statistical hypothesis test may take, and they must be
interpreted in different ways. They are the p-value and critical values.

Interpret the p-value


We describe a finding as statistically significant by interpreting the p-value.

For example, we may perform a normality test on a data sample and find that it is unlikely that the sample of data deviates from a Gaussian distribution, failing to reject the null hypothesis.

A statistical hypothesis test may return a value called p or the p-value. This is a quantity that we can use to
interpret or quantify the result of the test and either reject or fail to reject the null hypothesis. This is done by
comparing the p-value to a threshold value chosen beforehand called the significance level.

The significance level is often referred to by the Greek lower case letter alpha.

A common value used for alpha is 5%, or 0.05. A smaller alpha value, such as 1% or 0.1%, suggests a more robust interpretation of the null hypothesis.

The p-value is compared to the pre-chosen alpha value. A result is statistically significant when the p-value is
less than alpha. This signifies a change was detected: that the default hypothesis can be rejected.

If p-value > alpha: Fail to reject the null hypothesis (i.e. not a significant result).
If p-value <= alpha: Reject the null hypothesis (i.e. significant result).

For example, if we were performing a test of whether a data sample was normal and we calculated a p-value
of .07, we could state something like:

The test found that the data sample was normal, failing to reject the null hypothesis at a 5%
significance level.

The significance level can be inverted by subtracting it from 1 to give a confidence level of the hypothesis
given the observed sample data.

confidence level = 1 - significance level

Therefore, statements such as the following can also be made:

The test found that the data was normal, failing to reject the null hypothesis at a 95% confidence
level.
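The decision rule above can be sketched in a few lines of Python. The p-value of 0.07 is the hypothetical value from the example; in practice a test such as scipy.stats.shapiro would supply it.

```python
# A minimal sketch of the p-value decision rule, assuming a
# pre-chosen significance level (alpha).

def interpret_p_value(p_value, alpha=0.05):
    """Return a statement rejecting or failing to reject H0."""
    confidence = 1.0 - alpha
    if p_value <= alpha:
        return ("Reject the null hypothesis at the %.0f%% significance level "
                "(%.0f%% confidence level)." % (alpha * 100, confidence * 100))
    return ("Fail to reject the null hypothesis at the %.0f%% significance "
            "level (%.0f%% confidence level)." % (alpha * 100, confidence * 100))

# the hypothetical p-value of 0.07 from the example above
print(interpret_p_value(0.07))              # fail to reject at alpha=0.05
print(interpret_p_value(0.07, alpha=0.10))  # rejected at a weaker alpha=0.10
```

Note how the same p-value leads to different decisions at different significance levels, which is why the level must be chosen before the test is run.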

“Reject” vs “Failure to Reject”


The p-value is probabilistic.


This means that when we interpret the result of a statistical test, we do not know what is true or false, only
what is likely.

Rejecting the null hypothesis means that there is sufficient statistical evidence that the null hypothesis does
not look likely. Otherwise, it means that there is not sufficient statistical evidence to reject the null hypothesis.

We may think about the statistical test in terms of the dichotomy of rejecting and accepting the null
hypothesis. The danger is that if we say that we “accept” the null hypothesis, the language suggests that the
null hypothesis is true. Instead, it is safer to say that we “fail to reject” the null hypothesis, as in, there is
insufficient statistical evidence to reject it.

Reading “reject” vs “fail to reject” for the first time is confusing for beginners. You can think of it as “reject” vs “accept” in your mind, as long as you remind yourself that the result is probabilistic and that even an “accepted” null hypothesis still has a small probability of being wrong.

Common p-value Misinterpretations


This section highlights some common misinterpretations of the p-value in the results of statistical tests.

True or False Null Hypothesis


The interpretation of the p-value does not mean that the null hypothesis is true or false.

It does mean that we have chosen to reject or fail to reject the null hypothesis at a specific statistical
significance level based on empirical evidence and the chosen statistical test.

You are limited to making probabilistic claims, not crisp binary or true/false claims about the result.

p-value as Probability
A common misunderstanding is that the p-value is a probability of the null hypothesis being true or false
given the data.

In probability, this would be written as follows:

Pr(hypothesis | data)

This is incorrect.

Instead, the p-value can be thought of as the probability of the data given the pre-specified assumption
embedded in the statistical test.

Again, using probability notation, this would be written as:

Pr(data | hypothesis)

It allows us to reason about whether or not the data fits the hypothesis. Not the other way around.

The p-value is a measure of how likely the data sample would be observed if the null hypothesis were true.
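This distinction can be illustrated with a small simulation, assuming a simple hypothetical null hypothesis of a fair coin: the estimated p-value is the fraction of simulated datasets at least as extreme as the observed one, i.e. Pr(data | hypothesis), not a statement about the coin itself.

```python
# Estimate Pr(data | hypothesis) by simulation: under a null
# hypothesis of a fair coin, how often would data as extreme as
# 8-or-more heads in 10 flips be observed?

import random

random.seed(1)  # make the sketch reproducible

def simulate_p_value(observed_heads=8, flips=10, trials=100_000):
    """Estimate Pr(at least `observed_heads` heads | fair coin)."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        if heads >= observed_heads:
            extreme += 1
    return extreme / trials

p = simulate_p_value()
print("Estimated p-value:", p)  # close to the exact value 56/1024 ~= 0.0547
```

The simulation only ever samples data assuming the hypothesis is true; nothing in it computes the probability that the coin is fair given the data.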

Post-Hoc Tuning
It does not mean that you can re-sample your domain or tune your data sample and re-run the statistical test
until you achieve a desired result.


It also does not mean that you can choose your p-value after you run the test.

This is called p-hacking or hill climbing and will mean that the result you present will be fragile and not
representative. In science, this is at best unethical, and at worst fraud.

Interpret Critical Values


Some tests do not return a p-value.

Instead, they might return a list of critical values and their associated significance levels, as well as a
test statistic.

These are usually nonparametric or distribution-free statistical hypothesis tests.

The choice of returning a p-value or a list of critical values is really an implementation choice.

The results are interpreted in a similar way. Instead of comparing a single p-value to a pre-specified
significance level, the test statistic is compared to the critical value at a chosen significance level.

If test statistic < critical value: Fail to reject the null hypothesis.
If test statistic >= critical value: Reject the null hypothesis.

Again, the meaning of the result is similar, in that the chosen significance level is a probabilistic decision on rejecting or failing to reject the base assumption of the test given the data.

Results are presented in the same way as with a p-value, as either significance level or confidence level. For
example, if a normality test was calculated and the test statistic was compared to the critical value at the 5%
significance level, results could be stated as:

The test found that the data sample was normal, failing to reject the null hypothesis at a 5%
significance level.

Or:

The test found that the data was normal, failing to reject the null hypothesis at a 95% confidence
level.
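The critical-value decision rule can be sketched as follows. The test statistic and critical values below are made-up numbers for illustration (they loosely resemble those reported by a real test such as scipy.stats.anderson), not the output of any actual test run.

```python
# A sketch of interpreting a test that reports critical values at
# several significance levels instead of a single p-value.

def interpret_critical_values(statistic, critical_values):
    """Return the reject / fail-to-reject decision at each significance level."""
    decisions = {}
    for sig_level, critical in critical_values.items():
        if statistic < critical:
            decisions[sig_level] = "fail to reject"
        else:
            decisions[sig_level] = "reject"
    return decisions

# hypothetical test statistic and {significance level %: critical value}
decisions = interpret_critical_values(0.72, {1.0: 1.088, 5.0: 0.787, 15.0: 0.576})
for level, decision in sorted(decisions.items()):
    print("%g%% significance level: %s H0" % (level, decision))
```

With these hypothetical numbers, the statistic exceeds the critical value only at the loosest (15%) level, so the null hypothesis would be rejected there but not at 5% or 1%.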

Errors in Statistical Tests


The interpretation of a statistical hypothesis test is probabilistic.

That means that the evidence of the test may suggest an outcome and be mistaken.

For example, if alpha was 5%, it suggests that (at most) 1 time in 20 a true null hypothesis would be mistakenly rejected because of the statistical noise in the data sample.

Given a high p-value, it may mean that the null hypothesis is true (we got it right) or that the null hypothesis is false and some unlikely event occurred (we made a mistake). If this type of error is made, it is called a false negative. We falsely believe the null hypothesis or assumption of the statistical test.


Alternately, a low p-value either means that the null hypothesis is false (we got it right) or that it is true and some rare and unlikely event has been observed (we made a mistake). If this type of error is made, it is called a false positive. We falsely believe the rejection of the null hypothesis.

Each of these two types of error has a specific name.

Type I Error: The incorrect rejection of a true null hypothesis, or a false positive.
Type II Error: The incorrect failure to reject a false null hypothesis, or a false negative.
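A small simulation can make Type I errors concrete: if both samples are drawn from the same distribution, the null hypothesis is true, so every rejection is a false positive, and the rejection rate should land close to alpha. The simplified two-sample z-test below assumes a known standard deviation of 1, purely for illustration.

```python
# Estimate the Type I error rate of a simplified two-sample z-test
# when the null hypothesis (equal means) is actually true.

import math
import random

random.seed(7)  # reproducible sketch

def two_sample_z_p_value(a, b, sigma=1.0):
    """Two-sided p-value for equal means, assuming a known sigma."""
    n, m = len(a), len(b)
    z = (sum(a) / n - sum(b) / m) / (sigma * math.sqrt(1 / n + 1 / m))
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

def type_i_error_rate(alpha=0.05, trials=2000, n=30):
    """Both samples share one distribution, so every rejection is a Type I error."""
    false_positives = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if two_sample_z_p_value(a, b) <= alpha:
            false_positives += 1
    return false_positives / trials

rate = type_i_error_rate()
print("Observed Type I error rate:", rate)  # close to alpha = 0.05
```

Lowering alpha in this sketch lowers the false-positive rate, but says nothing about false negatives; those depend on the effect size, the sample size, and the power of the test.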

All statistical hypothesis tests have a chance of making either of these types of errors. False findings or false discoveries are more than possible; they are probable.

Ideally, we want to choose a significance level that minimizes the likelihood of one of these errors. E.g. a very
small significance level. Although significance levels such as 0.05 and 0.01 are common in many fields of
science, harder sciences, such as physics, are more aggressive.

It is common to use a significance level of 3 × 10^-7, or 0.0000003, often referred to as 5-sigma. This means that, if the null hypothesis were true, a finding this extreme would occur by chance only about 1 time in 3.5 million independent repeats of the experiment. Using a threshold like this may require a much larger data sample.
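The 5-sigma figure can be checked directly from the standard normal tail probability (a one-sided tail is assumed here, as in the particle-physics convention):

```python
# One-sided tail probability of a standard normal beyond 5 sigma.

import math

p_five_sigma = 0.5 * math.erfc(5 / math.sqrt(2))
print("5-sigma p-value: %.3g" % p_five_sigma)                   # about 2.87e-07
print("Roughly 1 in %.1f million" % (1 / p_five_sigma / 1e6))   # about 1 in 3.5 million
```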

Nevertheless, these types of errors are always present and must be kept in mind when presenting and
interpreting the results of statistical tests. It is also a reason why it is important to have findings independently
verified.

Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.

Find an example of a research paper that does not present results using p-values.
Find an example of a research paper that presents results with statistical significance, but makes one of
the common misinterpretations of p-values.
Find an example of a research paper that presents results with statistical significance and correctly
interprets and presents the p-value and findings.

If you explore any of these extensions, I’d love to know.

Further Reading
This section provides more resources on the topic if you are looking to go deeper.

Articles
Statistical hypothesis testing on Wikipedia
Statistical significance on Wikipedia
p-value on Wikipedia
Critical value on Wikipedia
Type I and type II errors on Wikipedia
Data dredging on Wikipedia
Misunderstandings of p-values on Wikipedia
What does the 5 sigma mean?


Summary
In this tutorial, you discovered statistical hypothesis testing and how to interpret and carefully state the
results from statistical tests.

Specifically, you learned:

Statistical hypothesis tests are important for quantifying answers to questions about samples of data.
The interpretation of a statistical hypothesis test requires a correct understanding of p-values and critical values.
Regardless of the significance level, the finding of hypothesis tests may still contain errors.

Do you have any questions?


Ask your questions in the comments below and I will do my best to answer.

Get a Handle on Statistics for Machine Learning!


Develop a working understanding of statistics
…by writing lines of code in python

Discover how in my new Ebook:


Statistical Methods for Machine Learning

It provides self-study tutorials on topics like:


Hypothesis Tests, Correlation, Nonparametric Stats, Resampling, and much more…

Discover how to Transform Data into Knowledge


Skip the Academics. Just Results.

Click to learn more.

About Jason Brownlee


Jason Brownlee, Ph.D. is a machine learning specialist who
teaches developers how to get results with modern machine
learning methods via hands-on tutorials.
View all posts by Jason Brownlee →


16 Responses to A Gentle Introduction to Statistical Hypothesis Tests

DC May 14, 2018 at 2:18 pm #

Can one accept a hypothesis? I thought it’s just reject or fail to reject.

Jason Brownlee May 14, 2018 at 2:25 pm #

Yes, the null hypothesis can be accepted, although it does not mean it is true.

The use of “fail to reject” instead of “accept” is used to help remind you that we don’t know what is true, we just have evidence of a probabilistic finding.

Jarad May 18, 2018 at 1:59 pm #

It really depends on who you ask. Some statisticians are extremely strong-minded on this, that you never use the word “accept” when concluding an experiment. I think the consensus from the statistics community is that you never “accept” it.

Jason Brownlee May 18, 2018 at 2:42 pm #

Agreed. We do not accept, but fail to reject.

For beginners, the dichotomy of “reject” vs “fail to reject” is super confusing.

As long as the beginner acknowledges that the result is probabilistic, that we do not know truth, then they can think “accept”/“reject” in their head and get on with things.

Note, I updated the post and added a section on this topic to make things clearer. I don’t want a bunch of angry statisticians banging down my door


Yusoof Raheem May 14, 2018 at 4:20 pm #

This is quite elucidatory, it is interesting, I would like to read more, many many thanks

Jason Brownlee May 15, 2018 at 7:51 am #

Thanks.

Emily P. May 15, 2018 at 2:21 am #

Hmm, i think that the following phrases should be inverted:

If p-value > alpha: Accept the null hypothesis.
If p-value

If p-value > alpha: Do not accept the null hypothesis (i.e. reject the null hypothesis).
If p-value < alpha: Accept the null hypothesis.

Jason Brownlee May 15, 2018 at 8:03 am #

I don’t think so. Why do you say that?

A small p-value (<=0.05) indicates that there is strong evidence to reject the null hypothesis.

Scot May 15, 2018 at 3:57 am #

Another great article Mr. Brownlee. I really like that you mention that the interpretation of the p-value does not mean the null hypothesis is true or false. I believe that often gets forgotten!

Under the section:

Interpret the p-value

I think the following sentence fragment has an extra word

For example, we may find perform a normality test on a data sample and find that it is unlikely

I think you want

For example, we may perform a normality test on a data sample and find that it is unlikely

Jason Brownlee May 15, 2018 at 8:05 am #

Thanks, fixed.

MM May 18, 2018 at 12:58 pm #

Can you run a hypothesis test (p-value) with Excel??

Jason Brownlee May 18, 2018 at 2:38 pm #

I’m sure you can. Sorry, I don’t know how.

Anthony The Koala May 18, 2018 at 1:25 pm #

Dear Dr Jason,
Have you encountered situations where, if you reject the null hypothesis, there is a chance of accepting it? Conversely, if you accept the null hypothesis, is there a chance that it is rejected?

In other words, what to do if you encounter false positives or false negatives.
See https://en.wikipedia.org/wiki/Type_I_and_type_II_errors.

Thank you,
Anthony of Belfield

Jason Brownlee May 18, 2018 at 2:40 pm #

The p-value is a probability, so there is always a chance of an error, specifically a finding that is a statistical fluke but not representative.

In the past, if I am skeptical of the results of an experiment, I will:

– Repeat the experiment many times to see if we had a fluke
– Increase the sample sizes to improve the robustness of the finding.
– Use the result anyway.

It takes a lot of discipline to design experiments sufficiently well to trust the results. To care about your reputation and stake your reputation on the results. Real scientists are rare. Real science is hard.

Fabian Ockenfels June 4, 2018 at 3:11 am #

“For example, if alpha was 5%, it suggests that (at most) 1 time in 20 that the null hypothesis would be mistakenly rejected or failed to be rejected because of the statistical noise in the data sample.”

This is partly incorrect. While an alpha of 5% does limit the probability of type I errors (false positives), it does not affect type II errors in the same way. The statement of “mistakenly failing to reject” (false negative) H0 in at most one of 20 times does not hold at the specified significance level alpha.

In the case of type II errors you are interested in controlling beta. For reference see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2996198/#!po=87.9310 (α, β, AND POWER)

Jason Brownlee June 4, 2018 at 6:32 am #

Thanks for sharing.
