Another comment on p-values
I know this issue has been brought up many times, but I just read this excellent post, and wanted to bring it up again.
If you read scientific articles, you have likely encountered p-values many times over. Many people think they understand what a p-value means, but in my experience many do not. In science we frequently test hypotheses, and we naturally want to know the probability that a hypothesis is true or false, given the data that was observed. The p-value is frequently interpreted as somehow giving us such a probability. But this is not what the p-value tells you – it only gives you the probability of observing data as extreme as yours, or more extreme, under the given hypothesis. If this probability is small, then either you observed a low-probability event, or your hypothesis is wrong.
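To make this concrete, here is a minimal sketch using a toy example (a fair-coin null hypothesis; the numbers 60 and 100 are made up for illustration) of what a p-value actually computes: the probability of data at least as extreme as what was observed, assuming the hypothesis is true.

```python
from math import comb

def binomial_p_value(heads: int, flips: int) -> float:
    """One-sided p-value: the probability of seeing `heads` or more
    heads in `flips` tosses of a fair coin (the null hypothesis)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# Suppose we observe 60 heads in 100 flips.
p = binomial_p_value(60, 100)
print(f"p-value = {p:.4f}")  # roughly 0.03
```

Note that a p-value of roughly 0.03 says nothing by itself about the probability that the coin is fair; it only tells us how surprising the data would be if the coin were fair.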
The main point of this post is to direct you to the following clear discussion of the issue. Although this point has been made so many times, I think it is worth re-emphasizing.
Perhaps I am going out on a limb here, but it seems we naturally tend toward the Bayesian approach. We can compute the probability of the data given a hypothesis, p(D | H). What we would like is the probability of the hypothesis given the data, p(H | D). We want to be able to say: “The data tell me that this hypothesis is very probably true,” or even “The data tell me that the probability that this hypothesis is true is 99%.” But unless you adopt a Bayesian approach, you cannot go directly from p(D | H) to p(H | D). In particular, p-values concern the first quantity, not the second.
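A worked example shows how far apart p(D | H) and p(H | D) can be. The three numbers below (base rate of true hypotheses, significance threshold, statistical power) are my own made-up but plausible assumptions, not anything from a real study; the point is only the mechanics of Bayes’ theorem.

```python
# All three numbers are hypothetical, chosen for illustration only.
prior_h1 = 0.10   # fraction of tested hypotheses that are actually true
alpha = 0.05      # false-positive rate: P(significant result | H0 true)
power = 0.80      # true-positive rate: P(significant result | H1 true)

# Bayes' theorem: P(H0 true | significant result)
p_sig = alpha * (1 - prior_h1) + power * prior_h1   # P(significant)
p_h0_given_sig = alpha * (1 - prior_h1) / p_sig
print(f"P(null is true | p < 0.05) = {p_h0_given_sig:.2f}")  # 0.36
```

Under these assumptions, more than a third of “significant” results come from true null hypotheses, even though every one of them has p(D | H0) below 0.05. That gap between 0.05 and 0.36 is exactly the difference between p(D | H) and p(H | D).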
Next time you see a p-value in an article, pay attention to how it is interpreted. I am sure you will find many examples where the interpretation is not quite correct.