Statistics and clinical trials
On the night of December 13, 1799, George Washington woke his wife Martha to tell her that he was feeling ill. Following the medical practice of the day, he was bled repeatedly and given an assortment of medicines, some of which contained mercury. By the time Washington died, half of his blood had been removed. He may have lived longer had his doctors simply done nothing.
Washington’s case is not unique. Doctors of the past did not know whether their medicines worked. Some, like quinine, were real cures. Most others, like lead and mercury, did more harm than good. Until the second half of the 20th century, physicians did not have the tools to decide which medicines help patients. And it was not microscopes or sophisticated lab equipment they were lacking. Rather, they did not know how to reliably test and compare different treatments.
Suppose you want to test whether a drug helps people sleep better. You could give it to many patients, and ask them if their sleep has improved. But how can you tell whether the results are due to chance, or the placebo effect? And how many people do you need to ask to conclude that the drug works? One answer is to divide patients randomly into two groups. Give the drug to those in the test group, and not to those in the so-called control group. If those in the test group fare better, the drug is effective.
This may sound simple, but it is not. Some people in both groups will still sleep better by chance. Fortunately, if you have sufficiently many patients, such differences even out. How many patients do you need, and how confident can you be in your conclusions? The mathematical theory of probability and statistics provides precise answers to these questions.
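One way to see how statistics answers the "is it chance?" question is a permutation test: if the drug did nothing, the labels "test" and "control" would be interchangeable, so we can reshuffle patients between the groups many times and check how often chance alone produces a difference as large as the one we observed. The sketch below uses made-up sleep-improvement scores purely for illustration; the data, group sizes, and the `permutation_test` helper are all hypothetical, not from any real trial.

```python
import random
import statistics

def permutation_test(control, treatment, n_permutations=10000, seed=0):
    """Estimate how often random group assignment alone produces a
    difference in means at least as large as the observed one."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = control + treatment
    n_treat = len(treatment)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # pretend the drug did nothing: relabel at random
        diff = statistics.mean(pooled[:n_treat]) - statistics.mean(pooled[n_treat:])
        if diff >= observed:
            count += 1
    return count / n_permutations  # the estimated p-value

# Hypothetical sleep improvement (extra hours per night) for each patient
control = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 0.4, 0.1]
treatment = [0.8, 0.5, 1.1, 0.6, 0.9, 0.4, 1.0, 0.7]

p = permutation_test(control, treatment)
print(f"estimated p-value: {p:.4f}")
```

A small p-value means chance is an unlikely explanation for the gap between the groups; with only a handful of patients, even a real effect would often fail to stand out, which is why trials need enough participants.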
The ethical questions are more difficult. In such clinical trials we give the drug to some people and not to others. How can we withhold a potentially life-saving drug from some dying patients, and give it to others? But remember – before a treatment is proven to work, it is possible that doing nothing is the better option. The reason we test treatments is that we only suspect that they work. Doctors, like the rest of us, are not clairvoyant. They do not know which treatment is best until it is tested.
As an example, antiarrhythmic drugs were given to patients for decades to stop irregular heartbeats. Doctors argued that it was self-evident that these drugs saved lives. The drugs were eventually tested in clinical trials, but only because doctors believed they should be used more widely. However, a statistical analysis showed that these supposedly life-saving medicines were killing patients each year. The drugs may have been responsible for more than 50,000 deaths in the US alone. What seemed obvious and intuitive turned out to be very wrong.
We laugh at the medieval use of leeches, and other remedies that were meant to “balance the humors”. But even today we sometimes base our decisions on intuition rather than evidence. Many avoid vaccinations that have been proven to be safe and effective. On the other hand, doctors and patients frequently demand expensive medical tests when none are needed. Clinical trials and statistics can tell us which treatments work. It is up to us to make use of that knowledge.
- How much doctors rely on proven medicines, and how much they go by instinct and guesses is a matter of debate.
- I recommend Druin Burch’s book “Taking the Medicine” for a look into the history of clinical trials.
- Much has been written about the modern anti-vaccine movement. Here is an older, but still relevant, article in Wired.
- The ethics of randomized clinical trials is a complicated subject. I have written about a particular case here. You will find links to more detailed discussions here.
- Here is a good article about overtreatment.