
The method that can "prove" almost anything - James A. Smith


3m read · Nov 8, 2024

In 2011, a group of researchers conducted a scientific study to find an impossible result: that listening to certain songs can make you younger. Their study involved real people, truthfully reported data, and commonplace statistical analyses. So how did they do it?

The answer lies in a statistical method scientists often use to try to figure out whether their results mean something or if they’re random noise. In fact, the whole point of the music study was to point out ways this method can be misused. A famous thought experiment explains the method: there are eight cups of tea, four with the milk added first, and four with the tea added first.

A participant must determine which are which according to taste. There are 70 different ways the cups can be sorted into two groups of four, and only one is correct. So, can she taste the difference? That’s our research question. To analyze her choices, we define what’s called a null hypothesis: that she can’t distinguish the teas.

If she can’t distinguish the teas, she’ll still get the right answer 1 in 70 times by chance. One in 70 is roughly .014. That single number is called a p-value. In many fields, a p-value of .05 or below is considered statistically significant, meaning there’s enough evidence to reject the null hypothesis. So if she sorts all eight cups correctly, that p-value of .014 falls below the threshold, and researchers would rule out the null hypothesis that she can’t distinguish the teas.
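
To make the arithmetic concrete, here is a short Python sketch (an illustration, not part of the original experiment) that reproduces these numbers: there are 8-choose-4 = 70 ways to pick which four cups had the milk added first, so a pure guesser sorts them perfectly with probability 1/70, or about .014.

```python
from math import comb

# Number of ways to choose which 4 of the 8 cups had the milk added first.
arrangements = comb(8, 4)      # 70

# Under the null hypothesis (she is only guessing), exactly one of those
# arrangements is correct, so a perfect sort happens 1 time in 70 by chance.
p_value = 1 / arrangements     # ~0.0143

print(f"possible arrangements: {arrangements}")
print(f"p-value for a perfect sort: {p_value:.3f}")
print(f"below the .05 threshold? {p_value <= 0.05}")
```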

Though p-values are commonly used by both researchers and journals to evaluate scientific results, they’re really confusing, even for many scientists. That’s partly because all a p-value actually tells us is the probability of getting a certain result, assuming the null hypothesis is true. So if she correctly sorts the teas, the p-value is the probability of her doing so assuming she can’t tell the difference.

But the reverse isn’t true: the p-value doesn’t tell us the probability that she can taste the difference, which is what we’re trying to find out. So if a p-value doesn’t answer the research question, why does the scientific community use it? Well, because even though a p-value doesn’t directly state the probability that the results are due to random chance, it usually gives a pretty reliable indication.
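
One way to feel the difference between those two probabilities is to simulate the null hypothesis directly. The sketch below (hypothetical, assuming a taster who simply guesses at random) estimates the chance of a perfect sort given that she can’t tell the difference; that conditional probability is what the p-value reports, not the reverse.

```python
import random

def guessing_taster_sorts_perfectly() -> bool:
    """One taster who cannot taste the difference: she just picks
    4 of the 8 cups at random as the 'milk first' group."""
    truth = set(random.sample(range(8), 4))   # which cups really had milk first
    guess = set(random.sample(range(8), 4))   # her random guess
    return guess == truth

trials = 200_000
perfect_sorts = sum(guessing_taster_sorts_perfectly() for _ in range(trials))

# This estimates P(perfect sort | she can't tell the difference), about 1/70.
# That conditional probability is what the p-value reports; it is not the
# probability that she can taste the difference given a perfect sort.
print(f"simulated: {perfect_sorts / trials:.4f}   exact 1/70: {1/70:.4f}")
```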

At least, it does when used correctly. And that’s where many researchers, and even whole fields, have run into trouble. Most real studies are more complex than the tea experiment. Scientists can test their research question in multiple ways, and some of these tests might produce a statistically significant result, while others don’t.

It might seem like a good idea to test every possibility. But it’s not, because with each additional test, the chance of a false positive increases. Searching for a low p-value and then presenting only that analysis is often called p-hacking. It’s like throwing darts until you hit a bullseye and then claiming that the dart that hit the bullseye was the only one you threw.
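
The inflation is easy to quantify under a simplifying assumption of independent tests: run k of them on pure noise, each judged at the .05 level, and the chance of at least one false positive is 1 - (1 - .05)^k, which grows quickly. A minimal sketch:

```python
# Chance of at least one false positive when running k independent tests
# on pure noise, each judged at the .05 significance threshold.
# (Independence is a simplifying assumption; real analyses are often correlated.)
alpha = 0.05
for k in (1, 3, 5, 10, 20):
    at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests -> {at_least_one:.0%} chance of at least one false positive")
```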

This is exactly what the music researchers did. They played a different song to each of three groups of participants and collected lots of information about them. The analysis they published included only two of the three groups. And of all the information they collected, their analysis used only participants’ fathers’ ages, in order to “control for variation in baseline age across participants.”

They also paused their experiment after every ten participants and continued if the p-value was above .05, but stopped when it dipped below .05. They found that participants who heard one song were 1.5 years younger than those who heard the other song, with a p-value of .04.
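
That stopping rule by itself inflates false positives. The rough simulation below (an illustrative sketch, not the original study’s analysis, assuming SciPy’s t-test is available) draws both groups from the same distribution, so any “effect” is spurious, then peeks after every ten participants per group and stops as soon as p dips below .05; the resulting false-positive rate lands well above the nominal 5%.

```python
import random
from scipy.stats import ttest_ind

def peeking_finds_significance(max_n=100, batch=10, alpha=0.05) -> bool:
    """Both groups are drawn from the same distribution, so any significant
    difference is a false positive. Check the p-value after every `batch`
    participants per group and stop as soon as it dips below alpha."""
    a, b = [], []
    while len(a) < max_n:
        a += [random.gauss(0, 1) for _ in range(batch)]
        b += [random.gauss(0, 1) for _ in range(batch)]
        if ttest_ind(a, b).pvalue < alpha:
            return True    # declared "significant" -- a false positive
    return False

runs = 2_000
false_positives = sum(peeking_finds_significance() for _ in range(runs))
print(f"false-positive rate with this stopping rule: {false_positives / runs:.1%}")
# A single fixed-size test would be ~5%; repeated peeking pushes it well higher.
```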

Usually, it’s much tougher to spot p-hacking, because we don’t know the results are impossible: the whole point of doing experiments is to learn something new. Fortunately, there’s a simple way to make p-values more reliable: pre-registering a detailed plan for the experiment and analysis beforehand that others can check, so researchers can’t keep trying different analyses until they find a significant result.

And, in the true spirit of scientific inquiry, there’s even a new field that’s basically science doing science on itself: studying scientific practices in order to improve them.
