Hi, I’m Adriene Hill, and welcome back to Crash Course Statistics. In this series we’ve been talking a lot about how often you see data and statistics on the news and on social media. There are all sorts of studies and data sets promising the keys to a better life.
Coffee is good for you. No wait, it’s killing you. So what do you trust?
INTRO
Journalism has many goals: to inform, to expose, to help people make better decisions about their communities and their lives. But journalism only matters if people read it, or watch it, or listen to it. Journalists have to capture the attention of their audience and help them connect with the story.
Case studies and observational studies can be great source material for articles or podcasts. But it’s always important to look at both the quality of the science and the quality of the journalism.
If there’s a study without a control group, or with a non-randomized design, it’s going to be less reliable. And if a journalist doesn’t ask those questions, or just doesn’t include the answers in her story, how are you going to know? Back in 2015, newspapers around the world ran stories heralding chocolate as a way to lose weight.
It sounded great. But it turns out it wasn’t good journalism or good science. A science journalist with a PhD named John Bohannon created this story by doing a real, randomized study, but one that was intentionally riddled with flaws.
It was meant to show that academic journals would publish this very flawed study, and that news outlets would run with it too. The goal, writes Bohannon, was to demonstrate how easy it is to turn bad science into big headlines.
And it worked. Once the study was published and the press release went out, journalists jumped on it. Bohannon says that many outlets ran the story without ever contacting him.
Very few reporters asked about the number of subjects tested (only 16), and none, he says, reported that number. Bohannon also says the stories that ran didn’t quote any outside researchers for corroboration. It’s disappointing that you can’t add some Cadbury to your diet and lose weight, and maybe your aunt keeps quoting this study to you as she downs that fifth and sixth bon-bon. But bad science and bad science journalism aren’t always that harmless.
Most clinical studies base their conclusions on statistical tests that give researchers--and the rest of us--a quantifiable way to measure the evidence that the study provides. For example, when a reputable doctor claims that ibuprofen increases the risk of fertility issues in men, it’s because there was a study with a group that took ibuprofen and a control group that didn’t, and the subjects taking ibuprofen showed a tangible increase in some measure of infertility. But which measure?
And was the control group given a placebo? An article that you see on Yahoo Health probably won’t tell you; in fact, this one doesn’t. For that information, you have to go to the original academic article, and those can be kinda dense.
It turns out that this study did have a placebo control group, and it measured infertility in a clinically respectable way, by measuring levels of fertility-related hormones. These facts are important when considering how trustworthy the conclusion of a study is, and a lot of news articles don’t include them. As a side note, if this study had been done in rats instead of humans, the conclusion that ibuprofen increases the risk of fertility issues in men would not be as strongly supported!
Let’s go to the Thought Bubble. Imagine that you’re going about your morning as usual, sipping your coffee and scrolling through the latest news, when you see an article with the title “Miracle food causes weight loss!” You want to fit better into your jeans, so you click through.
You see that the miracle food is called Targ, AND the results were “statistically significant,” so it seems legit. You jump in your car, drive down to the local grocery store, and see that Targ is on sale! So you pick up the largest pack and start eating; you’re already feeling stronger. But then you begin to experience side effects, like heartburn, stomach ulcers, and a desire to fight.
The article you read didn’t mention that, across the more than 20,000 subjects, the weight loss was only about a tenth of a pound more for Targ eaters. That’s not very much; maybe not worth those side effects. When a study reports something as significant, you probably assume that this means it’s really gonna matter, but this isn’t always the case, since “significant” means something different in statistics than it does in everyday English.
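To see how that can happen, here’s a minimal sketch in Python (the numbers are made up to mirror the fictional Targ example, not taken from any real study, and it assumes NumPy and SciPy are available): with enough subjects, even a tenth-of-a-pound difference can come out “statistically significant.”

```python
# Minimal sketch: a tiny effect can still be "statistically significant"
# when the sample is huge. All numbers are simulated to echo the
# hypothetical Targ example above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 10_000                                           # subjects per group (~20,000 total)
control = rng.normal(loc=0.0, scale=2.0, size=n)     # weight change (lbs) without Targ
targ    = rng.normal(loc=-0.1, scale=2.0, size=n)    # Targ eaters lose ~0.1 lb more on average

t_stat, p_value = stats.ttest_ind(targ, control)
extra_loss = control.mean() - targ.mean()

print(f"p-value: {p_value:.4f}")                     # likely well below 0.05 -> "significant"
print(f"extra weight lost on Targ: {extra_loss:.2f} lbs")  # ...but only about a tenth of a pound
```

The test flags the difference as significant because the sample is enormous, not because the effect is big enough to matter for your jeans.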
And science journalists can exploit this confusion by not mentioning how big the observed effect was. Thanks, Thought Bubble! We don’t have time to read all the academic articles on even one topic that affects us.
Take what gets called “text neck”--a condition that includes sore neck muscles from looking down at your phone and laptop all the time. A Google Scholar search for academic articles about “text neck” returns over 180 results, and that’s only since 2013. There’s no way you could read all of those without exacerbating your already-sore neck.
So we need people like science journalists who can distill all those articles into digestible--and engaging--pieces for us to consume. It’s helpful to be skeptical, but we should keep reading about science. When reading a science story, it’s important to note a few things: who wrote it, who published it, who did the science, and who funded the science.
If an article telling you that drinking Diet Coke is good for your teeth is on the Coke website, your suspicion should be raised more than if it were published by Scientific American. You should also consider who funded and completed the research the article is based on. If you read an article claiming that a rare fruit juice will reduce your blood pressure and stave off cancer, and you see that the study was funded by the juice company?
Be suspicious. Not every study funded by a company is inherently flawed. Science costs money; it can be expensive. And while there are sources of funding from governments and other neutral organizations, the reality is that often the people who are willing to pay to have the research done are the companies with a vested interest in the results.
Sometimes, to get the research done, researchers need to partner with these organizations. Privately funded research can be done well. Another thing to watch for in science and health journalism is whether the claims made in the headline actually match the claims made in the story.
You don’t see many stories with headlines like “Ketchup may have a mild relationship with weight gain in men over 40,” because who’s going to read that? It might be accurate, but it’s just not as flashy as “Is ketchup making you fat!?”
There are a number of reasons we get these splashy headlines. Media outlets from BuzzFeed to Goop to the old-guard newspapers are all fighting for audience these days. Maybe not exactly the same audience, but audience. And that competition makes the super-sexy headline really, really appealing.
Sensational gets clicks. Content creators are under pressure to find and write what’s gonna get shared. The language of correlation is uncertain and, as such, less catchy.
You’ll also spot plenty of causation problems in science and health reporting. When you see an article that claims that “doing yoga cures cancer,” you should check whether it was an experimental study, or whether the claim is based on a correlation, from a survey, between doing yoga and not having cancer. Only experimental studies with randomized designs and control groups have a shot at showing evidence of causation. Personally, I can think of a lot of confounding factors for a yoga-and-cancer study.
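Here’s a minimal sketch, with purely made-up numbers, of how one such confounder could produce exactly that kind of survey correlation even when yoga has no effect at all (the “health-consciousness” variable and all the rates are hypothetical):

```python
# Minimal sketch: a confounder can create a yoga-health correlation
# even though, in this simulation, yoga itself does nothing.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical confounder: general health-consciousness.
health_conscious = rng.random(n) < 0.5

# Health-conscious people are more likely to do yoga AND to have other
# habits (diet, checkups, not smoking) that lower their cancer risk.
does_yoga  = rng.random(n) < np.where(health_conscious, 0.40, 0.10)
has_cancer = rng.random(n) < np.where(health_conscious, 0.01, 0.03)
# Note: cancer risk here depends only on the confounder, never on yoga.

rate_yoga    = has_cancer[does_yoga].mean()
rate_no_yoga = has_cancer[~does_yoga].mean()
print(f"cancer rate, yoga:    {rate_yoga:.2%}")   # lower
print(f"cancer rate, no yoga: {rate_no_yoga:.2%}")  # higher
```

In this toy simulation a survey would find a real correlation between yoga and lower cancer rates, but only because health-conscious people both do more yoga and have lower risk; randomly assigning who does yoga would make the apparent effect vanish.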
Now, on to another way science can get mischaracterized: there are studies done on mice and rats that get reported as if they were studies on humans.
And while a lot of medical and health-related studies get their start in mouse models, many of the treatments that work in mice don’t end up being successful in their human counterparts. Similarly, you’ll see clickbait-y headlines that say “Hydrogen peroxide kills cancer!” and list all the ways you can now incorporate H2O2 into your daily life.
But what the title doesn’t tell you is that these were in vitro studies, which means they’re done on real cancer cells, but in a petri dish. In a very simplified sense, the cells were grown by themselves in a dish, the substance of interest was added, and it killed the cancer cells. But in a dish, lots of things we consume every day will kill cancer cells, like coffee or alcohol.
But even working in tandem, they aren’t going to cure cancer. Anyway, these misguided “hydrogen peroxide kills cancer” headlines get shared around online, and people come up with alternative therapies that involve consuming hydrogen peroxide, which can be really, really dangerous. Like, dead dangerous.
Science stories can make for great journalism. And they can give you something clever to say at your next dinner party. But any time you hear a cable talk show host say the phrase “scientists have found...” or “a new study suggests...,” you should look up that study to be sure.
At least before you start spreading it around. And if the results of a study will cause you to make any changes in your life or your family’s life, you should really go back and check the science. No matter how reputable the source, it’s always important to be aware of these issues, whether you see the story in BuzzFeed or The Economist.
Articles often gloss over all kinds of details, like the kind of control group that was used, or whether the study was done in mice or monkeys, all of which can make a huge difference in how seriously you should take the claims of the study. And the bigger the life change you’re thinking about making, the more in-depth your search for information should be.
Adding a square or two of dark chocolate to your diet is not going to be a big deal. Trying to cure cancer with high doses of Vitamin K just because some study found it kills cancer cells in a dish? That is. So this doesn’t mean all the science you read about on Reddit or watch on your favorite YouTube channel is wrong.
It just means that you need to use statistical thinking to check which claims are reasonable and which aren’t. To help us remember some of the rules of thumb we talked about today, our writer Chelsea came up with a limerick. And so, without further ado, Crash Course’s FIRST original limerick:
When a study reports correlations
Or has mice as its main population
The results it declares
May not be quite fair
So be careful about generalizations

Alright, let’s see you do better. Thanks for watching. I'll see you next time.