Journalists tend to temper, not exaggerate, scientific claims, U-M study shows

February 21, 2022
Written By: Jared Wadley

While splashy clickbait headlines touting the power of chocolate to cure everything from acne to cancer are certainly attention grabbers, these articles may not be commonplace in science communication.

A large-scale University of Michigan study of uncertainty in science communication indicates that journalists tend to temper, not exaggerate, scientific claims.

New research by U-M School of Information scholars Jiaxin Pei and David Jurgens dug into how scientific uncertainty is communicated in news articles and tested whether scientific claims are exaggerated. They also wanted to see how scientific claims in the news might differ depending on whether the original research appeared in well-respected, peer-reviewed journals or in less rigorous publications.

“I feel like when we talk about the potential of journalists exaggerating claims, it’s always these extreme cases,” said Jurgens, assistant professor of information. “We wanted to see if there was a difference when we lined up what the scientist said and what the journalist said for the same paper.”

Overall, Pei and Jurgens uncovered positive news about science communication.

“Our findings suggest that journalists are actually pretty careful when reporting science,” said Pei, adding that if anything, some communicators—not journalists—reduce the certainty of scientific claims.

“Journalists have a hard job,” said Jurgens, who acknowledges the skill it takes to translate scientific results to a general audience. “It’s nice to see that journalists really are trying to contextualize and temper scientific conclusions within the broader space.”

For their study, the researchers looked at certainty, which can be expressed in subtle ways.

“There’s a lot of words that will signal how confident you are,” Jurgens said. “It’s a spectrum.”

For instance, adding words like “suggest,” “approximately” or “might” tends to increase uncertainty, while using a precise number in a measurement indicates greater certainty.

Pei and Jurgens pulled news data from Altmetrics, a company that tracks mentions of scientific papers in news stories. They collected nearly 129,000 news stories mentioning specific scientific articles for their analysis.

In each of the news stories and scientific papers, they parsed any sentences that contained discovery words, such as “find” or “conclude,” to see how journalists and scientists stated the paper’s claims. A group of human annotators went through the scientific papers and news articles, noting certainty levels in more than 1,500 scientific discoveries.
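A minimal sketch of what that kind of keyword-based extraction could look like, assuming simple sentence splitting; the word lists, function names and example abstract below are illustrative stand-ins, not the study’s actual lexicon or pipeline.

```python
import re

# Illustrative word lists -- assumptions, not the study's actual lexicon.
DISCOVERY_WORDS = {"find", "finds", "found", "conclude", "concludes", "show", "shows"}
HEDGE_WORDS = {"suggest", "suggests", "might", "may", "could", "approximately", "appears"}

def discovery_sentences(text: str) -> list[str]:
    """Return sentences containing a discovery word such as 'find' or 'conclude'."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if DISCOVERY_WORDS & {w.lower().strip(".,") for w in s.split()}]

def hedge_count(sentence: str) -> int:
    """Count hedging cues, a rough proxy for how tempered a claim sounds."""
    return sum(1 for w in sentence.split() if w.lower().strip(".,") in HEDGE_WORDS)

abstract = "We find that treatment X might reduce symptoms by approximately 12%."
for s in discovery_sentences(abstract):
    print(s, "| hedging cues:", hedge_count(s))
```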

“We took claims in the abstract and tried to match them with claims found in the news,” Jurgens said. “So we said, ‘OK, here’s two different people—scientists and journalists—trying to describe the same thing, but to two different audiences. What do we see in terms of certainty?'”

The researchers then built a computer model to see if it could replicate the certainty levels that the human annotators had identified. The model’s output correlated strongly with human assessments of how certain a claim was.
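As a rough illustration of that kind of validation, one could compare model scores against human ratings with a rank correlation; the scores below are made-up placeholders, not data from the study.

```python
from scipy.stats import spearmanr

# Hypothetical certainty ratings: human annotations vs. model predictions.
human_scores = [5.0, 3.5, 4.0, 2.0, 5.5, 3.0]
model_scores = [4.8, 3.2, 4.3, 2.4, 5.1, 3.4]

# Spearman rank correlation between the two sets of ratings.
rho, p_value = spearmanr(human_scores, model_scores)
print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")
```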

“The model’s performance is good enough for large-scale analysis, but not perfect,” said Pei, a UMSI doctoral student and first author of the paper, who explained that there is a gap between human judgment and machine predictions, mostly because of subjectivity.

“When identifying uncertainty in text, people’s perceptions can be diverse, which makes it very hard to compare model predictions and human judgments. Humans can sometimes disagree a lot.”

Pei says research translation can get murkier when it comes to the quality of the journal, or what researchers call the journal impact factor. Some science news writers report similar levels of certainty no matter where the original study was published.

“This can be problematic given the journal impact factor is an important indicator of research quality,” he said. “If journalists are reporting research that appeared in Nature or Science and some unknown journals with the same degrees of certainty, it might not be clear to the audience which finding is more trustworthy.”

In all, the researchers view this work as an important step in better understanding uncertainty in scientific news. They created a software package for scientists and journalists to calculate the uncertainty in research and reporting.
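The snippet below is only a hypothetical sketch of what such a certainty check might look like for a reader or reporter, written against the Hugging Face transformers text-classification pipeline; the model name is a placeholder, and this is not the authors’ released package or its API.

```python
from transformers import pipeline

# Placeholder model name -- an assumption; swap in an actual certainty classifier.
scorer = pipeline("text-classification", model="your-org/certainty-classifier")

claims = [
    "Chocolate cures cancer.",
    "Our results suggest that cocoa flavanols might modestly lower blood pressure.",
]
for claim in claims:
    result = scorer(claim)[0]  # {'label': ..., 'score': ...}
    print(f"{result['label']} ({result['score']:.2f}): {claim}")
```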

While journalists can benefit from a certainty check on their work, Jurgens says the tool might be helpful to readers as well.

“It’s easy to get frustrated with uncertainty,” he said. “I think providing a tool like this could have a calming effect to some degree. This work isn’t the magic bullet, but I think this tool could play into a holistic understanding for readers.”

The work was published in the Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

Written by Sarah Derouin, School of Information

 
