More misinformation expected as Meta seeks to end fact-checking program
EXPERTS ADVISORY
Meta plans to end the fact-checking program that limits false or misleading information on its social media platforms, including Facebook, Instagram and Threads. Experts at the University of Michigan School of Information are available to discuss the impact of Tuesday’s announcement.
Oliver Haimson, assistant professor of information, can discuss social media and human-computer interactions.
“Meta’s move to stop employing fact-checkers seems like a dangerous decision in today’s politically charged climate, where we need to be able to distinguish between facts and misinformation or disinformation more than ever,” he said.
“Meta will still be using a crowdsourced method of fact-checking, which will hopefully help to filter out some dangerous misinformation, but this is an insufficient substitute for dedicated professionals. I find Zuckerberg’s statement that ‘fact checkers have been too politically biased’ troubling, as it suggests that truth itself is subject to political interpretation. This dangerous precedent risks legitimizing misinformation—a trend we’ve already seen with vaccine denial, election denial and other harmful conspiracy theories.”
Contact: [email protected]
Cliff Lampe, professor of information, researches the social and technical structures of large-scale technology-mediated communication, working with sites like Facebook, Wikipedia, Slashdot and Everything2.
“Meta has not had much success with their current fact-checking tools, so a change in approach is reasonable,” he said. “It will be interesting to see if their community approach is paired with any response to widespread misinformation on the site. It is understandable that they don’t want to be the arbiter of what is true or not, but at the same time they have a responsibility to maintain an environment that does not harm people.”
Contact: [email protected]
Paul Resnick, professor of information, can talk about social media and computational social science. He is the director of the Center for Social Media Responsibility, which is home to the Community Notes Monitor website.
“There’s a lot that Meta can learn from X’s Community Notes. In addition to distributing decision-making power to a larger group of people, it works a lot faster than Meta’s third-party fact-checking system did,” he said. “It also employs a clever ‘bridging ranking’ algorithm that approves proposed notes only when they are upvoted by people who don’t usually agree with each other. That creates an incentive for people to write less partisan notes.
“But there are a lot of risks of manipulation and overenforcement. X has had the benefit of developing their Community Notes system slowly and making small adjustments, building the community’s confidence in it over time. It will be interesting to see whether Meta can avoid the many pitfalls that could arise during implementation.”
Contact: [email protected]