Prioritizing equity among the great challenges of AI, says U-M expert on national panel

May 29, 2024
Concept illustration of the societal and policy issues surrounding artificial intelligence. Image credit: Nicole Smith, made with Midjourney


As generative artificial intelligence becomes ubiquitous, University of Michigan expert Shobita Parthasarathy says technical revisions to the technology's mechanics may address some of the harms built into it so far.

Shobita Parthasarathy

Yet, she argues, “it will always be behind the curve of inequities that emerge unless they are accompanied by larger scale changes in the innovation system.”

Parthasarathy, professor and director of the Science, Technology, and Public Policy Program at U-M’s Ford School of Public Policy, was part of an interdisciplinary group of experts convened by the National Academy of Sciences to explore rising challenges posed by the use of AI in research and to chart a road map for the scientific community.

The group’s research papers were recently published in Issues in Science and Technology as a series called “Strategies to Govern AI Effectively.” With Jared Katzman, a doctoral student at the School of Information and STPP graduate certificate student, Parthasarathy wrote a paper on how to prevent and address inequities built into AI.

The full panel also published an editorial in the Proceedings of the National Academy of Sciences, “Protecting Scientific Integrity in an Age of Generative AI,” which articulates five principles for using AI in scientific research.

“In recent months, generative AI has become ubiquitous,” Parthasarathy said. “In the process, its limitations have begun to reveal themselves, from producing incorrect and even dangerous outputs to denying credit to creators and thinkers to using significant natural resources.

“Equity is a particularly serious problem. Based largely on Anglo-American data, the technology reproduces historical and cultural—and often racial—biases. It relies on the labor of poorly paid workers, often in low-income countries, to categorize and label data, including often violent social media posts. And its design and functions tend to reflect the priorities of those building them—a particularly homogenous group.

“In order to address these problems, Jared Katzman and I argue that we need to rethink the AI innovation system by centering the needs, perspectives and knowledge of marginalized communities. These communities need to be empowered to shape the design of these technologies, including the problems they try to solve, how data is identified and categorized, how participants in this work are valued and compensated and how emerging technologies are evaluated.

“In addition, others who have deep expertise in the relationships between technology and society, including social scientists, humanists and lawyers, need to participate more actively in the innovation system. AI developers, meanwhile, need to be trained to be empathetic and humble, and to understand the expertise that all of these additional experts bring.

“Finally, governments need to incorporate assessments of equity impacts into their regulatory frameworks.”