AI in society: Perspectives from the field

November 29, 2023

Experts working in artificial intelligence, in roles spanning technology and public policy, discuss this turning point in AI and what it means for the future

With all the hype surrounding ChatGPT and other technologies built on large language models, it may feel like artificial intelligence was just invented. But six Michigan experts explain how AI has already been part of our lives for years, and share their hopes and concerns for the future.

AI can be much more than a chatbot. Maggie Makar, assistant professor of computer science and engineering, builds predictive models that encode cause-and-effect relationships rather than merely discovering associations.

Joyce Chai, professor of computer science and engineering, builds robotic systems that can understand and act on natural language, the kind of ordinary speech we use every day.

And Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, focuses on how to design AI to help human workers, with one current project providing feedback to counselors.

They offer perspectives on the promise of AI—how it might assist us with both physical and cognitive tasks, and detect wrongdoing on the part of corporations.

AI comes with risks, however. Shobita Parthasarathy, director of the Science, Technology and Public Policy Program, lays out how AI absorbs society’s biases and what it could mean if AI continues to perpetuate them—but with a veneer of objectivity. She touches on the need for regulation to ensure that biased AI doesn’t create barriers for people of color, LGBTQ+ individuals and other marginalized groups.

Our problem today isn’t the looming catastrophe of sentient computers and killer robots, Makar argues. We are already facing real violence due to the radicalization and civil unrest sown by the AI algorithms that run social media platforms.

And it’s only going to get harder to escape over time, says Nikola Banovic, assistant professor of computer science and engineering, who explores how to build trustworthy AI. He notes that AI is becoming as embedded in our lives as fossil fuels, and that avoiding its ills may prove as difficult as ending carbon emissions.

Finally, Michael Wellman, the Richard H. Orenstein Division Chair of Computer Science and Engineering and Lynn A. Conway Collegiate Professor of Computer Science and Engineering, explains that our laws are largely designed around human action and human intent. He recently testified before the Senate Committee on Banking, Housing and Urban Affairs about regulating algorithmic financial trading. Who is responsible when AI chooses an expedient but harmful—and illegal—path to meet its goals?

These are the challenges that lie ahead for the field, for regulators and for society at large as AI continues to grow in ability—and ubiquity.