ChatGPT: U-M experts can discuss AI chatbots, their reach, impact, concerns, potential

March 3, 2023
Concept illustration of artificial intelligence. Image credit: Nicole Smith, made with Midjourney

EXPERTS ADVISORY

As ChatGPT marks three months in operation, its reach and uses continue to widen, as do efforts and research to understand and anticipate the upsides and downsides of using artificial intelligence to do what humans would normally do.

University of Michigan experts are available to discuss ChatGPT’s role in scientific research, education, computer science, engineering and business, along with its ethical implications.

Timothy Cernak is an assistant professor of medicinal chemistry at the College of Pharmacy and an assistant professor of chemistry at the College of Literature, Science, and the Arts. His lab team has previously used other forms of artificial intelligence to streamline time-consuming, repetitive tasks. Recently, they plugged ChatGPT into their own software, called phactor, which uses robots to run chemistry experiments.

Their first attempt at using ChatGPT and phactor together produced a highly productive reaction, delivering a chemical product in 84% yield. Adding ChatGPT to chemistry labs like Cernak’s holds promise for streamlined, time-saving yet reliable scientific research.

“A vital benefit in my mind is speeding up the development of pharmaceuticals, for example, through AI platforms such as ChatGPT,” he said. “The scientific literature is so vast at this point that a human chemist cannot read every paper on a topic. ChatGPT can, though. It can perform time-consuming, routine processes so that researchers can focus on the processes that require deeper, nuanced human thinking.”
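For readers curious what wiring a chat model into lab software can look like, here is a minimal, hypothetical Python sketch. It is not phactor’s actual interface: the prompt, the JSON response format and the queue_robot_experiment stub are illustrative assumptions, and it assumes the pre-1.0 openai Python package.

# Hypothetical sketch of LLM-assisted experiment planning.
# NOT phactor's real interface; the prompt, the JSON schema and the
# robot stub below are illustrative assumptions only.
import json

import openai  # assumes the pre-1.0 `openai` package (e.g., openai==0.27.*)

openai.api_key = "YOUR_API_KEY"  # placeholder


def propose_conditions(substrate_a: str, substrate_b: str) -> dict:
    """Ask the chat model for one set of reaction conditions as JSON."""
    prompt = (
        f"Suggest one set of amide-coupling conditions for {substrate_a} "
        f"and {substrate_b}. Reply only with JSON using the keys "
        "'reagent', 'solvent' and 'temperature_c'."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # In practice the reply may not be valid JSON, so real code would
    # validate it before handing anything to hardware.
    return json.loads(resp["choices"][0]["message"]["content"])


def queue_robot_experiment(conditions: dict) -> None:
    """Stand-in for dispatching conditions to robotic lab equipment."""
    print("Queuing robot experiment with:", conditions)


if __name__ == "__main__":
    queue_robot_experiment(propose_conditions("benzoic acid", "aniline"))

The design point this sketch illustrates is the hand-off: the model only proposes structured conditions, while deterministic software validates them before anything reaches the hardware.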

Contact: [email protected]


Shobita Parthasarathy is a professor at the Ford School of Public Policy and director of the Science, Technology, and Public Policy Program. Her research focuses on the politics and policy related to science and technology. She is interested in how to develop innovation—and innovation policy—to better achieve public interest and social justice goals.

“Our research shows that large language models such as ChatGPT are likely to reinforce inequality, reinforce social fragmentation, remake labor and expertise, accelerate the thirst for data and accelerate environmental injustice, due to the homogeneity of the development landscape, nature of the datasets, and lack of transparency in the algorithms that power them,” she said. “This makes national and international policy action even more crucial.”

Contact: [email protected]


Benjamin Kuipers is a professor of computer science and engineering at the College of Engineering who studies intelligent robotics, including ethics in AI and robotics.

“ChatGPT is an important development. It’s simultaneously impressive and clumsy at the moment, but we can confidently expect it to get smoother and more expert in the future—though real expertise waits on further breakthroughs in AI,” he said.

“When it is used in research studies, I would consider it essential for any publication to include an explicit methodological discussion of the use of ChatGPT, including how the research team modeled its imperfections and how they safeguarded the integrity of their experiment against probable errors on ChatGPT’s part.

“Pretty much any other research tool also has imperfections, and we expect scientists to document how they protect their results against those. Exactly how to do that with AI is being invented in an ad hoc way by scientific authors now, but we should have some generally accepted methods in a few years.”

Contact: [email protected]


Nigel Melville is an associate professor of information systems at the Ross School of Business and director of the Design Science program.

“AI and related technologies are as much about people as about new technologies,” he said. “My research and perspective focus on how these new machine capabilities can be used by people to advance positive outcomes while minimizing negative ones. AI-related machine capabilities are the fastest-growing app platforms in history, and we need to be having more conversations so that everyone can be part of developing better solutions for all.”

Contact: [email protected]


Nikola Banovic is an assistant professor of electrical engineering and computer science at the College of Engineering who studies trustworthy AI.

“We’re in a period of hype as these emerging AI-based systems—including the newer large language models like ChatGPT—are entering the consumer market without being well tested,” he said. “Even as people have been interacting with these platforms, there are also armies of programmers working behind the scenes, improving and modifying the systems so they can weather any PR storms that arise from the models ‘saying’ or doing the wrong thing.

“For the future, we need to develop tools that policymakers, consumer advocacy groups and even consumers themselves can use to understand what these models are doing and why, as well as what they cannot and should not do. Protecting people from untrustworthy AI that might not have consumers’ best interests in mind will take a combination of policy and AI literacy, so that the general public can evaluate the abilities and limitations of AI tools.”

Contact: [email protected]


Ivo Dinov is a professor at the School of Nursing and a professor of computational medicine and bioinformatics at the Medical School. He sees both parallels and differences between today’s contradictory opinions about AI and earlier debates over classroom technology such as calculators and laptops in the 1980s, 1990s and 2000s.

“Rather than describing one immutable technology or a specific computational platform, contemporary generative AI refers to a very broad, amorphous, rapidly evolving and highly potent technology,” Dinov said. “Instead of trying to restrict, control, delay or subdue generative AI proliferation, there are at least three important directions the academic community can focus on.

“First, train the trainer: the first impressions and most of the knowledge Gen-Z learners gain about generative AI appear to come from random sources. Training faculty in the technical pillars of generative AI, and in its enormous promise and potential pitfalls, will go a long way toward establishing trustworthy, consistent and responsible faculty-led student training in ethical AI development and use.

“Second, level the playing field: presently, there is a huge AI divide between the haves and have-nots. Some students can afford access to extremely powerful generative AI, or may get it through specialized lab resources, whereas others cannot.

“And third, endorse the free and open sharing of generative AI resources (data, algorithms, models, services). Think about the enormous societal benefits and productivity gains realized over the past few decades from the design, implementation, sharing and community support for the open infrastructure underpinning the World Wide Web. With strong academic support of free and open generative AI, this impact may increase exponentially.”

Contact: [email protected]


Paramveer Dhillon is an assistant professor at the School of Information. His research centers on developing new machine learning and causal inference techniques for human-centric applications, and on examining the impact of internet technologies on individuals and the economy. He seeks to understand the interactions between technology and human behavior, with the aim of designing effective interventions and policies that address the opportunities and challenges of the digital age.

Contact: [email protected]