A year of ChatGPT: U-M experts available to comment
A year ago, OpenAI launched ChatGPT, a chatbot built on the large language model GPT-3.5. Since then, knowledge workers across all sectors have been grappling with the potential uses of it and similar artificial intelligence models—as well as with the shortcomings of those models and what they produce.
University of Michigan experts are available to comment.
Shobita Parthasarathy is a professor of public policy and director of the Science, Technology, and Public Policy Program. She is interested in how to develop innovation—and innovation policy—to better achieve public interest and social justice goals. She spoke about AI earlier this year on the Business & Society podcast.
“Over the last year, the world has woken up to artificial intelligence, and the public discussion that has emerged is incredibly important. But in the process, we must be careful not to approach AI either as an asteroid coming to destroy us all or a magic wand that will make our problems disappear,” she said. “It is made by and for humans, and we as citizens have a crucial role to play in shaping it to maximize the benefits and eliminate the harms.
“I have been heartened by the steps taken by union leaders, the Biden administration, and the European Union to regulate the technology, but much more needs to be done. We must exercise our power to oversee the continued development and regulation of artificial intelligence.”
Video: “AI in society: A cautionary tale”
Rada Mihalcea, professor of computer science and engineering, studies natural language processing (NLP) at the intersection of computational social science and health care, particularly processing multiple kinds of data together. She is also an active promoter of NLP for positive social impact.
“This past year has been like no other in terms of how much deployment of NLP-powered applications we have seen, most if not all of it influenced by the rapid progress in large language models like ChatGPT, Bard, Claude and others. The impact that these technologies have had cannot be overstated,” she said. “This past year has also been a year like no other in terms of the shift in research focus in the Natural Language Processing community, where most current research work is now centered around language models.
“What I would like to see more of, and unfortunately I am still not seeing, is representation of the world population in these models. Large language models that are currently deployed (including ChatGPT) are very Western-focused and do not reflect the majority of the world population. The use of these systems to power social impact applications has also been very sparse. I am hoping we will get to see a significant increase in representation and many more social applications in the near future.”
Video: “AI in society: A look into the future”
Ravi Pendse is the vice president for information technology and chief information officer at U-M. He co-sponsored the creation of U-M’s first-ever Generative AI Advisory committee. He leads U-M’s Information & Technology Services department, which was the driving force behind the development of U-M’s AI Services. This effort made U-M the first university anywhere to provide all students, faculty and staff access to a suite of custom GenAI tools.
“ChatGPT democratized generative AI in the same way that the World Wide Web democratized the internet in the 1990s. Remember, the internet started as a government communications tool and then, with the web, it suddenly became a vital part of our daily lives,” he said. “ChatGPT opened doors for anyone to interact and experiment with AI in very real and very productive ways. It broke down the barriers and let everyone reap the benefits of this technology.
“That is why U-M has made a significant investment in creating our own GenAI tools, very much in the spirit of ChatGPT, which allow every member of our community to access them. We can see that AI has an essential role in the future of education. It is part of how the world does work now. But that means we must make sure that everyone has fair and equitable access to these tools. GenAI cannot change the world if only an elite few have access to it.
“After this first year of the GPT revolution, I am more confident than ever that tools like ChatGPT will never be able to replace human ingenuity. Instead, they will augment and enhance humanity. While these tools open almost unlimited opportunities for efficiency and innovation, there always needs to be a living, breathing human being making sure that these GenAI tools are being used responsibly, ethically and accurately.”
David Jurgens is an associate professor of information and computer science and engineering. His research creates technology for understanding social language, and he teaches students at all levels how to use AI.
“Tools like ChatGPT are changing how we educate as students learn to use them effectively,” he said. “Students—and teachers—know all too well how ChatGPT can generate reasonable answers to surprisingly sophisticated questions. However, students are just starting to learn how to use these tools not just as answer-generators but as editors, tutors and critics.
“These always-there capabilities can help students navigate challenging content and get personalized feedback while still developing their critical thinking skills. Educators will need to move beyond making course content ‘ChatGPT-proof’ and adapt how we ask students to engage with material and how we can evaluate learning success.
“As tools like ChatGPT get put in more human-like roles, we will need to consider how human-like we want or expect them to be. Should they be funny? Overly polite? Emotional? And whose definitions of these get embodied in the tool? Answering these questions will require input from consumers, policymakers and technologists to address the developing capabilities and ethical implications of social AI.”
Nigel Melville is an associate professor of technology and operations at the Ross School of Business whose primary focus areas include digital transformation, energy informatics and AI affordances. Melville spoke about AI earlier this year on the Business & Society podcast.
“The launch of ChatGPT in November 2022 was a Sputnik moment that caught society, government and industry (outside the tech sector) unprepared for its implications. One year later, while the technology has advanced, society continues to struggle with its significant implications,” he said.
“Here at the University of Michigan, we’ve taken an early lead, but so much more is needed. We can drive our public mission by leading the AI ‘space race’ towards safe—and productive—AI. The words of President John F. Kennedy in 1962 in reference to the space race are equally inspirational in today’s AI moment: ‘We set sail on this new sea because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people.'”
Robin Brewer, assistant professor of information, has conducted research on accessibility (older adults and people with vision impairments), voice-based interfaces, and online communities and social computing.
“The ChatGPT anniversary is surely clouded by the Altman removal (and subsequent reinstatement), but it is worth noting how transformative a freely accessible generative AI platform has been,” she said. “It has also sparked ongoing questions about bias, ethics and creative licensing throughout the (machine learning) communities and beyond that are important to remain attentive to moving forward. And, it has further emphasized how aspects of computing and ethics are important aspects of any truly interdisciplinary education.”
Kentaro Toyama, professor of information, is an expert on information and communication technologies and development, theories of social change, and data-centric analysis of social justice issues.
“The most significant thing about ChatGPT was that it was the first among similar systems to be publicly released,” he said. “Its technical advances are of course impressive, but by going public even with flaws, OpenAI accelerated competition among tech rivals, provoked louder calls for AI regulation, and unleashed a new wave of creative AI applications.”
Paramveer Dhillon is an assistant professor of information whose research revolves around developing new machine learning and causal inference techniques for human-centric applications. His research seeks to understand the intricate interactions between technology and human behavior, with the ultimate aim of developing effective interventions and policies to address the opportunities and challenges presented by the digital age.
“Over the past year, ChatGPT has significantly impacted multiple sectors, revolutionizing education through personalized learning, enhancing customer service with intelligent interactions, and aiding health care professionals by summarizing complex medical literature,” he said. “Economically, it has spurred innovation and efficiency, reshaping business strategies and creating new digital market opportunities.
“Looking ahead, the potential of ChatGPT and similar AI technologies is immense. In education, they promise even more tailored and accessible learning experiences. In health care, their role could expand into diagnostic support and patient care, leveraging vast data for deeper insights. Economically, the transformative impact of AI is set to redefine job roles, foster new industries and drive digital transformation.
“The ability of these tools to analyze data on a large scale heralds a future of enhanced, efficient decision-making across sectors, positioning AI as a key driver in addressing global challenges and augmenting human capabilities. This trajectory points to a transformative era where AI significantly elevates our approach to technology, economy and society.”
Lu Wang, associate professor of computer science and engineering, studies natural language processing for building trustworthy large language models for tasks like document summarization, language generation and reasoning.
“As large language models (LLMs) become widely used by the general public, it’s crucial to ensure their outputs are factual, accurate and safe. However, current models, including the advanced GPT-4, fall short in these aspects. For example, GPT-4 struggles with high-school math problems and faces limitations when used in scientific fields like biology, chemistry and drug discovery,” she said.
“Furthermore, it’s essential for LLMs to align with societal values and norms, a capability that current models inconsistently exhibit. This is particularly concerning as these models become more integrated into our daily lives, which would pose significant risks. Addressing these challenges, especially in deploying LLMs for real-world applications that tackle pressing societal issues such as climate change, education and scientific discoveries, requires time and patience. Progress in these areas depends on careful design and thorough evaluation.”
Ivo Dinov is a professor of nursing and computational medicine and bioinformatics. He is director of the Statistics Online Computational Resource and is an expert in mathematical modeling, statistical analysis, high-throughput computational processing and scientific visualization of large datasets. His applied research is focused on neuroscience, nursing informatics, multimodal biomedical image analysis and distributed genomics computing.
“It’s natural and appealing for most of us to focus on the low-hanging fruit—today’s or tomorrow’s impact of gen-AI in transforming clinical care for specific individuals, particular biomedical research problems or certain medical conditions,” he said. “The currently unrealized potential of gen-AI extends to solving complex and dynamic health care problems of known, expected or even presently unknown nature—specifically, health care challenges covering diverse medical conditions, heterogeneous populations and multiple generations.
“The future of gen-AI involves transforming current human knowledge into artificial brains representing layered networks of nested perceptrons, or simulated digital versions of biological neuronal cells. Humanity is extremely successful in educating young minds in K-12 schools. AI brains can similarly be trained by feeding them with lots of valuable information and reinforcing their learning.
“Under strictly controlled human supervision, such pretrained gen-AI systems can organically create new content and continuously improve during these steady self-learning processes. This gen-AI training resembles the life-long education that most of us aspire to. The well-trained ultimate AI systems will be fair, autonomous and sustainability-focused, not necessarily always subservient, predictable or solely pleasing their masters.”
Also in the video: “AI in society: A cautionary tale”
Maggie Makar, assistant professor of computer science and engineering, builds predictive models that encode cause-and-effect relationships rather than merely discovering associations.
Nikola Banovic, assistant professor of computer science and engineering, explores how to build trustworthy AI.
Michael Wellman, division chair of computer science and engineering, recently testified before the Senate Committee on Banking, Housing and Urban Affairs about regulating algorithmic financial trading.
Also in the video: “AI in society: A look into the future”
Joyce Chai, professor of computer science and engineering, builds robotic systems that can understand and act on natural language—that is, language as we ordinarily speak it.
Samantha Keppler is an assistant professor of technology and operations at the Ross School of Business. She can discuss K-12 educators’ perceptions of ChatGPT for helping their productivity, planning, engagement and student learning. She also can share insights on how ChatGPT might come to affect the ways schools manage teacher retention and shortages.