Do algorithm technologies improve efficiency? Study investigates reality of reliance, avoidance

September 24, 2024
Written By:
JT Godfrey, Ross School of Business

From detecting cancer in medical imaging to recommending additional lesson plans for students in K-12 education, algorithm technologies have become crucial across many different industries.

Whether algorithms lead to improved efficiency is another question, according to University of Michigan researchers.

Despite the widespread adoption of algorithm technologies, research on “algorithm aversion” shows that, in many contexts, workers are hesitant to use them in their everyday work. A new study by researchers at U-M’s Ross School of Business explores how algorithm reliance and avoidance play out in customer-facing services.

Clare Snyder

“We often hear people talk about ideal visions for algorithms,” said doctoral student Clare Snyder. “One vision is that these tools will help people make really good decisions, quickly. But often this vision is based on the algorithm’s performance in a vacuum; it’s abstracted from the context where people are making those decisions.”

One inspiration for the research was Snyder’s predoctoral experience designing algorithm tools. She said designers are often optimistic about efficiency gains and workplace adoption, but the design process rarely accounts for the human elements of implementation.

Snyder examined human-algorithm interaction in a study of roughly 400 participants with Samantha Keppler, assistant professor of technology and operations, and Stephen Leider, professor of technology and operations. They found that algorithm aversion can affect decisions in ways not previously explored.

In particular, they found that workers were averse to using algorithm-generated recommendations to make fast decisions.

Contrary to conventional expectations, workers are not always faster when an algorithm is implemented. The efficiency gains from adding algorithms to worker-customer interactions depend on how quickly workers adopt algorithm-generated suggestions, and that speed depends on how much workers trust the algorithm’s accuracy and on the workload pressure they face.

The study offers industry leaders and tech developers essential considerations for designing algorithms and integrating them into their operations.

“You may spend a lot of time developing an algorithm for a lofty goal,” Snyder said. “However, workers have to trust an algorithm to take its advice quickly. You can’t expect efficiency until they’ve had time to get the information about the algorithm’s good performance.

“Even then, the other conditions, such as workload and time pressure, have to be right.”

The team’s key findings show that companies need to set clear goals before implementing algorithm technology. Keppler says it’s important to remember that workers can either default to an algorithm’s recommendation quickly or spend more time considering it.

“Organizational leaders need to think about which of those actions is preferable,” she said. “Is the algorithm primarily there to help workers speed up or to improve worker accuracy?”

The researchers say this study is just the beginning of exploring the interaction between workers and machine learning algorithms. They hope more field and observational research into how workers use algorithms will ultimately improve how people interact with them in everyday life.

Particularly with the proliferation of popular tools such as ChatGPT, human reliance on algorithms will only get more complicated.

“We can build trust with generative AI, or not, kind of like we build trust with other people: slowly, over time, by learning what it can and cannot do well,” Keppler said. “While this, in theory, might be a good thing for reducing aversion, generative AI technologies are constantly changing and often inconsistent, making that trust hard to build—at least right now.”
