As the status quo shifts, we're becoming more forgiving when algorithms mess up

New inventions—like the printing press, magnetic compasses, steam engines, calculators and the internet—can create radical shifts in our everyday lives. Many of these new technologies were met with skepticism and resistance by those who lived through the transition.
Over the past 30 years alone, we've seen our relationship with the internet transform dramatically—it's fundamentally changed how we communicate, how we work and, more recently, how we make decisions.
As new technologies and ways of doing things emerge, we fixate on their flaws and errors, and judge them more harshly than the familiar alternatives we already rely on. These apprehensions are not unwarranted. Today, important debates continue around issues such as bias, privacy and accountability in the use of AI.
But how much of our aversion is really about the technology itself, and how much is driven by the discomfort of moving away from the status quo?
Algorithm aversion
As a Ph.D. student in cognitive psychology, I study human judgment and decision-making, with a focus on how we evaluate mistakes, and how context, like the status quo, can shape our biases.
In my research with cognitive psychologists Jonathan A. Fugelsang and Derek J. Koehler, we tested how people evaluate errors made by humans versus algorithms.
Despite algorithms' track record of consistently outperforming humans in several domains, people have been hesitant to use them. This mistrust goes back as far as the 1950s, when psychologist Paul Meehl argued that simple statistical models could make better predictions than trained clinical experts. Yet the response from experts at the time was far from welcoming. As psychologist Daniel Kahneman would later recount, the reaction from clinicians was largely one of hostility.
That early resistance continues to echo in more recent research, which shows that when an algorithm makes a mistake, people tend to judge and punish it more harshly than when a human makes the same error. This phenomenon is now called algorithm aversion.
Defining convention
We examined this bias by asking participants to evaluate mistakes made by either a human or by an algorithm. Before seeing the error, we told them which option was considered the conventional one—described as being historically dominant, widely used and typically relied upon in that scenario.
In half the trials, the task was said to be traditionally done by humans. In the other half, we reversed the roles, indicating that the role had traditionally been done by an algorithmic agent.
When humans were framed as the norm, people judged algorithmic errors more harshly. But when algorithms were framed as the norm, people's evaluations shifted. They were now more forgiving of algorithmic mistakes, and harsher on humans making the same mistakes.
This suggests that people's reactions may have less to do with algorithms versus humans, and more to do with whether something fits their mental picture of how things are supposed to be done. In other words, we're more tolerant when the culprit is also the status quo. And we're tougher on mistakes that come from what feels new or unfamiliar.
Intuition, nuance and skepticism
Yet explanations for algorithm aversion continue to make intuitive sense. A human decision-maker, for instance, might be able to consider the nuances of real life in a way an algorithmic system never could.
But is this aversion really just about the non-human limitations of algorithmic technologies? Or is part of the resistance rooted in something broader—something about shifting from one status quo to another?
These questions, viewed through the historical lens of human relationships with new technologies, led us to revisit common assumptions about why people are often skeptical and less forgiving of algorithms.
Signs of that transition are all around us. After all, debates around AI haven't slowed down. And for a few decades now, algorithmic tech has already been helping us with everyday choices, from the routes we drive and the products we buy to the media we watch, and even with higher-stakes decisions.
And while many studies document algorithm aversion, recent ones also show the opposite tendency, sometimes called algorithm appreciation, where people actually prefer or defer to algorithmic advice in a variety of contexts.
We're increasingly leaning on algorithms, especially when they're faster, easier and appear just as (or more) reliable. As that reliance grows, a shift in how we view technologies like AI—and their errors—seems inevitable.
This shift from outright aversion to increasing tolerance suggests that how we judge mistakes may have less to do with who makes them and more to do with what we're accustomed to.
More information: Hamza Tariq et al, Using conventional framing to offset bias against algorithmic errors, Judgment and Decision Making (2025).
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.