High-stakes decisions: Choosing between human and algorithmic decision-makers

Society increasingly uses algorithms to make weighty decisions in contexts including criminal justice, health care, and finance, a trend that has been criticized for institutionalizing bias and sacrificing fairness.
In a study published in the journal PNAS Nexus, Kirk Bansak and Elisabeth Paulson asked 9,000 US-based study participants to choose between decision-makers for two high-stakes situations: pretrial release and bank loan applications. Participants chose between two human decision-makers, between two algorithmic decision-makers, or between one human and one algorithmic decision-maker.
In each scenario, participants were presented with simulated statistics about the performance of the decision-makers. Participants prioritized efficiency over fairness when choosing decision-makers. This pattern was broadly consistent regardless of participants' race, political party, education level, or beliefs about artificial intelligence.
The pattern also held regardless of whether the decision-makers were human or algorithmic. However, participants were slightly more likely to pick humans than algorithms. Republicans had a stronger preference for humans than Democrats did.
When asked directly, a large percentage of participants claimed that fairness was one of their top priorities, despite scarcely taking fairness into account in their actual choices. According to the authors, demonstrably efficient algorithms are likely to overcome cultural aversion to algorithms across all groups.
More information: Kirk Bansak et al., Public attitudes on performance for algorithmic and human decision-makers, PNAS Nexus (2024).
Journal information: PNAS Nexus
Provided by PNAS Nexus