Discrimination by recruitment algorithms is a real problem


Predictive artificial intelligence (AI) hiring systems (AHSs) are used by employers every day to screen and shortlist job candidates.

But while AI hiring systems promise time and cost savings for employers, they may also enable, reinforce and deepen discrimination against people who are already marginalized in the labor market.

Groups at risk include women, older workers, people with disability and those who speak English as a second language.

In 2024, 62% of Australian organizations used AI in recruitment moderately or extensively, according to the most recent survey data.

Many of these systems use AI to:

  • Analyze or "parse" CVs
  • Conduct assessments evaluating an applicant's personality, behavior and abilities
  • Conduct "robo-interviews": video interviews where candidates self-record their answers to questions from an AI program

Some AHSs have been found to discriminate against applicants who wear a headscarf, while others are unable to make reasonable adjustments to enable access by people with disability.

In one widely reported case, an AI system developed by Amazon learned to downgrade the applications of job seekers who used the word "women" in their CVs. The system had been trained on CVs from the male-dominated tech industry.

Despite these known problems, substantial gaps exist in our understanding of the real, as opposed to theoretical, risks of discrimination when these systems are used.

A new study, published in the Journal of Law and Society, investigates, for the first time, the use of AHSs by Australian employers. It found that the way these systems are used in practice creates serious risks of discrimination.

The study drew on interviews with Australian recruiters, AI experts and developers, career consultants and publicly available material from two prominent AHS vendors in the Australian market.

The study found that the data used to train AHSs risks embedding present-day and historical discrimination, and that systems developed overseas may not reflect the diversity of the Australian population.

Many of the features built into the algorithmic models contain proxies for attributes like gender, disability or age, which may prevent people in these groups from being shortlisted for jobs.

For example, when gaps in employment history are used as variables in algorithmic models, they may act as proxies for gender.
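To see how this can happen, consider a minimal sketch in Python, with entirely invented numbers and no connection to any real vendor's system. The scoring model below never receives an applicant's gender, but because career-gap length correlates with gender in the simulated population, penalizing gaps produces different shortlisting rates for women and men.

```python
# Toy simulation only: invented numbers, not any real vendor's model.
import random

random.seed(0)

def make_applicant():
    gender = random.choice(["woman", "man"])
    # Assumption for illustration: women in this synthetic population take
    # longer career breaks on average, so gap length correlates with gender.
    gap_years = max(0.0, random.gauss(2.0 if gender == "woman" else 0.5, 0.8))
    skill = random.gauss(0.0, 1.0)  # "true" ability, independent of gender
    return gender, gap_years, skill

def score(gap_years, skill):
    # The model never sees gender, but penalizing gaps imports the
    # gender correlation into the score: a classic proxy variable.
    return skill - gap_years

applicants = [make_applicant() for _ in range(10_000)]
scored = [(gender, score(gap, skill)) for gender, gap, skill in applicants]

# Shortlist the top 20% of scores and compare rates by gender.
cutoff = sorted((s for _, s in scored), reverse=True)[len(scored) // 5]
for group in ("woman", "man"):
    total = sum(1 for g, _ in scored if g == group)
    picked = sum(1 for g, s in scored if g == group and s >= cutoff)
    print(f"{group}: {picked / total:.0%} shortlisted")
```

Running this sketch shortlists men at several times the rate of women, even though the model is formally "gender-blind."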

Discrimination can also result from the way the system is set up by employers.

For example, setting a time limit for answering questions may disadvantage job seekers from non-English-speaking backgrounds.

Also, discrimination can occur if employers do not ensure that their system is accessible to people with disability on an equal footing with other job seekers.

Significantly, the study found that employers using AHSs can create new forms of structural discrimination against marginalized groups who lack the resources, like computer access and digital literacy, to complete an online application.

Finally, AHSs offer fresh opportunities for employers to engage in intentional discrimination.

In a recent case in the US, an employer's recruitment software was configured to automatically reject female job seekers over 55 years of age and men over 60 years of age.

There's a lot at stake when "algorithm-facilitated discrimination" happens.

As one recruiter who was interviewed acknowledged, a "job application is literally a person's attempt to change their life with a new job."

A discriminatory AHS can cause harm at unprecedented speed and scale and has the capacity to systematically lock disadvantaged groups out of the workforce.

Governments in Australia should review and reform their discrimination laws to address any gaps in the protection of job seekers from this kind of discrimination.

Greater transparency is needed around the workings of AI systems, including the features they incorporate.

The data used to train these systems must be representative and documented.

Employers also need a better understanding of the AHSs rolled out in their organizations and their potential to cause harm at scale.

They should be obliged to provide comprehensive training to those responsible for customizing, operating and overseeing these systems.

Finally, and most fundamentally, the discovery in this research of significant risks to equality rights when employers use AHSs raises the question: should these systems be used at all?

Should it be permissible to use AHSs before necessary legal protections are in place?

Should they be in use before we have a deeper understanding, not only of the systems themselves and our interaction with them, but also of their impacts on historical, structural and intersectional disadvantage in the global labor market?

More information: Natalie Sheard, Algorithm-facilitated discrimination: a socio-legal study of the use by employers of artificial intelligence hiring systems, Journal of Law and Society (2025).
