

AI is now used for audio description. But it should be accurate and actually useful for people with low vision

Credit: Mikhail Nilov from Pexels

Since the recent explosion of widely available generative artificial intelligence (AI), it now seems that a new AI tool emerges every week.

With varying success, AI offers solutions for productivity, creativity, research, and also accessibility: making products, services and other content more usable for people with disability.

The Google Pixel 8 ad "Javier in Frame" is a poignant example of how the latest AI tech can intersect with disability.

Directed by blind director Adam Morse, it showcases an AI-powered feature that uses audio cues, haptic feedback (where vibrating sensations communicate information to the user) and animations to assist blind and low-vision users in capturing photos and videos.

The ad was applauded for being disability inclusive and representative. It also demonstrated a growing capacity for—and interest in—AI to generate more accessible technology.

AI is also poised to challenge how audio description is created and what it may sound like. This is the focus of our research team.

Audio description is a track of narration that describes important visual elements of visual media, including television shows, movies and live performances. Synthetic voices and quick, automated visual descriptions might result in more audio description on our screens. But will users lose out in other ways?

AI as people's eyes

AI-powered accessibility tools are proliferating. Among them is Microsoft's Seeing AI, an app that turns your smartphone into a talking camera by reading text and identifying objects. Another, Be My AI, uses virtual assistants to describe photos taken by blind users; it's an AI version of the original app Be My Eyes, where the same task was done by human volunteers.
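To illustrate the kind of automated image description such apps perform, here is a minimal sketch using the open-source Hugging Face transformers library with the BLIP captioning model. This is an assumption for illustration only: Seeing AI and Be My AI rely on their own proprietary models, and any caption produced this way would still need to be checked for accuracy.

```python
# Minimal sketch: generate a one-line description of a photo with an
# off-the-shelf image-captioning model (illustrative only; not the
# pipeline used by Seeing AI or Be My AI).
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe_photo(image_path: str) -> str:
    """Return a short machine-generated caption for the image."""
    result = captioner(image_path)
    return result[0]["generated_text"]

if __name__ == "__main__":
    # Example: prints something like "a person standing in a kitchen"
    print(describe_photo("photo.jpg"))
```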

There is also a growing range of AI software options for tasks such as text-to-speech and document reading.

Audio description is an essential feature to make visual media accessible to blind or vision impaired audiences. But its benefits go beyond that.

Increasingly, research shows audio description can benefit audiences beyond those who are blind or have low vision. It can also be a creative way to further enhance a production.

Traditionally, audio description has been created using human voices, script writers and production teams. However, in the last year, several international streaming services, including Netflix, have begun offering audio description tracks that are at least partially generated with AI.

Yet there are a number of issues with the current AI technologies, including their ability to generate false information. These tools need to be critically appraised and improved.

Is AI coming for audio description jobs?

Javier in Frame showcases an accessibility feature found on Pixel 8 phones.

There are multiple ways in which AI might impact the creation—and end result—of audio description.

With AI tools, streaming services can get audio description produced more quickly and cheaply. There's potential for various levels of automation, while giving users options to suit their specific needs and preferences. Want your cooking show to be narrated in a British accent? With AI, you could change that with the press of a button.
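As a rough illustration of how a synthetic voice could read out a description line in a user-chosen voice, here is a toy sketch using the open-source pyttsx3 text-to-speech library. The voice-matching logic and the sample script are assumptions for illustration; commercial streaming services use their own, far more capable TTS systems.

```python
# Toy sketch: speak an audio description line with a user-selected
# synthetic voice (illustrative only; not how streaming services do it).
import pyttsx3

def narrate(description: str, preferred_voice: str = "english") -> None:
    engine = pyttsx3.init()
    # Pick the first installed voice whose name or id matches the preference.
    for voice in engine.getProperty("voices"):
        if preferred_voice.lower() in (voice.name + voice.id).lower():
            engine.setProperty("voice", voice.id)
            break
    engine.setProperty("rate", 170)  # approximate words per minute
    engine.say(description)
    engine.runAndWait()

narrate("The chef folds the herbs into the dough and slides the tray into the oven.")
```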

However, in the audio description industry, many are worried AI could undermine the quality, creativity and professionalism humans bring to the equation.

The language-learning app Duolingo, for example, recently announced it was moving forward with an "AI-first" strategy. As a result, many contractors lost jobs that can now purportedly be done by algorithms.

On the one hand, AI could help broaden the range of audio descriptions available for a range of media and live experiences.

But AI audio description may also cost jobs rather than create them. The worst outcome would be a huge amount of lower-quality audio description, which would undermine the value of creating it at all.

Can we trust AI to describe things well?

Industry impact and the technical details of how AI can be used in audio description are one thing.

What's currently lacking is research that centers the perspectives of users and takes into consideration their experiences and needs for future audio description.

Accuracy—and trust in this accuracy—is vitally important for blind and low-vision audiences.

Cheap and often free, AI tools are now widely used to summarize, transcribe and translate. But it's a well-known problem that generative AI struggles to stay factual. Known as "hallucinations," these plausible fabrications appear even when the AI tools are given straightforward tasks, such as a simple audio transcription.
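One way accuracy concerns can be handled in practice is to keep a human in the loop and flag low-confidence output for review. The sketch below assumes the open-source openai-whisper package and uses its per-segment confidence scores; the thresholds are arbitrary and purely illustrative.

```python
# Sketch: transcribe an audio clip and flag segments that look unreliable
# so a human can verify them (thresholds are illustrative assumptions).
import whisper

model = whisper.load_model("base")
result = model.transcribe("clip.wav")

for seg in result["segments"]:
    suspicious = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5
    marker = "CHECK" if suspicious else "  ok "
    print(f"[{marker}] {seg['start']:6.1f}s  {seg['text'].strip()}")
```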

If AI tools simply fabricate content rather than make existing material accessible, it would even further distance and disadvantage blind and low-vision consumers.

We can use AI for accessibility—with care

AI is a relatively new technology, and for it to be a true benefit in terms of accessibility, its accuracy and reliability need to be absolute. Blind and low-vision users need to be able to turn on AI tools with confidence.

In the current "AI rush" to make audio description cheaper, quicker and more available, it's vital that the people who need it the most are closely involved in how the tech is deployed.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: AI is now used for audio description. But it should be accurate and actually useful for people with low vision (2025, May 21) retrieved 23 May 2025 from /news/2025-05-ai-audio-description-accurate-people.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
