

AI 'workslop' is creating unnecessary extra work. Here's how we can stop it

Before using AI for a work task, ask yourself whether you actually need to.

Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn't be the only one.

Our research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it.

This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational damage. Just this week, consulting firm Deloitte Australia agreed to a partial refund after an A$440,000 report it prepared for the Australian federal government was found to contain multiple AI-generated errors.

Against this backdrop, the term "workslop" has entered the conversation. Popularized in a recent Harvard Business Review article, it refers to AI-generated content that looks good but "lacks the substance to meaningfully advance a given task."

Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn't have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can deliver real benefits. We all have a role to play in getting this right.

The rise of AI-generated 'workslop'

According to a survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month.

The survey's research team found that, on average, each instance took recipients almost two hours to resolve, which they estimated would amount to US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm.

Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative, and trustworthy. This mirrors other research showing there can be trust penalties to using AI.

Invisible AI, visible costs

These findings align with our own research on AI use at work. In a survey of 32,352 workers across 47 countries, we found uncritical AI use is common.

While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased workload, pressure, and time on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that collaboration will suffer.

Making matters worse, many employees hide their AI use; 61% avoided revealing when they had used AI and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.

What you can do to reduce workslop

Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realize AI's benefits?

If you're an employee, three simple steps can help.

  1. Start by asking, "Is AI the best way to do this task?" Our research suggests this is a question many users skip. If you can't explain or defend the output, don't use it.
  2. If you proceed, verify and work with AI output like an editor; check facts, test code, and tailor output to the context and audience.
  3. When the stakes are high, be transparent about how you used AI and what you checked to signal rigor and avoid being perceived as incompetent or untrustworthy.

What employers can do

For employers, investing in governance, AI literacy, and human-AI collaboration skills is key.

Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate.

That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop.

Because workslop comes from how people use AI, not as an inevitable consequence of the tools themselves, governance only works when it shapes everyday behaviors. That requires organizations to build AI literacy and habits of responsible use alongside policies and controls.

Organizations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet fewer than half of employees report receiving any training or policy guidance.

Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

