

Evidence shows AI systems are already too much like humans. Will that be a problem?

A human and an AI having a serious Platonic discussion. Credit: AI-generated image

What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses—and seemingly know exactly what you need to hear? A machine so seductive, you wouldn't even realize it's artificial. What if we already have?

In a comprehensive meta-analysis, published in the Proceedings of the National Academy of Sciences, we show that the latest generation of large language model-powered chatbots match or exceed most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test, fooling humans into thinking they are interacting with another human.

None of us were expecting the arrival of super communicators. Science fiction taught us that artificial intelligence (AI) would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively and also empathetically. Another study found that large language models (LLMs) excel at detecting emotion in human-written messages.

AI can be more persuasive than humans in debates, scientists find

— The Guardian

LLMs are also skilled role players, assuming a wide range of personas and adapting their language to match. This is amplified by their ability to infer emotions and intentions from text. Of course, LLMs do not possess true empathy or social understanding—but they are highly effective mimicking machines.

We call these systems "anthropomorphic agents". Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphizing LLMs will fall flat.

This is a landmark moment: online, you can no longer tell the difference between talking to a human and talking to an AI chatbot.

On the internet, nobody knows you're an AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring responses to individual users. This has applications across many domains, such as legal services or public health. In education, their roleplay abilities can be used to create Socratic tutors that ask personalized questions and help students learn.

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps, but anthropomorphic seduction comes with far wider implications.

Users are ready to trust these agents so much that they disclose highly personal information. Pair this with the bots' highly persuasive qualities, and genuine concerns emerge.

Research by Anthropic further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given that AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale, whether to spread disinformation or to create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun recommending products in response to user questions. It's only a short step to subtly weaving product recommendations into conversations—without you ever asking.

What can be done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure—users need to always know that they are interacting with an AI. But this will not be enough, given the AI systems' seductive qualities.

The second step must be to better understand anthropomorphic qualities. LLM benchmarks currently measure "intelligence" and knowledge recall, but none so far measures the degree of "human likeness". With such a test, AI companies could be required to disclose anthropomorphic abilities with a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems with misinformation and disinformation, or the loneliness epidemic. In fact, Meta CEO Mark Zuckerberg has already signaled that he would like to fill the void of real human contact with "AI friends".

Relying on AI companies to refrain from further humanizing their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making its systems more engaging and personable, with the ability to remember details from past conversations. ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its lifelike voice mode adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can serve ill causes as well as good ones, from fighting conspiracy theories to encouraging donations and other prosocial behaviors.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn't let it change our systems.

More information: Sandra Peter et al, The benefits and dangers of anthropomorphic conversational agents, Proceedings of the National Academy of Sciences (2025).

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

