

New AI tool can spot shady science journals and safeguard research integrity

A summary of our datasets, methods, and results. Credit: Science Advances (2025). DOI: 10.1126/sciadv.adt2792

One of the big benefits of open-access journals is that they make research articles freely and immediately available to everyone online. This increases exposure for scientists and their work, ensuring there are no barriers, such as cost, to knowledge. Anyone with an internet connection can access the research from anywhere.

However, the rapid growth of this model has also led to the rise of questionable journals that exploit publishing fees paid by authors. They often promise quick publication but lack a rigorous peer-review process. Now there's a new AI tool that can spot telltale signs of shady journals, helping scientists avoid publishing in disreputable outlets.

In a paper in Science Advances, researchers describe how they trained AI to act like a detective. They trained it on more than 12,000 high-quality journals and around 2,500 low-quality or questionable publications that were once part of the Directory of Open Access Journals (DOAJ) but were removed for violating its guidelines. The AI then learned to look for red flags on journal websites and in their publications, such as a lack of information about the editorial board, sloppy website design, and low citation numbers.
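The idea of combining journal-level red flags into a single judgment can be sketched as a simple scoring function. Everything below is illustrative: the feature names, weights, and threshold are invented for this sketch and are not the paper's actual model, which was trained on labeled examples rather than hand-set rules.

```python
# Hypothetical red-flag scorer (invented weights, not the paper's trained model):
# combine simple journal-level signals into a suspicion score and flag journals
# that cross a threshold.

def suspicion_score(journal: dict) -> float:
    """Return a score in [0, 1]; higher means more red flags."""
    score = 0.0
    if not journal.get("editorial_board_listed", True):
        score += 0.4  # missing editorial-board information
    if journal.get("website_quality", 1.0) < 0.5:
        score += 0.3  # sloppy website design
    if journal.get("median_citations", 10.0) < 2.0:
        score += 0.3  # unusually low citation numbers
    return score

def flag(journal: dict, threshold: float = 0.5) -> bool:
    """Flag a journal as suspect when its red-flag score crosses the threshold."""
    return suspicion_score(journal) >= threshold

# Two invented example journals, one clean and one with every red flag.
legit = {"editorial_board_listed": True, "website_quality": 0.9, "median_citations": 12.0}
dodgy = {"editorial_board_listed": False, "website_quality": 0.3, "median_citations": 0.5}
print(flag(legit), flag(dodgy))  # → False True
```

In the study itself, the equivalent weights are learned from the roughly 14,500 labeled journals rather than set by hand, which is what lets the system generalize to journals it has never seen.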

The researchers then applied their trained model to a dataset of 93,804 open-access journals from Unpaywall, an online service that helps users find free versions of scholarly papers that are usually behind paywalls. It flagged more than 1,000 previously unknown suspect journals that collectively publish hundreds of thousands of articles.

The study does not name individual journals, partly due to concerns about potential legal reprisals. It does, however, state that many of the iffy ones are from developing countries.

Although this AI-based method is good at finding questionable journals at scale, it has some limitations. Currently, the system has a false positive rate of 24%, which means that it flags roughly one out of every four legitimate journals as suspect. As the researchers write in their paper, this means that human experts will also be needed.
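The "one out of every four" figure follows directly from the definition of the false positive rate. The counts below are illustrative, chosen only to show the arithmetic; they are not numbers from the paper.

```python
# Sketch of what a 24% false positive rate means in practice.
# The counts here are illustrative, not taken from the study.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of legitimate journals wrongly flagged."""
    return false_positives / (false_positives + true_negatives)

# Of 100 legitimate journals screened, suppose 24 are wrongly flagged as suspect.
fpr = false_positive_rate(false_positives=24, true_negatives=76)
print(fpr)  # → 0.24
```

At that rate, any automated flag is best treated as a prompt for human review rather than a verdict, which is exactly the pairing the researchers recommend.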

"Our findings demonstrate AI's potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review."

Protecting scientific integrity

The authors believe that further research can refine the AI tool's features and help it keep up with evolving tactics of questionable publishers.

This will be an ongoing battle that requires sharp human eyes and smarter AI systems. Humans and machines working together can help guide authors away from deceptive outlets and protect the integrity of scientific publishing across the world.


More information: Han Zhuang et al, Estimating the predictability of questionable open-access journals, Science Advances (2025). DOI: 10.1126/sciadv.adt2792

Journal information: Science Advances

© 2025 Science X Network

Citation: New AI tool can spot shady science journals and safeguard research integrity (2025, August 28) retrieved 28 August 2025 from /news/2025-08-ai-tool-shady-science-journals.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
