Artificial intelligence (AI) is everywhere. In the world of academia, it's having a big impact.
When used appropriately, AI can greatly boost productivity.
It can be an asset for organizing references and filing them neatly (something every academic struggles with), or for checking that the final publication is free of grammatical nightmares.
AI is also particularly well suited to the task of organizing and analyzing data for useful information.
Making life easier
Science is, by nature, data heavy.
But with an AI companion, researchers can now investigate more data than ever, spot patterns that would otherwise have gone unnoticed and focus their attention on what's really important.
A great example is Google DeepMind's AlphaFold.
Before AlphaFold, it could take several years to solve a protein structure—critical information if you want to know what a protein does and how it does it. AlphaFold can solve it in a matter of minutes.
But that's not what AI is most widely known for. AI is particularly good at writing.
In contrast, scientists are not known for perfect (or particularly engaging) prose.
Thanks to generative large language models (LLMs), even they can have their research read and understood by the masses.
Basically, AI can do a lot. But what happens if AI does it all?
Wait, AI wrote this?
Well, not this article. (Or any Particle article.) But researchers have been experimenting with AI to see if it really could write an academic article from start to finish.
There are many steps in the creation of a novel research paper: find a gap in the literature, ask a question, conduct experiments, analyze the data and make a conclusion.
In 2023, a pair of data scientists put their custom-built AI to the test.
It wrote all its own code, analyzed a large dataset, identified noteworthy trends and worked them into a logical discussion. It even created its own review system to refine the article.
All this in under an hour.
In 2024, Sakana AI Labs introduced the world to The AI Scientist, a program able to independently generate a "novel" research article at a cost no human research team could compete with.
While this might work well for writing a review article, there are still many areas of research that remain beyond the reach of AI—only a human knows how to pipette their life away in the lab.
Hallucinating robot liars
As impressive as these new models may be, they're still prone to hallucinations. Not in an "I see dead people" way, but in an "I don't know the answer, so I'm going to make one up" kind of way.
In other words, a hallucination is when an AI generates a misleading or false statement because it lacks adequate training data.
Worse, a hallucination can become a recurring problem if it's incorporated into future training data and replicated by later models.
Researchers hunt for these hallucinations to help uncover fraudulent AI-written papers and have them retracted from the scientific literature.
Robot liars aren't the only problem: AI-derived results can also be fraught with bias.
Because current AI tools are so tunable, scientists must ensure they're not training their models to find only the results they want.
Clearly, this kind of generative AI doesn't hold up to rigorous scientific standards.
Until it can overcome the issues of hallucinations and data biases, automated AI isn't going to win a Nobel Prize any time soon.
Institutions under threat
AI may not be good at producing groundbreaking new ideas, but research that builds on current knowledge is still needed, even if it's only pointing out the obvious.
These AI written articles pose a serious threat to the peer-reviewed publication process.
It's a system already under considerable strain.
An AI model like The AI Scientist could rapidly generate articles, potentially flooding the literature with subpar "research." This would make it even more difficult to spot significant results in the haystack of AI junk.
When discussing the ethics of their tool, even Sakana AI Labs admits that "there is significant potential for misuse."
Academic journals and universities are enforcing strong AI policies to try to overcome these ethical pitfalls.
Some companies have recently released AI-detection software to spot plagiarism among students (ironically, an AI system built to sniff out AI).
So will scientists get Robocopped?
Science is about novelty, non-conventional ways of looking at things and expanding human knowledge.
Despite the power of AI, human scientists still hold the upper hand.
Provided by Particle
This article first appeared on Particle, a science news website based at Scitech, Perth, Australia. Read the original article.