
February 26, 2025

Why the use of GenAI in higher education is a cautionary tale

Credit: Pixabay/CC0 Public Domain

New research into the use of Generative AI (GenAI) among students studying law at university has found that guidelines and training are essential, but are not enough to ensure the responsible use of AI.

Dr. Armin Alimardani, from the University of Wollongong's (UOW) School of Law, authored the study, published in IEEE Transactions on Technology and Society, which revealed both the promise and the pitfalls of using GenAI in student assessments.

As part of an elective subject, Law and Emerging Technologies, Dr. Alimardani examined how law students engaged with GenAI in their assignments.

Students were tasked with preparing a government policy submission on the ethical and legal implications of autonomous vehicles.

They had the freedom to use GenAI, provided they critically reflected on their usage and substantiated AI-generated content with credible sources.

The students received explicit training in responsible AI use, yet Dr. Alimardani said the results of the study were revealing and highlighted the challenges of including AI in the learning process.

"Many students successfully leveraged AI to refine arguments, distill complex information, and enhance engagement. However, a considerable number of students disregarded instructions on responsible AI use," he said.

"Some included entirely fabricated academic sources, while others misrepresented legitimate sources, citing papers that did not contain the claims they were purported to support. This phenomenon, known as AI 'hallucination,' is a well-documented issue with large language models, but its impact in education is particularly concerning."


Dr. Alimardani said that if students develop a habit of relying on unverified AI outputs, the consequences could extend far beyond the classroom.

"This is not merely a hypothetical concern—there have already been real-world cases where lawyers have cited non-existent case law or misinterpreted legal precedents due to AI-generated misinformation, resulting in professional embarrassment and even disciplinary action. In response, the Supreme Court of NSW has issued a practice note to promote the responsible use of GenAI in legal proceedings," he said.

However, the study's findings suggest that while such guidelines and training are essential, they may not be sufficient to ensure accuracy and ethical AI use in practice. Dr. Alimardani attributes part of the issue to what he calls "verification drift." This phenomenon occurs when GenAI users are aware of the technology's limitations and understand the need to verify AI-generated content. However, as they review the material, the authoritative tone and polished presentation of GenAI gradually lead them to perceive it as reliable, ultimately making verification seem unnecessary.

While many academics have already incorporated GenAI into their assessments, Dr. Alimardani urged them to exercise greater vigilance. In some student submissions, it took Dr. Alimardani hours to uncover the content that was not credible or the cited sources that were misrepresented.

"I suspect that many other educators may have unknowingly overlooked instances of seemingly plausible content that were, in fact, hallucinations," he said.

"I don't believe students should bear the blame. Even experienced lawyers who are aware of GenAI's tendency to hallucinate have repeatedly submitted fabricated materials to the court. This demonstrates that simple guidelines are insufficient."

"Educators need to go beyond just providing instructions on AI usage; they should actively engage students in responsible GenAI practices, provide ongoing feedback, and, most importantly, showcase real examples of situations where AI-generated content has been misleading or completely fabricated."

More information: Armin Alimardani, Borderline Disaster: An Empirical Study on Student Usage of GenAI in a Law Assignment, IEEE Transactions on Technology and Society (2025).


