
July 22, 2025

AI in universities: How large language models are transforming research

Credit: Pixabay/CC0 Public Domain

Generative AI, especially large language models (LLMs), presents exciting and unprecedented opportunities, as well as complex challenges, for academic research and scholarship.

As different LLMs (such as ChatGPT, Gemini, Claude, Perplexity.ai and Grok) continue to proliferate, academic research is beginning to undergo a significant transformation.

Students, researchers and instructors in higher education need support and guidance to address these challenges and risks.

In a time of rapid change, students and academics are advised to look to their institutions, programs and units for discipline-specific policy or guidelines regulating the use of AI.

Researcher use of AI

A recent study found that at least 13.5% of biomedical abstracts published in 2024 showed signs of AI-generated text.

Large language models can support many aspects of the research process, although caution and human oversight are always needed to judge when their use is appropriate, ethical or warranted. LLMs can assist with tasks such as idea generation, literature review, data analysis and writing.

However, there are significant concerns and challenges surrounding the appropriate, ethical, responsible and effective use of generative AI tools in the conduct of research, writing and research dissemination. These include concerns about accuracy, authorship, privacy and ethical standards.


AI research assistants, 'deep research' AI agents

There are two emerging categories of AI tools that support academic research:

1. AI research assistants: The number of AI research assistants that support different aspects and steps of the research process is growing at an exponential rate. These technologies have the potential to enhance and extend traditional research methods in academic work, with assistants supporting different stages of the research process, from literature searching to data analysis.

2. 'Deep research' AI agents: The field of artificial intelligence is advancing quickly with the rise of "deep research" AI agents. These next-generation agents combine LLMs and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses.

Research is currently being conducted to evaluate the quality and effectiveness of deep research tools, and benchmarks are being developed to assess their performance and quality.

Criteria include elements such as cost, speed, editing ease and overall user experience.

The purpose of deep research tools is to meticulously extract, analyze and synthesize scholarly information, empirical data and diverse perspectives from a wide array of online and social media sources. The output is a detailed report, complete with citations, offering in-depth insights into complex topics.

In a span of just a few months (December 2024 to February 2025), several AI platforms, including Google Gemini, Perplexity.ai and ChatGPT, introduced their "deep research" features.

The Allen Institute for Artificial Intelligence, a non-profit research institute, is experimenting with a new open access research tool called Ai2 ScholarQA that helps researchers conduct literature reviews more efficiently by providing more in-depth answers.

Emerging guidelines

Several guidelines have been developed to encourage the responsible and ethical use of generative AI in research and writing.

LLMs support interdisciplinary research

LLMs are also powerful tools to support interdisciplinary research. Recent emerging research (yet to be peer reviewed) suggests they have great potential in areas such as the biological sciences, chemical sciences, engineering, environmental sciences and social sciences. It also suggests LLMs can help eliminate disciplinary silos by bringing together data and methods from different fields and by automating data collection and generation to create interdisciplinary datasets.

LLMs can also aid interdisciplinary collaboration by helping to analyze and summarize large volumes of research across various disciplines. "Expert finder" AI-powered platforms can analyze researcher profiles and publication networks to map expertise, identify potential collaborators across fields and reveal unexpected interdisciplinary connections.

This emerging knowledge suggests these models will be able to help researchers drive breakthroughs by combining insights from diverse fields—like epidemiology and physics, climate science and economics or social science and climate data—to address complex problems.

Research-focused AI literacy

Canadian universities and research partnerships are providing AI literacy education to people in universities.

One national AI institute, for example, offers K-12 AI literacy programming and related resources.

Many universities are also offering training and resources that focus specifically on the use of generative AI tools in research.

Collaborative university work is also happening. For example, as vice dean of the Faculty of Graduate & Postdoctoral Studies at the University of Alberta (and an information science professor), I have worked with deans from the University of Manitoba, the University of Winnipeg and Vancouver Island University to develop guidelines and recommendations.

Considering the growing power and capabilities of generative AI tools, there is an urgent need to develop AI literacy training tailored for academic researchers.

This training should focus on both the potential and the limitations of these tools at the different stages of the research and writing process.

Provided by The Conversation


This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility: fact-checked, trusted source, written by researcher(s), proofread.
