

University ranking systems are being rejected. African institutions should take note


The Sorbonne, founded in Paris in 1253 and known globally as a symbol of education, science and culture, has announced that, starting in 2026, it will stop submitting data to rankings. It is joining a growing movement of universities questioning the value and methodology of these controversial league tables.

Rankings companies add together various indexes that purport to measure quality. These include research outputs, the results of reputation surveys, the money universities receive in grants and donations, and how many Nobel prize winners they have ever employed.

Nathalie Drach-Temam, president of the Sorbonne, has said that "the data used to assess each university's performance is not open or transparent," and that "the reproducibility of the results produced cannot be guaranteed."

This echoes wider concerns about the lack of scientific rigor of ranking systems that claim to measure complex institutional performance through simplified metrics.

The problem is that the general public believes the rankings offer an indication of quality. As a result, rankings have enormous influence over the higher education market, including which institutions prospective students choose to apply to.

The university's decision aligns with its commitment to two open-science agreements: one signed by more than 700 research organizations, funders and professional societies, and another signed by about 200 universities and research institutes. Both advocate for open science practices that make publications, data, methods and educational resources transparent, accessible and reusable by everyone without barriers. And both recommend "avoiding the use of rankings of research organizations in research assessment."

The Sorbonne joins a growing list of high-profile institutions abandoning rankings. Several universities have opted out of major ranking systems, and in the US, 17 medical and law schools have withdrawn from discipline-specific rankings.

There are five major ranking companies and at least 20 smaller ones. On top of these are a similar number of discipline-specific and regional rankings. Together they form a substantial commercial industry. Yet the rankings themselves are accessible without charge.

The rankings industry has increasingly turned its attention to Africa. It sees the continent as a growth opportunity at a time when it is losing traction among high-profile institutions in the global north.

There has been a rapid increase in events run by rankings organizations on the continent. These events are very expensive and often quite luxurious, attended by vice-chancellors, academics, consultants and others.

As an academic involved in higher education teaching, I believe that chasing the rankings can harm Africa's fragile higher education systems. There are two main reasons for this.

Firstly, the rankings metrics largely focus on research output, rather than on the potential for that research to address local problems. Secondly, the rankings fail to consider higher education's role in nurturing critical citizens, or contributing to the public good.

The Sorbonne's decision reflects a growing body of opinion that the rankings industry is unscientific and a drain on university resources.

Nevertheless, many vice-chancellors are not willing to risk the cost of withdrawing. Rankings may do a poor job of indicating quality in all its nuanced forms, but they are very effective as marketing. And even if a university chooses to stay out of the rankings by refusing to hand over its data, the industry continues to include it, based only on limited publicly available data.

The ranking industry

Rankings themselves are available for free. The industry derives most of its revenue from reselling the data that universities provide. Universities submit detailed institutional data to ranking companies without charge. That information is then repackaged and sold back to institutions, governments and corporations.

This data includes institutional income. It often also includes contact details of staff and students, which are used for "reputation surveys."

This business model has created what can be described as a sophisticated data harvesting operation disguised as academic assessment.

Mounting criticism

Academic research has documented the problems with ranking methodologies. These include:

  • the use of proxy metrics that poorly represent institutional quality. For example, many university rankings do not measure teaching quality at all, while those that do rely on proxies such as income, staff-to-student ratio and international academic reputation.
  • composite indexing that combines unrelated measurements. The metrics that are collected are simply added together, even though they have no bearing on each other. Our students are repeatedly warned of the dangers of combining unrelated variables into a single index, and yet this practice is at the heart of the rankings business.
  • subjective weighting systems that can dramatically alter results based on arbitrary decisions. If the system weights reputation at 20% and university income at 10%, we get one ordering of institutions. Switch these weightings, making the former 10% and the latter 20%, and the list rearranges itself. And yet the quality of the institutions is unchanged (a short numerical sketch follows this list).
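
To make the composite-index and weighting problems concrete, here is a minimal sketch in Python. The institutions, scores and weights are entirely invented for illustration and do not reproduce any real ranking's data or formula; they simply show how swapping two weights reorders institutions whose underlying scores never change.

```python
# Hypothetical illustration: the same (invented) institutional scores
# ranked under two weighting schemes produce different orderings,
# even though nothing about the institutions themselves has changed.

institutions = {
    "University A": {"reputation": 0.90, "income": 0.40},
    "University B": {"reputation": 0.50, "income": 0.95},
    "University C": {"reputation": 0.70, "income": 0.70},
}

def composite_score(scores: dict, weights: dict) -> float:
    """Add weighted metrics into a single index, as composite rankings do."""
    return sum(weights[metric] * value for metric, value in scores.items())

def rank(weights: dict) -> list:
    """Return institution names ordered by composite score, best first."""
    return sorted(
        institutions,
        key=lambda name: composite_score(institutions[name], weights),
        reverse=True,
    )

# Reputation weighted at 20%, income at 10%:
print(rank({"reputation": 0.20, "income": 0.10}))
# -> ['University A', 'University C', 'University B']

# The same two weights swapped:
print(rank({"reputation": 0.10, "income": 0.20}))
# -> ['University B', 'University C', 'University A']
```

Under the first scheme University A leads; swap the two weights and University B does, with University C holding the middle in both cases. Nothing about the institutions changed, only an arbitrary decision about weights.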

Rankings tend to favor research-intensive universities while ignoring teaching quality, community engagement and local relevance.

Most ranking systems emphasize English-language publications. This reinforces existing academic hierarchies rather than providing a meaningful assessment of quality.

Where new rankings are being introduced, they sadly still suffer from the same problems: proxy measures, composite indexing and subjective weightings.

In addition, many of the ranking companies do not publish precise methodological details. This makes it impossible to verify their claims or to understand the basis on which institutions are actually assessed.

Researchers have argued that rankings have thrived because they align with the idea of higher education as a marketplace where institutions compete for students, staff and funding. This has led universities to prioritize activities that improve their ranking positions rather than activities that best serve their students and communities.

The emphasis on quantifiable outputs has created what scholars call "institutional isomorphism": pressure for all universities to adopt similar structures and priorities regardless of their specific missions or local contexts.

Research suggests that striving for a spot in the rankings limelight affects resource allocation, strategic planning and even which students apply to institutions. Some universities have shifted resources from teaching quality to research output specifically to improve rankings. Others have been accused of gaming the system by manipulating data to boost their positions.

Looking forward

Participation in methodologically flawed ranking systems presents a contradiction: universities built on principles of scientific research continue to support an industry whose methods would fail basic peer review standards.

For universities still participating, the Sorbonne's move raises an uncomfortable question: what are their institutional priorities and commitments to scientific integrity?

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

