Strengths, weaknesses and dangers of university rankings
Errare humanum est, perseverare diabolicum ("to err is human; to persist in error is diabolical")
We list below several relevant issues that have arisen in recent years concerning university rankings:
Project 5-100 in Russia: the goal of placing five universities in the top 100 of the three leading (read: most mediatized) rankings. While developing research at the governmental level should be encouraged, it should not be done at the expense of ethics. The future will tell whether appropriate protocols have been followed, but it is the primary objective itself that raises serious concerns.
The authors appreciate the professionalism of Dmitry Livanov, who stated that "entering the international rankings can't be a goal in itself" and that "we understand that the rankings only provide a rough evaluation of university performance." We could not agree more. It is ironic, however, that the title of his paper mentions university rankings.1
Too many universities now focus on improving their scores rather than on high-quality research and teaching.2
Internationalization: this is a new dimension that has been added to how universities are currently ranked. We argue, however, that in many situations it is much easier to hire an incompetent foreigner than a competent local in order to meet ranking parameters. This dimension is somewhat harder to manipulate for student intake, but still not impossible. We therefore believe that the ratio of foreign staff and students can no longer be taken seriously as a measure and in no way guarantees performance.
Citations: self-citations, citation networks, blackmail during peer review… the methods for biasing the citation parameter are countless.3,4 While the diffusion of knowledge is central to academic performance, such a crude measure must be handled very carefully. Tenure committees often misjudge professorial maturity and individual competence; by applying quantifications that do not necessarily reflect scientific merit, they undermine genuine faculty contributions (a minimal numerical sketch follows this list).
The famous "Publish or Perish" criterion: disseminating research is necessary, but an "at all costs" approach encourages a low-quality, high-quantity research paradigm, penalizing faculty who need time to produce rigorous, high-impact research that translates into quality papers.
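To make the citation issue above concrete, here is a minimal Python sketch with entirely invented data (the author IDs, citation lists, and the function name citation_counts are all hypothetical): it compares a raw citation total with one that excludes self-citations, the inflation mechanism documented in reference 3.

```python
# Hypothetical sketch: how self-citations inflate a raw citation count.
# Each entry pairs a paper's author IDs with the author sets of the
# papers that cite it. All data are invented for illustration.

papers = [
    ({"A", "B"}, [{"A"}, {"A", "B"}, {"C"}, {"D"}]),  # paper by A and B
    ({"A"},      [{"A"}, {"A"}, {"E"}]),              # paper by A alone
]

def citation_counts(papers):
    raw = independent = 0
    for authors, citing_author_sets in papers:
        for citers in citing_author_sets:
            raw += 1
            # Count a citation as independent only if the citing paper
            # shares no author with the cited paper.
            if not (authors & citers):
                independent += 1
    return raw, independent

raw, independent = citation_counts(papers)
print(f"raw citations: {raw}, excluding self-citations: {independent}")
# -> raw citations: 7, excluding self-citations: 3
```

Even in this toy example, more than half of the apparent "impact" disappears once self-citations are removed, which is why a raw count taken at face value can mislead a tenure committee or a ranking.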
We now present a critique of four of the main international university rankings: the ARWU (Shanghai Ranking), the THE ranking, the QS ranking, and the Leiden Ranking, ordered from what we deem the most honest to the most obscure. Each is given a score out of 10. As we explained previously, we live in a society of rankings, so let us 'rank the rankings'.
The Leiden ranking: 6/10
Strengths: transparency, honesty, open access, professional work, no pretension. The only one of these four rankings that clearly states what it is ranking.
Weaknesses: few parameters. It is more of a very useful database than a ranking.
The ARWU: 5/10
Strengths: takes into account top awards (such as Nobel Prizes) and papers published in Nature and Science; considers "performance per capita".
Weaknesses: also encourages publication without regard for quality; biased towards the "hard sciences"; the highly cited researcher indicator is creating major issues.1
The THE ranking: 4/10
Strengths: many different parameters, reasonably well balanced. Born from the QS ranking, it has learned from its predecessor's mistakes. A certain transparency in data and methodology.
Weaknesses: the "citations per faculty" parameter has been abused at least twice2,3,5 and encourages academic misconduct (illustrated below). Many researchers have received reviewer feedback asking them to cite papers by the same authors3… could this be indirectly related to this parameter? Journal editors may wish to investigate or publish data on this. Internationalization is also a weakness: it is easy to hire international faculty instead of good faculty. Reputation, too, can be manipulated.
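As a hedged numerical sketch of why per-capita parameters such as "citations per faculty" are so easy to game, the toy computation below (all figures invented) shows how the ratio jumps when the reported faculty headcount shrinks, or when a few highly cited hires are added, with no change in the underlying research.

```python
# Hypothetical sketch: the "citations per faculty" ratio reacts strongly
# to its denominator. All figures are invented for illustration.

def citations_per_faculty(total_citations: int, faculty_count: int) -> float:
    return total_citations / faculty_count

baseline = citations_per_faculty(50_000, 1_000)    # 50.0
# 200 staff reclassified so they no longer count as "faculty":
reclassified = citations_per_faculty(50_000, 800)  # 62.5
# Ten highly cited hires bring 20,000 citations with them:
star_hires = citations_per_faculty(70_000, 1_010)  # ~69.3

print(baseline, reclassified, round(star_hires, 1))
# -> 50.0 62.5 69.3
```

The same arithmetic applies to any per-capita indicator: whoever controls the reported denominator effectively controls the score.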
The QS ranking: 1/10
Strengths: a beautiful website. The rankings per field are a good idea, though not discussed here.
Weaknesses: reputation carries half of the weight and can be manipulated; "student-faculty" ratios are skyrocketing in some universities year after year2; the international parameters can be easily manipulated, as can "citations per faculty" (cf. the THE ranking's weaknesses). Although several parameters are almost identical to those of the THE ranking, there are major inconsistencies2. Also, until 2017, low scores were "hidden", hardly a great example of transparency! Our article2 may at least have had an impact on this.
We believe the QS ranking is by far the least reliable of the four, and its heavy media exposure is a concern. Worse, in 2018 all archives except the last three years were removed: long-term analyses of the effect of QS rankings on university policies are now much more difficult.
From this short analysis, the current situation is clear:
University rankings now occupy a major place in the minds of virtually everyone
They are not designed to encourage good work, and they are easy to game
Our challenge: to design a ranking that is reliable, transparent, and virtually impossible to manipulate.
1. Alekseev O. Climbing the global university rankings. University World News. July 2014. Available at: http://www.universityworldnews.com/article.php?story=20140702115809236
2. Mussard M, James AP. Engineering the Global University Rankings: Gold Standards, Limitations and Implications. IEEE Access. 2018;6:6765–6776.
3. Mussard M, James AP. Boosting the ranking of a university using self-citations. Current Science. 2017;113(10):1827.
4. Resnik DB, Gutierrez-Ford C, Peddada S. Perceptions of Ethical Problems with Scientific Journal Peer Review: An Exploratory Study. Sci Eng Ethics. 2008;14(3):305–310.
5. Guttenplan DD. Questionable Science Behind Academic Rankings. The New York Times. November 2010. Available at: https://www.nytimes.com/2010/11/15/education/15iht-educLede15.html