Prompted by numerous unethical strategies employed by universities to increase their ranking, we present here our abuse-proof ranking: the A3 Ranking. Why A3? An Academic ranking made by Academics, and for Academics (well, and for anybody else sharing academic values).
Top 500 and Under 50 rankings
First, unlike most other rankings, we name what we are ranking: this ranking is a measure of the research impact of universities. We focus on research because it is recognized as the most important output for the visibility of a world-class university, and because it is easily measurable.
By looking at the strengths and weaknesses of other rankings, we decided to collect the following data:
Publications: in addition to counting the number of publications from 2014 to 2016, we weight them by the average contribution per paper of a given university. The papers for each year are weighted by the Field-Weighted Citation Impact (FWCI) of the university for that year. As this parameter can be manipulated (voluntarily or not), it is capped at 2.5; it also cannot be higher than the percentage of top 10% articles (per Source Normalized Impact per Paper, SNIP) divided by 15. The total for each year is then multiplied by the square root of the percentage of top 10% articles per SNIP divided by 15. [1]
This system discourages publishing many papers at the expense of quality, since the score will at best remain stable with numerous low-quality papers. It also limits the impact of citations by taking into account the quality of the journal itself.
Books: We are, as far as we know, the first ranking to give significant weight to books in addition to journal publications. Books are an essential part of dissemination in some research areas, for example the social sciences; yet they are barely recognized elsewhere, being counted at best as simply "one publication". The weight attributed to a book here is roughly equivalent to 10 to 20 "traditional" papers, depending on how much a university publishes books compared to traditional papers. [2]
Awards: inspired by the ARWU ranking, we decided to grade awards in a similar manner: the Nobel Prizes in Physics, Chemistry, and Physiology or Medicine; the Fields Medal; and, with a weight of 0.4, the "Nobel prize" for Economics, the Kluge Prize and the Holberg Prize. The weight of 0.4 compensates for the absence of a single universal prize for the social sciences by distributing the weight over these three prizes.
The institution to which an award winner is affiliated (or his/her last affiliation if retired) receives a weight of 1, and the institution where he/she obtained a PhD (if any) receives 0.5. The official sharing fractions of the Nobel Prizes are used; in the case of several university affiliations, the weight is shared equally between them; when the affiliations include both universities and private companies, the companies are discarded and their weight redistributed to the universities. Unlike the Nobel Prizes, the other prizes are awarded to a single individual.
The last 60 years are considered, and the weight decreases linearly with time (the year of the award is used for both the award itself and the PhD affiliation).
This parameter is very difficult to manipulate: it is impossible to know in advance that a PhD student will receive a top award, and very difficult to hire a potential awardee in advance just to improve a ranking. Only top-quality research and a good work environment will attract such top researchers.
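As an illustration, here is a minimal sketch of how the points from a single award could be computed for one university, assuming a simple linear decay over the 60-year window and using the weights stated above (1 for the affiliated institution, 0.5 for the PhD institution, 0.4 for the three social-science prizes). The function name and the exact decay formula are ours, not part of the official methodology.

```python
from datetime import date

# Prizes counted with a reduced weight of 0.4 (see above).
SOCIAL_SCIENCE_PRIZES = {"Economics", "Kluge", "Holberg"}

def award_points(prize, award_year, share=1.0, is_phd_institution=False,
                 current_year=None):
    """Points one award contributes to one university (illustrative).

    prize              -- name of the prize, e.g. "Physics" or "Kluge"
    award_year         -- year the prize was awarded
    share              -- official Nobel sharing fraction, or the equal split
                          between several affiliated universities
    is_phd_institution -- True if this is where the laureate obtained the PhD
                          (weight 0.5 instead of 1)
    """
    current_year = current_year or date.today().year
    age = current_year - award_year
    if age > 60:                      # only the last 60 years are considered
        return 0.0
    time_weight = 1.0 - age / 60.0    # assumed form of the linear decay
    prize_weight = 0.4 if prize in SOCIAL_SCIENCE_PRIZES else 1.0
    role_weight = 0.5 if is_phd_institution else 1.0
    return prize_weight * role_weight * share * time_weight

# Example: a 2010 Physics Nobel shared between two laureates, counted for
# the university where one of them obtained his/her PhD.
print(round(award_points("Physics", 2010, share=0.5,
                         is_phd_institution=True, current_year=2018), 3))
```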
Score per capita: again following the ARWU ranking strategy, we include a score per capita, for two main reasons: at equal research impact, a university with a higher density of high-quality staff should be encouraged; and it limits the benefit of expanding a university just to increase visibility, without caring about quality. When no source was available for the number of staff, a (low) score of 8/100 was awarded. Note that the score per capita gives a reliable indication of the quality of the work at a given university; the 100 best scores can be considered the top 100 universities in terms of quality of research. Below this level, there may be universities that are not ranked in the original top 500 today but that have a high score per capita.
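A minimal sketch of how such a per-capita figure could be derived, assuming it is simply the research score divided by the staff count, with the stated default of 8/100 when no staff figure is available; the exact normalization used by the A3 Ranking is not detailed here, so this is only an illustration.

```python
def score_per_capita(total_score, n_staff=None, default=8.0):
    """Illustrative per-capita score: research score per staff member.

    When no reliable staff figure is available, a low default score of
    8/100 is awarded (see text). The actual normalization of the A3
    Ranking may differ; this sketch only shows the fallback logic.
    """
    if not n_staff:          # missing or zero staff count -> default score
        return default
    return total_score / n_staff
```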
The weights of the different parameters are as follows:
Scores in each category are not linear but proportional to the square root of the "performance".
Key information: these parameters may evolve in the future if necessary, to ensure that any potential for manipulation is neutralized and to incorporate future ideas and recommendations from our followers.
Also, the very concept of an exact ranking is dangerous: we therefore decided to group the universities into categories (Top 20, Top 300…), with buffer categories (about 100th, about 200th); for this last reason, 510 universities are ranked in total in this 2018 version.
It is very easy to compute the exact mathematical rank of a given university from the data we provide; however, we refuse to encourage communication of such precise positions. We believe that the nomenclature we have adopted is a good compromise between the need for communication and the uncertainty of the numbers.
Country ranking
The country ranking considers publications (65%) and awards (35%). Data are extracted from SciVal. It is a measure of the research impact per country.
The Publications score is proportional to the number of publications from 2014 to 2018 affiliated with institutions of a given country, multiplied by the average FWCI raised to the power 1.5. "Institutions" include universities, research centers and companies.
The Awards score is proportional to the number of awards received by researchers affiliated with the institutions of each country; if there is no affiliation, the country of primary work activity is considered.
The ranking also displays an estimation of each country's percentage contribution to global research activity.
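The sketch below shows how the two country-level components could be combined, assuming each component is normalized to the best-scoring country before applying the 65/35 split; the normalization and the function name are our assumptions, and only the publications formula (number of publications multiplied by the average FWCI raised to the power 1.5) comes from the text above.

```python
def country_score(n_publications, avg_fwci, award_points,
                  best_pub_score, best_award_points):
    """Illustrative country score on a 0-100 scale.

    Publications component: 2014-2018 publications * average FWCI ** 1.5,
    weighted 65%. Awards component weighted 35%. Both components are
    normalized to the best-scoring country (an assumption of this sketch).
    """
    pub_score = n_publications * avg_fwci ** 1.5
    return (65.0 * pub_score / best_pub_score
            + 35.0 * award_points / best_award_points)

# Example: a country with 500,000 publications at an average FWCI of 1.4,
# compared with a leading country that defines both maxima.
print(round(country_score(500_000, 1.4, award_points=12,
                          best_pub_score=2_000_000 * 1.3 ** 1.5,
                          best_award_points=40), 1))
```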
Top 2000 and Under 20
The universities already ranked in the Top 500 keep the same score. For all other universities, only the Publications score is considered. It is calculated with the same methodology as before, except that the contribution is 1 for papers entirely authored within the university and 0.3 for papers written with external collaborators. The average contribution per paper (a number between 0.3 and 1) is then used.
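For illustration, a short sketch of the average contribution per paper described above; the function name is ours.

```python
def average_contribution(n_internal, n_external_collab):
    """Average contribution per paper for the Top 2000 publication score.

    Papers entirely authored within the university count as 1, papers
    written with external collaborators count as 0.3, so the average
    always lies between 0.3 and 1.
    """
    total = n_internal + n_external_collab
    if total == 0:
        return 0.0
    return (1.0 * n_internal + 0.3 * n_external_collab) / total

# Example: 400 papers authored entirely in-house and 600 collaborative papers.
print(average_contribution(400, 600))   # -> 0.58
```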
Sources
Publications and books: Scopus www.scopus.com
FWCI, publications per country and publications in top 10% journals per SNIP: SciVal www.scival.com
Contribution per paper: Leiden Ranking* www.leidenranking.com
Number of faculty (staff): THE Ranking** https://www.timeshighereducation.com/world-university-rankings
Awards: official websites of the awarding organizations, and Wikipedia in case of ambiguity.
*We consider papers from 2014 to 2016; the Leiden Ranking counts papers from 2013 to 2016, but the contribution per paper proved to be stable over the years, so the potential difference is disregarded. Also, if no data are available, we take the number of single-authored publications plus those involving collaboration within the university only, plus 15% of the number of publications in collaboration with other universities, and divide this sum by the total number of publications.
**If not available, the numbers were collected as a last resort from other websites, such as official university websites or Wikipedia. There is some uncertainty in this parameter, but its impact on the final ranking is very low.
[1] Example: over three years, a university publishes 100, 150 and 200 papers. The FWCI is 1, 2.63 and 2, and the percentage of top 10% articles is 20, 40 and 12, respectively.
The Publications score will then be: 100*1*sqrt(20/15) + 150*2.5*sqrt(40/15) + 200*(12/15)^(3/2) (in the second year the FWCI of 2.63 is capped at 2.5; in the third year it is capped at 12/15 = 0.8, which gives the exponent 3/2).
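The same calculation can be written as a short script; the function name is ours, but the formula follows footnote [1] above.

```python
import math

def publication_score(papers, fwci, top10_pct):
    """Publication score for one year, following footnote [1].

    The FWCI is capped at 2.5 and at top10_pct / 15; the number of papers
    times the capped FWCI is then multiplied by sqrt(top10_pct / 15).
    """
    capped_fwci = min(fwci, 2.5, top10_pct / 15.0)
    return papers * capped_fwci * math.sqrt(top10_pct / 15.0)

# The three years of the worked example: (papers, FWCI, % of top 10% articles)
years = [(100, 1.0, 20.0), (150, 2.63, 40.0), (200, 2.0, 12.0)]
total = sum(publication_score(*year) for year in years)
print(round(total))   # -> 871
```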
[2] For each parameter, the square root of the raw data is compared to the square root of the maximum. As a consequence, the first books (and the first papers) carry more weight than the following ones. This system benefits universities that spread their score over different parameters.
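A minimal sketch of this square-root scaling, assuming each category score is the ratio of square roots multiplied by the weight of that category (not specified here); the numbers and the function name are illustrative only.

```python
import math

def category_score(raw_value, max_value, category_weight):
    """Score for one category, proportional to sqrt(raw) / sqrt(max)."""
    return category_weight * math.sqrt(raw_value) / math.sqrt(max_value)

# Diminishing returns: doubling the raw output multiplies the category
# score only by sqrt(2) ≈ 1.41, so the first papers or books count most.
print(category_score(100, 400, 30), category_score(200, 400, 30))
# -> 15.0 21.21...
```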