Prompted by the numerous unethical strategies universities employ to boost their rankings, we present here our abuse-proof ranking: the A3 Ranking. Why A3? An Academic ranking made by Academics, for Academics (well, and for anybody else who shares academic values).

Top 500 and Under 50 rankings

First, unlike most other rankings, we name what we are ranking: this ranking is a measure of the research impact of universities. We focus on research because it is widely recognized as the most important output for the visibility of a world-class university, and because it is easily measurable.

By looking at the strengths and weaknesses of other rankings, we decided to collect the following data:

The weights of the different parameters are as follows:

Scores in each category are not linear but proportional to the square root of the raw "performance" value.

Key information: these parameters may evolve in the future if necessary, both to neutralize any potential for manipulation and to incorporate future ideas and recommendations from our followers.

The very concept of a ranking is also dangerous: we therefore grouped the universities into categories (Top 20, Top 300…), with buffer categories (about 100th, about 200th); because of these buffers, 510 universities in total are ranked in this 2018 version.

It is very easy to compute the exact mathematical position of a given university from the data we provide; however, we refuse to encourage communication of such overly precise positions. We believe the nomenclature we have adopted is a good compromise between the need for communication and the uncertainty of the numbers.

Country ranking

The country ranking considers publications (65%) and awards (35%). Data are extracted from SciVal. It is a measure of research impact per country.

The Publications score is proportional to the number of publications from 2014 to 2018 affiliated with institutions of a given country, multiplied by the average FWCI raised to the power 1.5. "Institutions" includes universities, research centers, and companies.
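As an illustration, the country Publications score described above can be sketched as follows (the function name is hypothetical, and the constant of proportionality is omitted since the ranking only states proportionality):

```python
def country_publications_score(n_publications, avg_fwci):
    # Proportional to the 2014-2018 publication count, multiplied by
    # the average FWCI raised to the power 1.5.
    return n_publications * avg_fwci ** 1.5
```

Note that the exponent 1.5 rewards citation impact superlinearly: doubling the average FWCI multiplies the score by 2^1.5 ≈ 2.83, not by 2.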

The Awards score is proportional to the number of awards received by researchers affiliated with the institutions of each country; when there is no affiliation, the country of primary work activity is used instead.

The ranking also displays an estimate of each country's percentage contribution to global research activity.

Top 2000 and Under 20 rankings

Universities already ranked in the Top 500 keep the same score. For all other universities, only the Publications score is considered. It is calculated with the same methodology as above, except that each paper's contribution is weighted: 1 for papers authored entirely within the university, and 0.3 for papers written with external collaboration. The average contribution per paper (a number between 0.3 and 1) is then used.
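A minimal sketch of this contribution weighting (identifiers are illustrative, not from the ranking's own tooling):

```python
def average_contribution(n_internal, n_external_collab):
    # Papers authored entirely within the university count 1.0;
    # papers with external collaboration count 0.3.
    total = n_internal + n_external_collab
    if total == 0:
        return 0.0
    return (1.0 * n_internal + 0.3 * n_external_collab) / total
```

As stated above, the result always lies between 0.3 (all papers collaborative) and 1 (all papers in-house) whenever the university has at least one paper.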


Publications and books: Scopus

FWCI, Publications per country, and Publications in top 10% journals by SNIP: SciVal

Contribution per paper: Leiden Ranking*

Number of faculties: THE Ranking**

Awards: official websites of the awarding organizations, with Wikipedia as a source in case of ambiguity.

*We consider papers from 2014 to 2016; the Leiden Ranking counts papers from 2013 to 2016, but the contribution per paper proved stable across years, so this potential difference is disregarded. Also, if no data are available, we take the number of single-authored publications plus publications with intra-university collaboration only, plus 15% of the number of publications written in collaboration with other universities, and divide this sum by the total number of publications.
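The fallback formula in this footnote can be sketched as follows (the function and parameter names are hypothetical):

```python
def fallback_contribution(n_single_or_internal, n_inter_university, n_total):
    # Single-authored papers and intra-university collaborations count fully;
    # collaborations with other universities count at 15%.
    return (n_single_or_internal + 0.15 * n_inter_university) / n_total
```

For instance, a university with 70 in-house papers and 30 inter-university collaborations out of 100 total would get (70 + 0.15 × 30) / 100 = 0.745.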

**If not available, the numbers were collected as a last resort from other sources such as official websites or Wikipedia. There is some room for uncertainty in this parameter, but its impact on the final ranking is very low.

¹Example: over three years, a university publishes 100, 150, and 200 papers. The FWCI is 1, 2.63, and 2, and the percentage of top-10% articles is 20, 40, and 12, respectively.

The Publications score will then be: 100*1*sqrt(20/15) + 150*2.63*sqrt(40/15) + 200*2*sqrt(12/15).
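This worked example can be checked numerically. The sketch below assumes each year contributes papers × FWCI × sqrt(top-10% percentage / 15), following the pattern of the terms above:

```python
import math

# Per-year data from the example: (papers, FWCI, percentage of top-10% articles)
years = [(100, 1.0, 20), (150, 2.63, 40), (200, 2.0, 12)]

# Assumed pattern for each yearly term: papers * FWCI * sqrt(pct / 15)
publications_score = sum(p * fwci * math.sqrt(pct / 15) for p, fwci, pct in years)
print(round(publications_score, 1))  # prints 1117.5
```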

²For each parameter, the square root of the raw data is compared to the square root of the maximum. As a consequence, the first books (and the first papers) carry more weight than subsequent ones. This system benefits universities that spread their score across different parameters.
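The square-root scaling and its diminishing-returns effect can be sketched as follows (the function name is illustrative):

```python
import math

def category_score(raw_value, max_value):
    # Each category score is sqrt(raw) relative to sqrt(max),
    # so the first items in a category carry more weight than later ones.
    return math.sqrt(raw_value) / math.sqrt(max_value)

# Spreading output across parameters pays off: two categories at 50 each
# score more than one category at 100 plus one at 0.
spread = category_score(50, 100) + category_score(50, 100)        # ~1.414
concentrated = category_score(100, 100) + category_score(0, 100)  # 1.0
```

This illustrates why the system rewards universities that perform across several parameters rather than concentrating on one.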