First of all, this June 2018 release is still a beta version.
Unlike other rankings, we are not afraid to list the weaknesses of our own. As academics, we believe doubt and skepticism should first be applied to our own work, and we follow this principle as much as possible.
To count publications, we should ideally weight each individual article by its journal's SNIP and by each author's contribution; moreover, the SNIP itself is not perfect (it advantages medicine over other fields) and will probably be replaced by a better metric in the coming years. As explained before, combining the FWCI with the number of articles in the top 10% of journals by SNIP, while capping the FWCI at 2.5, seems to be a good compromise so far. Better access to data will certainly help us to improve the process continuously.
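As a purely illustrative sketch, the snippet below shows one way such a capped combination could be computed; the variable names, the 50/50 weights, and the normalisation are our own assumptions here, not the ranking's published formula.

```python
# Illustrative sketch only: how a capped FWCI might enter a composite
# indicator. Names, weights, and normalisation are assumptions, not the
# ranking's actual formula.

FWCI_CAP = 2.5  # the cap mentioned in the text


def capped_fwci(fwci: float) -> float:
    """Limit the FWCI to 2.5 so a few highly cited papers cannot dominate."""
    return min(fwci, FWCI_CAP)


def composite_score(fwci: float, top10pct_articles: int, total_articles: int,
                    w_fwci: float = 0.5, w_top10: float = 0.5) -> float:
    """Blend the capped FWCI with the share of articles appearing in the
    top 10% of journals by SNIP. The 50/50 weights are placeholders."""
    top10_share = top10pct_articles / total_articles if total_articles else 0.0
    # Rescale the capped FWCI to [0, 1] so both terms are comparable.
    return w_fwci * capped_fwci(fwci) / FWCI_CAP + w_top10 * top10_share


# Example: an FWCI of 3.1 is capped at 2.5 before being combined.
print(composite_score(fwci=3.1, top10pct_articles=120, total_articles=400))
```

Capping the FWCI keeps a handful of extremely cited papers from dominating the indicator, which is one of the ways the ranking limits gaming.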
We have not included patents at this stage; doing so is a short-term objective of the ranking.
Although the data have been carefully analyzed, human error is of course still possible.
More awards should be included in order to capture the full spectrum of research activities.
The data for academic staff have been taken mostly from the THE ranking and complemented by other sources when necessary. This leaves room for inconsistencies, although the impact on any individual university's rank should be small.
Other parameters with ethical significance may be included in the future, for example a corruption index.
The precise impact of each book (for example, the reputation of its publisher) is not yet considered.
We acknowledge that our ranking is size-dependent, although we introduce a slight per-capita performance weight. Since our objective is to quantify total research impact, the size of a given university naturally has an influence.
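As a purely illustrative sketch, such a blend could take the following form, where $R$ is a university's total impact score, $n$ its number of academic staff, and $\lambda$ a small mixing weight; the symbols and the functional form are assumptions on our part, not the ranking's published formula.

\[
S = (1 - \lambda)\,R + \lambda\,\frac{R}{n}, \qquad 0 < \lambda \ll 1
\]

With $\lambda$ small, the total score dominates, which is consistent with the stated objective of quantifying total research impact while still giving per-capita performance a slight weight.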
These weaknesses are, however, much less problematic than those of other rankings, and we believe there is now very little room for gaming. If you find other weaknesses, or simply have ideas to improve the measurement of research impact, we sincerely invite you to contact us. The spirit of this ranking is to be open and transparent, and we believe that academics and the public must be listened to when ranking universities.
Also, we do not measure the quality of teaching at all; a separate ranking may be necessary for this, and the data collection would be considerably more challenging. We do, however, acknowledge that teaching is a fundamental part of the university's mission.
Finally, the authors sincerely hope that this ranking can help decision makers inside and outside universities and prompt further discussion of this important challenge. There should be no purpose other than developing long-term, honest strategies to retain and promote the quality of universities worldwide.