Demystifying academic rankings
Director of Research Performance, Sian Wright, explains why academic rankings matter.
Why do we have academic rankings?
Academic rankings were developed in the early 2000s with a view to providing a robust, independent assessment of the quality of tertiary education providers globally. The rankings provide an indicative, comparable benchmark of an institution's performance across a range of factors – predominantly teaching and research, though some also include elements of internationalisation and industry engagement.
What are the key rankings?
There are a multitude of international rankings, each with its own methodology and data-sourcing model. The key rankings Monash follows are the Times Higher Education (THE), the Academic Ranking of World Universities (ARWU) and the Quacquarelli Symonds (QS). We also follow the US News World Rankings and the Leiden Ranking.
What impact and influence do they have?
The initial international ranking (the ARWU, introduced in 2003) was designed to indicate the performance of Chinese institutions relative to the world's best and to influence government policy. Since then, international rankings have been used to influence a wide range of university stakeholders, including prospective students and parents, industry partners, prospective staff, higher degree by research students and national governments. In the fast-moving, internationally competitive market that is tertiary education in the 21st century, any quick indication of an institution's performance relative to its competitors may influence perception.
How do they differ from each other?
The primary differences are in the methodology used to calculate each ranking and in the data sources used in the analysis. Some rankings use completely independent sources, whilst others source a large amount of data directly from institutions. This has raised questions about the validity of some rankings, as there is potential to 'game' or influence them through the data an institution provides.
There are, of course, additional factors to take into account when considering some of the rankings in relation to specific disciplines or subject areas. Due to the nature of their criteria, some rankings are innately aligned to the STEM (Science, Technology, Engineering, Maths) areas, so they may be less relevant to HASS (Humanities, Arts, Social Sciences) subjects.
The ARWU is a completely independent ranking that draws its data directly from external sources. The primary provider of bibliometric information for the ARWU is Clarivate Analytics, who manage the Web of Science database; this is supplemented by information on major prize and medal awardees (e.g. Nobel Prizes and Fields Medals). This information has also been used to inform a new set of Global Rankings of Academic Subjects (GRAS), released in 2017.
The Times Higher Education (THE) methodology has a few more components to supplement the bibliometric data: reputational opinions of discipline leaders are sought, and institutions are required to submit a small amount of accompanying contextual information. The primary provider of bibliometric data for the THE ranking is Elsevier, who manage the Scopus database; the reputation component is completed through an annual audited survey, and institutions submit their additional information annually. THE also reuses the information collated to produce a subject-level analysis of institutional performance.
The QS methodology also uses an element of reputational standing, with both academic and employer perceptions of an institution contributing to the ranking. The reputational scores carry a higher weighting and are accompanied by bibliometric information on citations sourced from Scopus and information on the student-to-staff ratio. QS reuses this information to produce a number of indicative rankings, including rankings by subject, on graduate employability, the Top 50 Under 50 and best student cities.
What measurements do they use?
The ARWU ranking is calculated from four component criteria:
Quality of Education
Alumni of an institution winning Nobel Prizes and Fields Medals
Quality of Faculty
Staff of an institution winning Nobel Prizes and Fields Medals
Highly cited researchers in 21 broad subjects
Research Output
Papers published in Nature and Science
Papers indexed in the Science Citation Index-Expanded and the Social Science Citation Index
Per Capita Performance
Per capita academic performance of an institution
THE has a slightly broader methodology, including elements of internationalisation and industry engagement:
Teaching (the learning environment)
Research (volume, income and reputation)
Citations (research influence)
International outlook, including:
International-to-domestic student ratio
International-to-domestic staff ratio
Industry income (knowledge transfer)
QS uses similar criteria to THE but applies significantly different weightings to the component parts:
Academic reputation
Employer reputation
Faculty-to-student ratio
Citations per faculty
International faculty ratio
International student ratio
How has Monash been performing overall in the key rankings?
Monash has been performing well in recent years, improving in all the key international rankings we follow.
We achieved a significant increase in our ARWU ranking in 2016, jumping from 114th to 79th, which is particularly commendable given that Monash scores no points on the Nobel Prize and Fields Medal criteria for alumni or staff. In 2017 Monash maintained this performance, rising to 78th in the world. The 2017 THE rankings are yet to be released, but Monash has achieved gains in all previous years and was ranked 74th in 2016. In the QS rankings for 2017/2018, Monash rose to 60th globally, up from 65th the previous year.
Monash’s ambition by 2020 is to have improved our ranking internationally by 20 per cent.