What Can UK Higher Education Learn From Australia?
Address to the York Festival of Ideas
Professor Margaret Gardner AO
President and Vice-Chancellor Monash University Australia
8 June 2018
This is the topic you give to an Australian, an Australian with a recognisable Australian accent, so you can be sure the audience, overwhelmingly non-Australian, will begin:
- confident there is nothing much to learn from the place that began by copying them;
- relaxed that some innovation will be described that is the equivalent of the platypus - evidence of Antipodean exceptionalism; and
- inured to any defensive aggression or hyperbole on the part of the speaker, because the Brits have lots of experience of the Australian cricket team.
So to begin, some broad context to illustrate that while there are strong underlying similarities between the two systems, there are some differences, which affect culture and to a lesser extent, policy.
1. There are fewer universities in Australia than in Britain, although the overwhelming majority, 39, are public, as is the case in Britain.
2. In Australia, university founding legislation and governance is a creature of State legislatures, with funding dominated by the federal government, unlike Britain where funding and governance are typically unified in one layer of government.
3. The underlying model informing Australian universities dates largely from the first universities established, the University of Sydney in 1850 and the University of Melbourne in 1853. So all Australian universities begin with some of the key features that dominated the foundations of early nineteenth century universities in the United Kingdom.
Because of a combination of these three features outlined above, Australian universities are:
- non-sectarian and secular;
- comprehensive rather than specialised, characterised in legislation as serving the interests of the State that founded them, whereas Britain has a number of specialist universities;
- commuter (only around 5% of Australian university students reside on campus and only 11% are enrolled outside their home State) and therefore overwhelmingly metropolitan, concentrated in the capital city of each State. In the UK there is still considerable internal movement of students between cities and regions to attend university, and a much higher proportion live on campus. British universities are scattered across university towns and many smaller centres, rather than clustered in large metropolitan centres.
As a result, Australian universities are more uniform in culture, expectations and education offerings than their British counterparts. They are also large, with the ‘average’ Australian university enrolling over 32,000 students. In contrast, a rough estimate puts the UK average at something over 14,000 students per university.
Both countries have shared a similar increase in the numbers and proportion of fee-paying international students in their universities. And both are characterised by high levels of autonomy from government, being ‘self-governing’, despite being public universities.
These latter two features increase the diversity of mission, as well as the diversity of the student and staff cohorts. They are accompanied by the ability of each university to effect changes in revenue, expenditure, programs and policy in response to market and regulatory environments.
Lessons to be Learned
The first point to make is that there is a much higher degree of understanding and use of policy ideas from the UK in Australia than of any other country. Moreover, apart from the major importation from North to South in the 19th Century, there has been much higher bilateral exchange in this 21st Century than ever before.
From the Antipodes to the Sceptred Isle or ...?
The following three areas are the key sites of policy exchange in our landscape; while I will speak briefly to each, I am going to concentrate on the last.
- Research policy
- Student contributions or loans schemes
- Education or learning and teaching innovation.
Here, we have collectively and severally ranged over encouraging university-industry collaboration and innovation, as well as performance-related schemes.
In 1990 Australia introduced Cooperative Research Centres (CRCs) to encourage research collaboration with industry and increase impact (although the latter was not a term of research art at the time). There are now over 150 active centres, which have produced some 4,400 research students and a number of innovations and commercialised outcomes. In 2012, a review of the CRCs suggested a benefit to Australia of some $7.5 billion.
The UK introduced Catapult Centres in 2011, arguably an improvement on this first and still successful Antipodean model. The focus was on ten areas of operation, from cell and gene therapy through high-value manufacturing to transport systems, delivered through not-for-profit enterprises set up to facilitate innovation and commercialisation. Some years later, in 2015, Australia introduced not-for-profit industry Growth Centres in six industry sectors, from advanced manufacturing through medical technology and pharmaceuticals to oil, gas and energy. The continuing CRC program now asks that rebids consider how they link with these new Growth Centres.
I do not have the evidence to say whether the Australian CRC programme, with its longevity, is better placed to deliver on the collaboration and technology transfer outcomes that both nations seek from their publicly funded research than the UK Catapult Centres or Australia’s recently introduced version of a similar scheme in Growth Centres.
However, I do know that it takes an intensive examination to understand whether a funding programme works against its objectives and in its national context and another level of deep benchmarking to work out how this might translate in a different national environment.
One advantage of the translation of policy and programme between the UK and Australia or vice versa is that there are enough similarities and connections to facilitate informed benchmarking - and a greater chance that context will be understood.
UK higher education can learn from Australia just as much as Australia can learn from the UK: if either is interested in a particular programme or policy, the other's experience may be the closest either nation will get to a trial of that programme or policy, without running a trial.
The UK and Australia have a variety of research performance measurement or funding schemes. The UK had the Research Assessment Exercise (RAE), with funding attached; Australia has Excellence in Research for Australia (ERA), with no funding attached. The UK has the Research Excellence Framework (REF), measuring research impact with funding attached; Australia is embarking on a related research impact measure with no funding attached. The difference between the UK and Australia, apart from the funding, is that Australia has tended to run an institution-wide, all-academic-staff, metric-focused approach, while the UK has allowed a more selective, less metric-based approach within each institution.
Again, I do not have the evidence to assert that the Australian approach, while requiring significant organisational effort, is less intensive than the UK approaches because it relies more heavily on metrics for its submission and assessment, although that is the Australian presumption. Nor can I assert that attaching funding has lifted excellence more than the sheer ranking that is the only outcome of the Australian exercise – though again that is an assumption no university really wants tested.
Australia does have performance schemes that also distribute funding, but they are less applied to differentiated performance than to outcomes. For example, completion of doctorates attracts funding that rewards timely completion and Australia has a strong focus on timely completion of research degrees.
What we can say from the Australian experience is that performance schemes tend to increase uniformity of approach. Sometimes that is an almost unalloyed good, like timely completion of doctorates. In schemes that rank quality of outcome this tends to lead to a single hierarchy or ranking of performance. Faced with a single formula of this type, much research strategy is tactical and focused on the same metrics. Success is rightly rewarded, but there is a single and clear hierarchy in research intensity and performance, much as we see emerge in international rankings.
How this clashes with the other objective of much research policy, industry-university collaboration and commercialisation of research is an interesting issue that may be ameliorated by impact metrics – time will tell.
UK higher education might ask whether the Australian approach to measuring research excellence is more effective and efficient than its current approaches. The UK and Australia could jointly ask how much these schemes add to the drive for excellence, in the context of our shared preoccupation with international rankings, and which approach is more effective.
Student contributions or loans schemes
The Higher Education Contribution Scheme introduced in Australia in 1989 (now called HELP – the Higher Education Loan Program) was a great bureaucratic innovation: a more equitable means than fees of seeking a private student contribution to the cost of higher education. It was introduced with the government paying the student contribution annually to the universities. Interest rates for students were set at CPI (not at market rates), and student contributions have generally been increased only by CPI (the exception to this is instructive). These features, along with a repayment threshold set at median weekly earnings, kept the impact of the scheme on students as a contribution, not a loan – and of course it was repaid through the tax system.
The UK introduced its loans scheme, through its existing Student Loans Company, in 1998. Its repayments, although tied to median earnings, were set to a specific instalment plan. In 2012 the UK introduced a major increase in fees by setting a new maximum fee, which became largely the de facto fee for all students. Australia, too, had once introduced an increased maximum fee, in 2005 (albeit at a lower level than recently introduced in the UK). Almost all institutions immediately lifted their fees to the maximum, to the Australian government's surprise. This has not been attempted again.
It is clear that, despite apparent price insensitivity among students, there are elements of a social bargain in relation to the proportion and level of contribution, and the impact of repayments, that, if transgressed, create dissension and disaffection in the student and graduate population.
The details of policy implementation matter. Moreover, effective translation of any major policy innovation relies on clear understanding of the way elements of the policy interact. And we ignore the history of implementation of any innovative programme at our peril. In this case, the lessons for Australia from UK higher education policy are clear.
This is a story of two countries with increasingly large and diverse fee-paying international student cohorts. Of countries that have, more than other nations, established campuses outside their home countries, principally in the Asian region. As a result, we have stronger than normal engagement with the higher education policies of Asian nations, and great direct interest in and engagement with the immigration policies and foreign policies of our nations.
We have much to share in ways to build effective engagement in policy domains outside traditional higher education policy.
Education or learning and teaching
The UK measures and rewards teaching quality through the Teaching Excellence Framework (TEF) and has done so since 2017. Australia has a website of comparative education information, QILT (Quality Indicators for Learning and Teaching), but, after abandoning a performance funding scheme for teaching in the 1990s, is only now talking about new ways to measure and/or reward education outcomes.
There are really no international benchmarks for learning and teaching or education quality, in the way that we have international outcome measures for research and associated rankings.
The UK has the Higher Education Academy to promote and encourage good education practice – and a number of Australian universities have joined this Academy. This is an important professional association and set of activities.
However, how is innovation in learning and teaching encouraged and supported? This is not the same question as how we reward excellence. Indeed, students are often reluctant to embrace innovations, so to experiment is potentially to incur negative feedback and assessments from students on the quality of teaching.
Until 2016 Australia had a government funded Office for Learning and Teaching (OLT), which funded schemes that assisted innovation and good practice in learning and teaching in higher education.
The OLT had four main program areas: awards recognising excellence and innovation in learning and teaching for individuals and groups; grants that supported innovation in a number of broad thematic or disciplinary areas; broad evaluations or good-practice summations of what was understood; and fellowships that went to individuals for a program that would foster and disseminate an innovation across their university or across a number of universities. The fellowships, in particular, required the support of the relevant universities' senior leadership, such as the Vice-Chancellor.
This Office and its predecessors provided direct support for innovation, including recognition and reward. Indeed, winning a grant or a fellowship had rewards akin to winning a competitive research grant or fellowship. All outcomes were publicly available and it was expected that fellows and award winners would make themselves available to share their expertise.
My focus here is to talk about the impact of these programs on learning and teaching across Australia’s universities. An impact evaluation was undertaken by Margaret Hicks in 2016 on some of the key areas that had attracted funding to see if innovation had been shared and had changed practice. It focused on the period 2012-2016 and on four areas: English language, academic integrity, learning analytics (particularly around student retention outcomes) and graduate employability. The funding distributed by the OLT in this four year period was some $60 million – a small amount compared to overall outlays on teaching university students.
The evaluation found that senior leadership saw a contribution to supporting a culture of learning and teaching improvement and innovation. The frameworks, policies and practices developed through OLT funding in the four areas outlined had shaped practice and policy in many and sometimes most of Australia’s universities. It was an example of effective and comparatively fast dissemination of the OLT project outcomes.
There were other related findings about the dissemination and networks of innovation. As with the TEF, the distribution of dominance was not the same as in research. Most importantly, the networks and interpersonal collaborations were broadly distributed, there were clusters of disciplinary expertise, and the clusters were distributed across the research ranking hierarchy.
This funding for the OLT and its predecessor bodies, remarkably small given the scale of the higher education endeavour, was key not only to encouraging innovation but, most importantly, to achieving that most difficult objective: the dissemination of learning and teaching innovation across a sector.
Programmes can be abandoned for good reasons or bad, in knowledge of their effect and outcomes or in ignorance of them.
There is a lesson here from Australia for the UK. Change and innovation in educational practice are difficult. Innovations tend to rise and die with their innovators. Australia had a programme that made a difference, that disseminated innovation, and it abandoned that programme for fiscal reasons, not for reasons of effectiveness or efficiency.
We have barely established clear objectives for teaching excellence that are clearly linked to educational outcomes, and we have no clear expectations about how innovation can be fostered and disseminated. Let us take time to search for evidence of what has worked and why.
And finally, some examples of innovation from my own university, Monash, which is Australia's largest university, with over 73,000 students. We also sit in the top 100 on international rankings: depending on which rankings you choose, somewhere between 59 and 80 in the world, and in the top three to six in Australia.
These examples illustrate how learning and teaching innovation links the impact of the digital with enhanced physical environments, for formal and informal learning.
Our newest building, known as the Learning and Teaching Building, has no conventional lecture spaces and is characterised by highly innovative spaces that challenge academics to engage large classes in active learning. The evidence from usage runs completely contrary to typical classroom and lecture attendance: student attendance does not decline over the course of the semester but remains high, including across the second half of the semester.
In the last three years we have also begun trials of increased digital enhancement of learning and teaching, from live streaming of all large first-year lectures with tutor-mediated questions in class, to replacing fixed computer labs with MoVE, the Monash Virtual Environment – a cloud-based, bring-your-own-device environment now rolled out across five of our ten faculties.
We are rapidly adopting eAssessment to replace the 360,000 handwritten exams we undertake annually. We are building our own platform, attached to our learning management system, to provide the functionality we need across question types, including the use of video, in a simple interface. And we have extended in-class polling to give teachers of large classes access to immediate feedback from students.
These are a sample of innovations. One lesson we bring is implementation at scale and across a large and comprehensive institution. If something is going to break or fail, we are the place that tests robustness and efficiency. And because we are comprehensive, we test the need for and possibilities for differentiation of approach.
Long-term success and sustainability requires an ability to innovate and adapt. In large and complex sectors, such as higher education, with considerable institutional autonomy and longevity, there are always concerns about agility and willingness to experiment. Indeed the doomsayers of disruption expect that these features make public higher education ripe for disruption.
Australian universities were built from the models of British universities, coupled with adaptation to time and place. As a result, and given the continuing strong links, our two higher education sectors are best placed to provide a trial and observation site for policy change and innovation for each other. We have borrowed, and will, and should, continue to borrow from one another.
It would be better if we tried to learn those lessons more systematically:
- Taking the time to unpack the lessons of implementation, not just policy design.
- Considering setting some key benchmarks that could assist more informed evaluation and comparison.
- Sharing with our respective governments and government agencies the implications of their broader policies for the internationalisation that is the lifeblood of our ability to fuel high-quality knowledge economies and societies.
- Recognising the lessons that the scale and comprehensiveness of Australian universities provide for innovation dissemination, just as we in Australia learn about the outcomes from and conditions for the more specialized and differentiated system in the UK.
- And finally, embracing the opportunities that come from comparative policy design and innovation lessons for and from sectors that are ‘sisters under the skin’.
Glyn Davis, The Australian Idea of a University, Melbourne University Press, 2017, p. 53.
Davis, p. 47.
Allen Consulting Group, The Economic, Social and Environmental Impacts of the Cooperative Research Centres Program, report to the Department of Industry, Innovation, Science, Research and Tertiary Education, Canberra, 2012, p. vi.
Margaret Hicks, Impact Evaluation of Key Themes Funded by the Office for Learning and Teaching 2012–2016, Final report, September 2016, Department of Education and Training, Canberra.
Hicks, p. 10.