Automation, uncertainty, and the Robodebt scheme

Monash Law and Castan Centre academics collaborated on a submission to the Robodebt Royal Commission.

22 March 2023

The Castan Centre for Human Rights Law has been concerned by the evidence on the operation of Robodebt given in recent weeks to the Royal Commission. Monash Law and Castan Centre academics collaborated on a submission to the Royal Commission which is now available on the Commission's webpage.


The submission was led by Associate Professor Joel Townsend (Monash Law Clinics Director) together with Associate Professor Brendan Gogarty, Associate Professor Maria O'Sullivan (Deputy Castan Centre Director, Research), Associate Professor Yee-Fui Ng, and Professor Christopher Marsden. The submission makes 10 key recommendations, including that the Commonwealth Parliament legislate to create an oversight body to improve fairness and accountability in automated decision-making by governments in Australia.

Following the submission, Associate Professor Townsend was asked to comment on radio, and contributed to the Monash Lens thought piece below with Associate Professor Michelle Lazarus (Monash Medicine, Nursing and Health Sciences).

Automation, uncertainty, and the Robodebt scheme

The recently concluded Royal Commission into the Robodebt Scheme exposed the manifest flaws in an automated system used for raising social security debts.

Among the many lessons we can draw from the wider Robodebt scandal is the need to design systems (whether human or automated) with a complex, uncertain world in mind.

While Robodebt wasn’t an artificial intelligence system, it’s a cautionary tale as we contemplate an increasingly automated future, especially in the context of substantial developments in AI.

Robodebt was a process by which Centrelink took annual Australian Taxation Office income data, and averaged it into fortnightly instalments. If this averaged amount didn’t match income declared by a social security recipient, and the person didn’t respond to a request for further information, they were assumed to have incorrectly declared their income, and a debt was raised against them.
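The averaging flaw described above can be sketched in a few lines of code. This is an illustrative toy example with hypothetical figures, not the actual Centrelink system: it shows how smearing annual income evenly across 26 fortnights manufactures apparent discrepancies for anyone whose income was lumpy across the year.

```python
# Illustrative sketch of the income-averaging flaw (hypothetical figures,
# not actual Centrelink code).

FORTNIGHTS_PER_YEAR = 26

def average_fortnightly(annual_income):
    """Robodebt-style assumption: income was earned evenly all year."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A casual worker who earned $13,000 in the first half of the year,
# then nothing while receiving benefits in the second half.
declared = [1000.0] * 13 + [0.0] * 13          # what they truthfully declared
averaged = [average_fortnightly(13000)] * 26   # averaging assumes $500 every fortnight

# In each fortnight the person was on benefits and truthfully declared $0,
# averaging imputes $500 of "undeclared" income, suggesting a debt...
discrepancies = [a - d for a, d in zip(averaged, declared)]

# ...even though the annual totals match exactly.
assert sum(declared) == sum(averaged)
```

The point is that the annual totals agree, yet the fortnight-by-fortnight comparison is wrong in every single period: the mismatch is an artefact of the averaging assumption, not evidence of misreporting.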


These debt-raising decisions were unfair, and were seen as such by many of those receiving notifications of automatically generated debts.

The findings of the royal commission will shed important light on issues relating to the extent of automation of government decision-making, review processes, and the social security system. What’s clear from the evidence put before the royal commission is that many people recognised Robodebt’s flaws, but failed to take issue with the outputs of the automated system.

Automation bias and uncertainty tolerance

Why did so many people choose to trust the Robodebt automated system over the drumbeat of criticism that it was unlawful, and its outcomes were flawed?

One factor, among many, is likely the human tendency for uncertainty intolerance.

As humans, we tend to crave certainty, despite the complex world we live in being filled – at every turn – with uncertainty. When faced with uncertainty, especially when risk and liability are involved (as in Robodebt), we seem to focus on suppressing uncertainty.

The appeal of an “objective” non-human decision-making machine can be an alluring salve to treat the discomfort of uncertainty.


The automated system used in Robodebt didn’t hedge when it produced an output. Humans needed to decide whether the technology provided a proper basis for raising a social security debt. Too few acted when questions about its adequacy were raised, and the flawed, unlawful system persisted for years.

If we, as individuals and as a species, are more uncertainty-intolerant, we are less able to sit with the discomfort uncertainty typically provokes, and more likely to perceive the output of an automated system as “certain”.

This trust in technology can be so extreme that we allow the technology to run without monitoring or oversight – a phenomenon termed automation bias (or automation misuse).

The American version of the satirical television mockumentary series The Office illustrated this well in an episode in season four. Michael Scott, the show’s protagonist, played by Steve Carell, drives his car into a lake because the GPS tells him to take “a right turn”. Michael does so because “the machine knows”, despite clear visual evidence that this would result in the car plummeting into the lake.

This example illustrates the important role human judgment has in interpreting automated systems.

Those engaging with such platforms must be able to adaptively respond to uncertainty – to acknowledge the limitations of automated recommendations, sit with the discomfort of uncertainty, and still make a decision.

If they accept the ever-present nature of uncertainty in complex decision-making, then the tendency towards automation bias and misuse may be reduced. The alternative to drawing on this human skill could be drowning in the lake.

Those in the automation field (including AI and machine learning) agree that human intellect is critical in all fields, including medicine, policing, and law. We’re the ones responsible for the final decision, best-placed to know what to ask of the technology, and what to make of technological recommendations. But this process, too, is filled with uncertainty.

AI isn’t as ‘certain’ as it may first appear

Our community, faced with an opaque decision-making system, might well be more inclined to conclude it’s an unfair system. “Procedural justice” research confirms that members of the community more readily see decisions as just when they can see and understand the reasoning process behind them.

Robodebt was opaque; it’s taken Senate committee hearings, judicial review proceedings, a class action and the royal commission to unpick the details of the scheme.


In contrast, true artificial intelligence systems are likely to make decisions without ever showing their work – the humans subject to government decisions made using AI might never be able to understand the precise reasoning leading to the decision.

Much can be learned here from the literature about the human capacity to tolerate uncertainty.

At Monash, we recognise that a key component of education is preparing students for adaptively responding to uncertainty naturally present in our complex and unpredictable world.

More broadly, research into uncertainty tolerance has important implications as government considers implementing more automated decision-making, and the increasing use of artificial intelligence.

Hard lessons learned

Uncertainty is everywhere – we can’t eliminate it even with technology. Attempts to replace uncertainty with the mirage of AI certainty can also be dangerous. Technology itself introduces uncertainties, as the steps that AI takes towards an output are often proprietary.

So what are some ways forward?

  • Developing workplaces that acknowledge and value the central role of humans in decision-making and judgment. This requires asserting the complex and messy nature of decisions such as those about social security entitlements.
  • Building in processes and protocols for effective AI management, making it clear who bears accountability for automated systems.
  • Developing ourselves, the systems we work within, and our communities to be more uncertainty-tolerant. Ultimately, this can help us work better with the natural uncertainties present in our complex world, and with the algorithmic “certainties” that automated systems introduce.

This article was first published on Monash Lens. Read the original article.

Download the submission: Public Administration and Automation Submission to the Robodebt Royal Commission.