Facebook and the Age of Algorithm Outrage

By Neerav Srivastava

Facebook's algorithm allows the company to profit off hatred and outrage among users. What can be done about it? Neerav Srivastava argues that this 'outrage algorithm' may amount to unconscionable conduct under Australian Consumer Law.

In 2018 Facebook made a major change to its algorithm. Facebook CEO Mark Zuckerberg stated that the aim was to strengthen bonds between users and improve their well-being by fostering interactions between friends and family.

In September 2020 Tim Kendall, a former director, alleged that Facebook intentionally made its product as addictive as cigarettes, and said he feared it could cause 'civil war'. Other Facebook analysts worried that Facebook's algorithms might be inciting violence.

A few months later, on 6 January 2021, a mob attacked the Capitol Building in Washington, D.C. An internal document concluded that Facebook had not done enough to curb use of its platform by the 'Stop the Steal' movement contesting the 2020 Presidential Election. The 'Facebook Papers' exposé of October 2021 alleged that staffers within Facebook had warned that the algorithmic change was making users angrier.

Facebook is a trillion-dollar company built on advertising revenue and, therefore, ultimately on user engagement. When content goes viral it generates more advertising revenue. Facebook was concerned that users were becoming passive, possibly just watching professional videos rather than engaging. The 2018 algorithmic change was designed to fix disengagement by prioritising content that draws substantive comments rather than mere 'likes'. The problem is that divisive content is the most likely to be commented on.
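To see how this plays out mechanically, consider a minimal, hypothetical sketch of an engagement-weighted feed ranker. Facebook's actual ranking system and weights are not public; the field names and numbers below are invented purely for illustration.

```python
# Hypothetical sketch of engagement-weighted ranking.
# The weights and post fields are invented; Facebook's actual
# 'meaningful social interaction' scoring is not public.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int  # treated as 'deeper' engagement than a like
    shares: int

# Assumed weights: comments and shares count far more than likes.
WEIGHTS = {"likes": 1, "comments": 15, "shares": 30}

def engagement_score(post: Post) -> int:
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares)

feed = [
    Post("Calm explainer", likes=900, comments=20, shares=10),
    Post("Outrage bait", likes=300, comments=400, shares=120),
]

# The divisive post ranks first despite a third of the likes,
# because it provokes far more comments and shares.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.title, engagement_score(post))
```

Nothing in such a ranker mentions divisiveness. It simply surfaces whatever provokes responses; if divisive content provokes the most responses, it wins by arithmetic rather than by stated intent.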

Kendall warns:

Media preys on the most primal parts of your brain. The algorithm maximizes your attention by hitting you repeatedly with content that triggers your strongest emotions— it aims to provoke, shock, and enrage … When you see something you agree [or disagree] with, you feel compelled to defend [or] attack it. […] All the while, the technology is getting smarter […] at provoking a response from you.

The result, perhaps unintended but nevertheless maintained by Facebook, is that divisive content is hard-wired into the system. Outrage content is prioritised, users respond to it, advertisers follow, and even publishers like BuzzFeed feel compelled to prioritise divisive content.

It seems that the more users hate, the more Facebook profits.

There is, arguably, an urgent need for regulatory oversight. That will take time. This piece argues that outrage algorithms may represent a grave intrusion on our 'autonomy' and, as such, could amount to 'unconscionable conduct' under s 21 of the Australian Consumer Law (ACL).

Autonomy

To understand the s 21 argument, we first need to be clear about what 'autonomy' is and why legal protections of it need reinforcing. Autonomy involves being able to make our own choices. Christman describes it as 'the capacity to be one's own person, to live one's life according to reasons and motives that are taken as one's own and not the product of manipulative or distorting external forces'. Autonomy is fundamental to human rights and to being human.

Solove speaks of the right against decisional interference. An intrusion on our decision-making curtails that right. The more insidious harm is individuals ceasing to control their own lives. Fostering outrage is a dignitary harm because people are treated as objects rather than as individuals. Intrusions on our autonomy are also a social harm: when autonomy is restricted, democracy less accurately reflects the collective will.

Heteronomy

Historically, autonomy was indirectly protected by Data Privacy Laws ('DPLs'). In the Information Age, from about the 1950s, technology enabled the aggregation of actual information about an individual. DPLs apply when 'personal information', i.e. information about 'an identified individual, or an individual who is reasonably identifiable', is collected and processed, and information that is retained is accessible to the individual. Knowledge about the individual could be used to affect their autonomy, i.e. their decision-making, provided the information was retained and the individual was identifiable. This was the context in which DPLs were able to protect autonomy.

We are now in a post-Information Age, and in what might be called ‘the Age of Heteronomy’. It is an age in which we are in the process of being automated and DPLs are being outpaced:

a) The volume and quality of information on individuals has grown exponentially. Reports suggest that social media knows more about you than your best friend and even that it knows you better than you know yourself. That is a powerful enabler of influence.

b) Meanwhile, DPLs are less able to protect our autonomy. First, obtaining reams of raw data about yourself is of little use: the data is not explicable and does not reveal how it is being analysed.

"It seems that the more users hate, the more Facebook profits."

Second, the most valuable aspect of the information, namely the profiling or analysis, might not be 'personal information' and so may be beyond the reach of DPLs. There has been a sea-change from the traditional profiling of the Information Age to what can be called 'de facto profiling'. A traditional profile is personal information and therefore falls under DPLs. A de facto profile is hugely powerful but may not be personal information, because an individual may not be 'reasonably identifiable' from it. The entity may instead use a unique code and sort users into large groups based on characteristics, such that individuals are largely anonymised.

Third, traditional profiling requires information to be held in order for an entity to leverage it; if held, it is accessible to individuals under the DPLs. A de facto profile does not need to be held, so the question of access by individuals does not arise. In de facto profiling, an algorithm drawing on big data can make powerful inferences about a person's characteristics from a few data points (see the sketch after this list). The inferences are applied algorithmically, creating a fleeting de facto profile, without any need to identify the individual or retain the profile.

In this new paradigm, what is valuable are characteristics or tokens, not individual identity. Comprehensive actual information no longer needs to be collected, retained, analysed or linked to an individual. A few data points are sufficient to generate a sophisticated profile. This new phenomenon has variously been called indirect profiling, individuation, the need for 'group privacy', and algorithmic groups.

A platform in Indonesia can identify its users' religion from how they use their mobiles. Target Supermarket's algorithm was able to identify pregnant women from their buying patterns. Facebook knows our moods. Uber knows if a passenger has been drinking and if a passenger is sleeping with someone. In each of these examples, it is the characteristics of the individual, rather than their identity, that are important. Those characteristics form a rich de facto profile.

c) Social media fosters a long-term relationship with users based on regular use and possibly addiction.

d) Platforms make scientific, purposeful, and pervasive use of behavioural influencing. Users are being disciplined to make real-time decisions without consulting friends or family. 'We are being conditioned to obey. More precisely, we're being conditioned to want to obey.' Platforms are deploying the 'holy grails' of behavioural influencing: identifying the moods in which we are most vulnerable to influence, and reading subconscious cues, such as a hovering cursor that reveals a user's level of interest.

e) Platform architecture uses ‘dark patterns’, such as obstruction, to herd us in the platform’s preferred direction. Dark patterns are strikingly effective and reinforce that in the digital world platforms are sovereign and ‘code is law’.

f) Sometimes the influencing or architecture is 'manipulative', i.e. it covertly influences a user, which is unethical from a design perspective. Unlike tobacco, Facebook does not come with a warning label disclosing that it uses outrage algorithms.
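A minimal sketch may make the de facto profiling described at (b) concrete. Everything below is hypothetical: the event names, the inference rule, and the token scheme are invented, and a real system would use a model trained on big data rather than hand-written rules.

```python
# Hypothetical sketch of 'de facto profiling': a sensitive inference
# drawn from a few behavioural data points, keyed to an opaque token
# rather than a name. All fields and rules are invented.
import hashlib

def opaque_token(device_id: str) -> str:
    """Stand-in for the unique code an entity might use instead of identity."""
    return hashlib.sha256(device_id.encode()).hexdigest()[:12]

def infer_segment(events: list[str]) -> str:
    """Toy inference rule; a real system would use a trained model."""
    if "bar_checkin" in events and "late_night_rideshare" in events:
        return "likely-drinker"
    if "prayer_app_open" in events:
        return "religious-observant"
    return "unclassified"

# A fleeting profile: computed, acted on (e.g. for ad targeting),
# and discarded, without the individual ever being named.
events = ["bar_checkin", "late_night_rideshare"]
print(opaque_token("device-1234"), "->", infer_segment(events))
```

Because the inference attaches to an opaque token and need not be retained, an individual may not be 'reasonably identifiable' from it, which is precisely how such processing can slip outside DPL definitions of 'personal information'.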

As a result of (a) having a long-term relationship with users, (b) holding deep and intimate information about them, (c) employing de facto profiling, (d) using dark patterns, (e) making pervasive use of behavioural science, and (f) sometimes manipulating users, platforms are able to exert very considerable influence over them.

Platforms such as Facebook are able to exert very considerable influence over users by building rich de facto profiles. (Image: Unsplash/Victoria Heath)

Unconscionability

It is arguable that Facebook’s outrage algorithms are unconscionable. Section 22 of the Australian Consumer Law sets out non-exhaustive factors to consider in assessing statutory unconscionability. Unconscionability is:

Conduct that is so far outside societal norms of acceptable commercial behaviour as to warrant condemnation as conduct that is offensive to conscience … [A finding of unconscionability is] informed by a sense of what is right and proper according to values in contemporary Australian society. Those values are not entirely confined to […] values which historically informed courts administering equity … They include respect for the dignity and autonomy and equality of individuals.

Conduct that is formally legal may still be unconscionable. But establishing unconscionability is exceptional and the conduct needs to be grave. To be fair to Facebook, it does not produce the divisive content. But it is not neutral or innocent.

Facebook’s conduct is arguably unconscionable because:

  • Facebook’s relationship is one of asymmetric information and influence under s 22(1)(a).
  • Under s 22(1)(d) it is plausible that Facebook is exerting undue influence over some users, such that the relationship has been abused and the user's actions are not 'free acts'. Facebook has been described as the most powerful behavioural-modification machine in history.
  • It can be argued that Facebook is using unfair tactics under s 22(1)(d). Zuckerberg’s 2018 announcement appears to be misleading under s 18. Prioritising outrage without informing users of that fact is potentially manipulative and tainted by a high level of moral obloquy. An exacerbating feature was Facebook’s initial denial that outrage algorithms existed.
  • Scant regard is being shown for users' dignity and autonomy. Inciting passions for the purposes of profit amounts to objectification, thus compromising our dignity. The algorithm potentially covertly affects a user's conduct, thereby compromising their autonomy. We are agitated by the outrage and sometimes act on it. That Facebook's conduct has been the subject of severe criticism suggests that it is far outside societal norms.

Ultimately, the question is whether ‘manipulatively’ fostering divisiveness on a mass scale, knowing that it may be contributing to violence, for the purposes of profit is unconscionable. This piece submits that it is: that a grave intrusion on our autonomy is unconscionable.

There is a broader issue. If platform behaviour influencing remains unchecked, the next age might be ‘The Automaton Age’, an age in which Zuboff warns that ‘autonomy is irrelevant … psychological self-determination is a cruel illusion’.

Neerav Srivastava is a PhD candidate at the Faculty of Law, Monash University.

[Author's note: This piece is a precis of arguments made in my draft PhD. I am grateful to my, frankly, wonderful supervisors – Profs Normann Witzleb and Moira Paterson – and to Prof Jennifer Hill for their comments. Any errors are my own.]