Interventions to Prevent and Disrupt Deepfakes and AI-Facilitated Abuse

Project supervisors

A/Prof Asher Flynn, Faculty of Arts (Main Supervisor)
Dr Campbell Wilson, Faculty of IT
Prof Jonathan Clough, Faculty of Law

PhD project abstract

AI-Facilitated Abuse (AIFA) is transforming the landscape of technology-facilitated harm. The rapid advancement and commercialisation of AI technologies mean that tools to abuse, track, monitor and harass others no longer require specialist technical expertise or high-powered graphics hardware to cause harm. Instead, open-source tools are readily available. One of the biggest emerging concerns relating to AIFA is the rapid development of deepfake technologies and imagery, which are used to create harmful, manipulated pornography and to harass and abuse others. This project will respond to these emerging technologies by exploring best-practice interventions to prevent and disrupt deepfake technology. This will include exploring legal, technical and social interventions, such as using AI tools to block or automatically detect deepfake imagery, developing laws to criminalise the harmful creation of deepfakes, and examining the role of social and corporate responsibility in preventing deepfakes. The aim is to develop an evidence base to inform best-practice legal, social and technical responses to AI-Facilitated Abuse.

Areas of research

Criminology, AI, Law, IT

Project description

Deepfakes are a significant and serious problem at individual, societal and economic levels, warranting substantial legal, social and technical innovation. Multidisciplinary research is needed to generate knowledge on the scope and adequacy of existing legal, social and technical responses to deepfakes, and to explore how AI technologies can themselves be used against deepfakes through the development of robust techniques that prevent and disrupt this serious harm. This is a rapidly advancing area, both in law and policy and in the underlying technology. As such, the successful applicant will need to work closely with supervisors in law, criminology and IT to explore these shifting policies and state-of-the-art technologies.

The project aims to create transformative and lasting change through the development of AI and data science for better governance and policy. It directly responds to a key priority of the Australian Government's digital economy strategy, as outlined in Australia 2030: Prosperity through Innovation, by exploring questions around AI policy, law and potential harms. It will benefit from and build upon the team's existing industry partnerships with police and Internet intermediary stakeholders, producing outcomes of direct interest and use to these organisations. The project will also build new industry relationships by providing a rich source of data of interest to community-based and government agencies tasked with preventing and responding to deepfakes.

PhD student role description

Deepfakes are a significant and serious problem at individual, societal and economic levels, warranting substantial legal, social and technical innovation. This project will examine potential legal, social and technical interventions to disrupt and respond to deepfake technologies and the harms generated by deepfake content. Key questions to be considered include: What are the existing legal, social and technical responses to deepfakes? How can we improve these responses? Are legal responses always possible? Where is a technical response required, rather than a legal or social one, to prevent and disrupt deepfake imagery? What responsibilities should Internet intermediaries take in these contexts?

The student will undertake a mixed qualitative and quantitative study that may involve:

  1. Using existing and/or developing new AI tools and techniques to test disruption and prevention of deepfake technology
  2. Conducting in-depth interviews with relevant stakeholders and/or victims and/or perpetrators of deepfake abuse
  3. Running a national or international survey to gauge understandings of deepfake harms
  4. Scraping social media to identify discussions around deepfake imagery
  5. Legal and policy analysis of existing responses to deepfakes.
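To illustrate the kind of technical work envisaged in point 1, the sketch below shows one widely used heuristic for detecting synthetic imagery: inspecting an image's frequency spectrum for the periodic artefacts that generative models' upsampling layers often leave behind. This is an illustrative example only, not a method prescribed by the project; the function name and the toy data are assumptions made for the demonstration.

```python
import numpy as np

def radial_power_spectrum(img):
    """Azimuthally averaged FFT power spectrum of a 2-D grayscale image.

    Synthetic images often show unusual energy at specific radii
    (spatial frequencies), a known cue for deepfake detection.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)
    # Average the power over all pixels at each integer radius
    total = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return total / np.maximum(counts, 1)

# Toy demo: random "natural" noise vs. the same image with a periodic,
# upsampling-style artefact added (a high-frequency grid pattern).
rng = np.random.default_rng(0)
natural = rng.normal(size=(64, 64))
artefact = natural + 0.5 * np.sin(np.arange(64) * np.pi / 2)

spec_nat = radial_power_spectrum(natural)
spec_art = radial_power_spectrum(artefact)

# The artefact image concentrates extra energy around radius 16,
# which a simple detector could threshold on.
print(spec_art[16] > spec_nat[16])
```

In practice, published detectors combine cues like this with learned classifiers; the point here is only that spectral artefacts give a measurable, testable signal to build on.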

The main role will be to help develop an evidence base to inform best-practice interventions to disrupt and prevent AI-Facilitated Abuse.

Required skills and experience

The successful applicant(s) will have an excellent academic track record in Criminology, Law, IT or another relevant discipline (e.g., Sociology, Political Science, Gender Studies, Psychology), and a background and interest in AI technologies. They will need a basic understanding of AI and of qualitative and quantitative methodologies, as well as skills in IT and in Criminology or Law.

Applicants will be considered provided that they fulfil the criteria for PhD admission at Monash University. Details of eligibility requirements to undertake a PhD in the Faculty of Arts are available at

Candidates will be required to meet Monash admission requirements which include English-language proficiency skills. Scholarship holders must be enrolled full time and on campus.