Ethical issues in the use of machine learning for clinical decision support
Project supervisors
- Professor Robert Sparrow, Faculty of Arts (main supervisor)
- Professor Patrick Kwan, Monash Institute of Medical Engineering
Areas of research
- Applied ethics; bioethics; ethics of AI
Project description
Machine learning (ML) has the potential to generate exciting advances in medicine. An important class of applications of machine learning involves the provision of decision support to clinicians. By combining the power of machine learning, big data, and human judgement based on clinical experience, such clinical decision support systems (CDSS) leverage the strengths of both artificial intelligence and human beings. However, multiple human factors may undermine the future application of ML-based CDSS in clinical practice, including ethical concerns that may affect the acceptance of these tools by physicians and patients. How best to bring together human and algorithmic decision making is recognised as one of the key challenges in the successful application of these systems in clinical practice. Unless this barrier is adequately addressed, there is a serious risk that the hefty investment in developing ML-assisted clinical decision support will be “wasted”.
Professor Kwan, at the Monash Institute of Medical Engineering, leads a team of researchers who have recently developed a world-first ML model that can predict response to, and therefore help select, the initial antiseizure medication for individual adults with newly diagnosed epilepsy. The PhD student will work with Professor Robert Sparrow, one of Australia’s leading researchers in applied ethics, and Professor Kwan to improve understanding of the ethical issues, human factors, and challenges associated with the use of this ML-based CDSS tool for epilepsy treatment planning. This interdisciplinary project, which sits at the intersection of philosophy, applied ethics, and medicine, will identify key ethical challenges and evaluate policy instruments and design choices that could help maximise the benefits of this tool for patients, physicians, and the broader community.
PhD student role description
This project addresses the ethical issues that arise in the use of machine learning in clinical decision support systems in medicine. The successful applicant will collaborate with researchers in philosophy, applied ethics, and the social sciences who are working to understand these challenges. They will develop their skills in argument, critical analysis, and ethical reasoning to address urgent ethical and policy questions. They will hone their skills in qualitative research methods by interviewing physicians and patients about their hopes for, and experience with, machine learning systems. The skills and experience gained through this process will allow them to become a leader in the emerging field of the ethics of medical AI and render them well-placed to pursue career opportunities in bioethics, applied ethics, public policy, and medical regulation.
Required skills and experience
The successful applicant will have an honours degree (or equivalent) with a major in philosophy, bioethics, or medical humanities, and a demonstrated capacity to develop ethical arguments about new technologies. Qualitative research skills are highly desirable.
Expected start date
- January 2023