Why bioethics matters more than ever
If you had a serious illness, would you be comfortable with a machine making a recommendation about your treatment plan? Do you have privacy concerns about your health information being stored in large electronic databases?
Artificial intelligence (AI) is a growing global phenomenon that promises great strides forward for our quality of life, but there’s also controversy around whether the benefits outweigh the risks. In medicine and healthcare, machines now use electronic health data to diagnose patients, recommend treatments and select patients for treatment opportunities. While enthusiasm for innovation and prognostic accuracy often outpaces risk assessment, critical analysis in the field can be lacking. And that’s where ethicists come in.
As a Monash PhD candidate in bioethics, Josh Hatherley examines the ethical, social and political implications of AI in medicine and beyond. Josh’s work in the medical sector opened his eyes to care management and sparked his interest in socio-political medical ethics. After graduating with a Monash Master of Bioethics, he decided to launch into a PhD, securing world-renowned philosopher Professor Robert Sparrow as his supervisor.
‘People in the technology and medical industries are calling out for ethicists,’ says Professor Sparrow. ‘There’s currently a scarcity of people with the relevant expertise, so there are a great number of opportunities for PhD candidates looking to conduct interesting, important and necessary research.’
Josh’s experience reflects Professor Sparrow’s assertions. ‘I’ve been amazed at the number of opportunities that presented themselves as soon as I started my PhD. There’s a huge need for people with bioethics training.’
While there are definite benefits to machine labour, Josh maintains that risk analysis must keep pace, because the implications of using AI in such a personal and intimate field are dangerously underestimated.
‘Machines have bias, through use of existing datasets, the programming of algorithms, and the humans who interpret the results,’ says Josh. ‘We need to find a better balance between innovation and assessing risks associated with new technologies and implementing necessary safeguards.’
Josh explains that some of the negative implications of AI are already being identified. Electronic health records, for example, are prone to attacks, and hackers have a strong financial incentive to steal and sell prescription data. He says that in our enthusiasm for innovation, we forget that a new kind of medicine has emerged, one that needs separate analysis, and that its risks need to be taken much more seriously.
‘We need more research into these issues, because otherwise we as a society aren’t going to be equipped for their future implications. Without this, we’re sleepwalking into the technological future.’
Professor Robert Sparrow is currently accepting applications for PhD candidature from exceptional scholars with a strong grounding in philosophy.