6 May 2019
Last week Dr Mor Vered joined the Faculty of Information Technology as a Lecturer in the new Laboratory for Dialogue Research. She brings along wide-ranging knowledge in the rapidly changing field of artificial intelligence (AI) and the drive to push the boundaries of technological innovation.
Mor has devoted her career to probing the interaction between humans and intelligent agents (programs that can make decisions or perform services based on their environment, user input and experience). “I incorporate lessons and inspirations from cognitive science, neuroscience and biology,” she explains. “I am a firm believer that only by focusing on interdisciplinary studies can we achieve results that can strongly impact human life.”
Mor recently completed a PhD at Bar-Ilan University in Ramat Gan, Israel. In 2018, she won the prestigious Israeli Association for Artificial Intelligence Outstanding Dissertation Award for her doctoral thesis entitled “Mirroring: A General Approach to Plan and Goal Recognition”, which investigated methods for predicting the plans and intentions of agents in continuous environments.
“I have since begun working on Explainable AI, generating explanations built on cognitive theories,” says Mor. “Because such explanations need to be easily understood by humans, situation awareness models should be taken into account.” Mor’s research interests extend to social human-agent interaction, cognitive modelling and psychology.
While working last year as a Research Fellow in Human-Agent Planning at the University of Melbourne, Mor co-wrote “What were you thinking?”, an article addressing the dangers of relying blindly on AI for important decisions, and the need for transparency. Her video “Pitch it clever: Human-centred Explainable AI” illustrates these issues perfectly.
Mor is lead CI on a 2018 Defence Science Institute Collaborative Research Project, “Behaviour Recognition in Real Time, Continuous Environments”. She has also collaborated on a Defence Science and Technology Group research project called “Why? Causal Explanations in Trusted Automated Systems” at the Centre for Eye Research Australia.