Human Centered AI
Research in Human Centered AI
The dominant paradigm for interacting with computers now involves new media and multimodal input on mobile devices—such as speech, images, gestures, gaze, writing, multi-touch, bio-signals and a multitude of sensors. These new interfaces provide better support for human performance than keyboards of the past, and they are proliferating rapidly on everything from smart watches to automobiles to robots.
Our group is developing new “deeply human-centered” systems at the boundary of HCI and AI that can identify a person's emotional, cognitive, and health status, and then use this information to deliver more personalised and adaptive interfaces for health, education, and other areas.

Identifying the socio-affective competence of robots
Researchers: Leimin Tian, Sharon Oviatt
Robotics and AI technology has been applied to many domains, such as health care and education. In these applications, it is important to maintain a human-centered design approach and to understand any influence the system may have on the mental and physical well-being of its users. The goal of this project is to identify the major attributes that influence humans' perception of the socio-affective competence of robots. We aim to develop a long-term human-robot interaction system that is personalised, adaptive, and socially acceptable.

Multimodal behavioural analytics
Researchers: Sharon Oviatt, Jionghao Lin, Abishek Sriramulu, Jesse Tse, Chathurika Palliya Guruge
Multimodal behavioural analytics analyses human behavioural and communication signals (e.g., speech, handwriting, physical movements) to identify users' mental or health status. In our lab, it is currently being applied to detecting users' level of domain expertise, for example in mathematics. It is also being applied to detecting users' health and mental health status, for example diagnostics for dementia, depression, and neuro-degenerative disease. It aims for ultra-reliable prediction (98-100%) by combining techniques that centre on signal analysis, user behavioural modelling, machine learning, and scientific research uncovering causation.
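As an illustration only, the sketch below shows the general shape of a pipeline this style of analytics can rely on: simple per-modality features (here, toy speech-energy and pen-pressure statistics) are fused at the feature level and fed to a classifier. The feature definitions, toy data, and model choice are hypothetical assumptions for exposition and do not describe the lab's actual system.

```python
# A minimal, illustrative sketch of a multimodal behavioural-analytics pipeline.
# All feature names and the toy data are hypothetical; the lab's actual features,
# models, and fusion strategy are not described here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def speech_features(audio_energy: np.ndarray) -> np.ndarray:
    """Summarise a per-frame speech-energy signal into simple statistics."""
    return np.array([audio_energy.mean(), audio_energy.std(),
                     np.percentile(audio_energy, 90)])

def handwriting_features(pen_pressure: np.ndarray) -> np.ndarray:
    """Summarise a per-sample pen-pressure signal into simple statistics."""
    return np.array([pen_pressure.mean(), pen_pressure.std(), pen_pressure.max()])

def fuse(audio_energy: np.ndarray, pen_pressure: np.ndarray) -> np.ndarray:
    """Feature-level (early) fusion: concatenate per-modality feature vectors."""
    return np.concatenate([speech_features(audio_energy),
                           handwriting_features(pen_pressure)])

# Toy dataset: 40 synthetic sessions labelled 0 (novice) or 1 (expert),
# purely for illustration.
rng = np.random.default_rng(0)
X = np.stack([
    fuse(rng.normal(loc=0.5 + 0.1 * (i % 2), scale=0.1, size=200),
         rng.normal(loc=0.3 + 0.2 * (i % 2), scale=0.05, size=200))
    for i in range(40)
])
y = np.array([i % 2 for i in range(40)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

In practice, richer features, user behavioural models, and causal analysis would replace the toy statistics and synthetic labels shown here.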

Human-robot interactions
Researchers: Leimin Tian, Sharon Oviatt
When humans and robots interact (HRI), there is an expectation of a positive relationship with social and emotional interaction. However, in current HRI systems, errors in a robot's socio-affective competence, or in our social relationship with it, are common. This project aims to understand the impact of social errors in HRI and to identify effective strategies for managing failures, while also utilising these social errors as learning opportunities for the long-term personalisation of HRI. We’re teaching robots to learn from their mistakes.
How a robot’s ability to express emotions influences human-robot collaboration
Researchers: Leimin Tian, Shujie Zhou
A crucial factor that influences the efficacy of a human-robot team is the robot’s ability to communicate its actions and plans using emotion. Emotional expressions are intuitive for humans to observe, perceive, and understand. Our research aims to implement an emotional Belief-Desire-Intention (BDI) agent model. Working with the Cozmo robot, we will explore how a robot’s ability to express emotions influences human-robot collaboration. Moreover, we will study how robots can generate adaptive behaviour using the emotional BDI model.
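To make the idea concrete, the sketch below shows one way an emotional appraisal step could sit inside a BDI-style perceive-deliberate-act loop, with the resulting emotion attached to the agent's expressive behaviour. The belief and goal names, the appraisal rule, and the emotion-to-expression mapping are hypothetical, and integration with the physical Cozmo robot (e.g. via its Python SDK) is omitted.

```python
# A minimal, illustrative sketch of an emotional BDI-style agent loop.
# The belief/desire contents, the appraisal rule, and the mapping from emotion
# to expressive behaviour are hypothetical assumptions; controlling the actual
# Cozmo robot is not shown.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EmotionalBDIAgent:
    beliefs: dict = field(default_factory=dict)   # what the agent thinks is true
    desires: list = field(default_factory=list)   # goals it would like to achieve
    intention: Optional[str] = None               # the goal it is committed to
    emotion: str = "neutral"                      # current affective state

    def perceive(self, observation: dict) -> None:
        """Update beliefs from new observations of the shared task."""
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        """Commit to the first desire that is not yet satisfied."""
        for goal in self.desires:
            if not self.beliefs.get(f"{goal}_done", False):
                self.intention = goal
                return
        self.intention = None

    def appraise(self) -> None:
        """Simple appraisal: emotion reflects progress on the current intention."""
        if self.intention is None:
            self.emotion = "neutral"
        elif self.beliefs.get(f"{self.intention}_blocked", False):
            self.emotion = "frustrated"
        elif self.beliefs.get(f"{self.intention}_done", False):
            self.emotion = "happy"

    def act(self) -> str:
        """Return an expressive action communicating both plan and emotion."""
        return f"pursue '{self.intention}' while displaying '{self.emotion}'"

# Usage: one cycle of the perceive-deliberate-appraise-act loop.
agent = EmotionalBDIAgent(desires=["stack_block"])
agent.perceive({"stack_block_blocked": True})
agent.deliberate()
agent.appraise()
print(agent.act())  # pursue 'stack_block' while displaying 'frustrated'
```

The point of coupling appraisal to the deliberation cycle is that the robot's expressed emotion stays grounded in its current plan and its progress, which is what makes the expression informative to a human teammate.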
Boosting activity with wearable devices
Researchers: Jarrod Knibbe, Graeme Gange
Inactivity is costing Australia billions of dollars each year. Chronic illness was the leading cause of death amongst males in 2016, and the second most prevalent cause of death in females. Mental health disorders plague our younger populations. The primary risk factor across all of these burdens: inactivity. How do we boost activity across the population? How do we raise skill levels in sports to increase participation? This project aims to develop electronic garments that can automatically improve skilled performance in sports, and thus lead to greater participation. This work will solve outstanding challenges in garment development and non-invasive muscle stimulation, making a significant impact on both the eHealth and sports industries.