Research in Human-centered AI

The dominant paradigm for interacting with computers now involves new media and multimodal input on mobile devices, such as speech, images, gestures, gaze, handwriting, multi-touch, bio-signals, and a multitude of other sensors. These interfaces support human performance better than the keyboard-centric interfaces of the past, and they are proliferating rapidly on everything from smartwatches to automobiles to robots.

Our group is developing new “deeply human-centered” systems at the boundary of HCI and AI that can identify a person’s emotional, cognitive, and health state, and then use this information to provide more personalized, adaptive interfaces for health, education, and other domains.