Dr. Michel Valstar: Facial Expression Recognition in the Age of Deep Learning

Event Details

17 Sep 2019 10:30 am - 11:30 am
Clayton Campus: Room G12A, 14 Rainforest Walk
Caulfield Campus: Video Conference to Room 215, Building B
IT research seminars


Speaker: Dr. Michel Valstar


Behaviomedics is the application of automatic analysis and synthesis of affective and social signals to aid objective diagnosis, monitoring and treatment of medical conditions that alter one's affective and socially expressive behaviour. It can be used to create new digital tools for addressing life-changing conditions such as depression, anxiety, chronic pain, Autism Spectrum Disorder and ADHD.

Join Dr Michel Valstar, Associate Professor of Computer Science at the University of Nottingham and a member of the Computer Vision and Mixed Reality Labs, as he rethinks the notion of behaviour assessment in relation to machine learning systems.

In an era where the interpretability of machine learning systems is increasingly a basic requirement, emotion recognition and other forms of higher-level behaviour analysis are especially pertinent, as they build on objective assessment methods. During this session you will:

  • Learn about Dr Valstar's lab's efforts in the objective assessment of expressive behaviour.
  • Discover three areas where Dr Valstar has applied expressive behaviour analysis to the automatic assessment of behaviomedical conditions, namely depression analysis, distinguishing ADHD from ASD, and measuring the intensity of pain in infants and in adults with shoulder pain.
  • Understand how Virtual Humans can be used to aid the screening, diagnosis and monitoring of behaviomedical conditions.

Presenter bio

Michel Valstar is an associate professor at the University of Nottingham and a member of both the Computer Vision and Mixed Reality Labs. He received his master's degree in Electrical Engineering from the Delft University of Technology in 2005 and his PhD in computer science from Imperial College London in 2008, and was a Visiting Researcher at MIT's Media Lab in 2011. He works in the fields of computer vision and pattern recognition, where his main interest is the automatic recognition of human behaviour, specialising in the analysis of facial expressions.

He is the founder of the Facial Expression Recognition challenges (FERA 2011/2015/2017) and the Audio-Visual Emotion recognition Challenge series (AVEC 2011-2019). He was the coordinator of the EU Horizon 2020 project ARIA-VALUSPA, which built next-generation virtual humans; he is deputy director of the £6M Biomedical Research Centre's Mental Health and Technology theme; and he is a recipient of Bill & Melinda Gates Foundation funding to help premature babies survive in the developing world, work which won the FG 2017 best paper award. His work has received popular press coverage in, among others, Science Magazine, The Guardian, New Scientist and on BBC Radio. He has published over 90 peer-reviewed papers at venues including PAMI, CVPR, ICCV, SMC-Cybernetics, and Transactions on Affective Computing (h-index 38, >8,400 citations).

Event Contact

Marketing, Faculty of IT