Embodied Visualisation research

Explore the range of innovative research and software produced by the Embodied Visualisation research group.

Immersive Geovisualisation: Visualising geospatial data with virtual reality and augmented reality

Researchers: Bernie Jenny, Zeinab Ghaemi, Sarah Goodwin, Barrett Ens, Jiazhou ‘Joe’ Liu, Benjamin Lee, Kadek Ananta Satriadi (UniSA), Kurtis Danyluk (University of Calgary), Wesley Willett (University of Calgary), Maxime Cordeil (University of Queensland), Tobias Czauderna (Hochschule Mittweida)

What are the most intuitive and least fatiguing ways to interact with maps in virtual reality and augmented reality? How do we best zoom and move virtual maps? How can we use virtual maps outdoors? And how can we place virtual bar charts and other diagrams in the real world? We explore these and many similar questions related to immersive geovisualisation, an exciting new field focusing on the visualisation of spatial data with virtual reality and augmented reality, indoors and outdoors.

Machine learning for cartography

Researchers: Bernie Jenny, Dilpreet Singh, Bridget Walker, Tom Patterson, Magnus Heitzler (ETH Zurich), Lorenz Hurni (ETH Zurich), Marianna Farmakis-Serebryakova (ETH Zurich)

Creating beautiful and informative maps requires costly manual labour. We use machine learning to transfer the aesthetics and outstanding readability of manual maps to digital cartography. Our goal is to accelerate the production of maps and enable everybody to create maps that are easy to read and a pleasure to look at. We develop neural networks that create shaded relief images, contour lines, and coastlines for a variety of map scales.
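For context, the conventional baseline that learned relief shading aims to improve upon is analytical (Lambertian) hillshading computed directly from a digital elevation model. The sketch below is illustrative only, not the group's code; the function name and parameters are assumptions.

```python
import numpy as np

def hillshade(elevation, cellsize=30.0, azimuth=315.0, altitude=45.0):
    """Analytical (Lambertian) hillshade of a DEM grid.

    elevation: 2D array of heights; cellsize: ground distance per cell.
    azimuth/altitude: light direction in degrees (cartographic convention,
    light from the upper left by default).
    """
    az = np.radians(360.0 - azimuth + 90.0)   # compass -> math angle
    alt = np.radians(altitude)
    # Terrain gradients: axis 0 is rows (y), axis 1 is columns (x)
    dzdy, dzdx = np.gradient(elevation, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    # Cosine of the angle between surface normal and light direction
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

A neural relief shading model is typically trained as an image-to-image mapping from elevation grids to shaded relief drawn by cartographers, so it can reproduce generalisation and local light adjustments that this simple per-cell formula cannot.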

The image shows a shaded relief image of the Yarra ranges (east of Melbourne) created with a neural network.

Immersive Visualisation and Analysis of Medical Imaging Data

Active Industry projects
Researchers: Vahid Pooryousef, Tim Dwyer, Richard Bassed, Maxime Cordeil (University of Queensland), Lonni Besançon (Linköping University)
Industry partners: Victorian Institute of Forensic Medicine (VIFM)

In partnership with the Victorian Institute of Forensic Medicine (VIFM), we aim to develop an immersive system that removes barriers between end users (medical experts), medical imaging data, and artificial intelligence services through immersive technology (virtual and augmented reality), improving the efficiency and quality of healthcare services. So far, we have developed a prototype that combines various interaction and visualisation techniques for efficient analysis of medical images, together with easy access to medical reports and on-the-fly documentation in an immersive environment. Going forward, we plan to continuously improve the workflow of forensic experts at VIFM and to support their transition from 2D desktop applications to 3D immersive applications.

ADaPt EH: Actionable data for clinicians and external accreditors in support of quality care provision and continuous accreditation

Active Industry projects
Duration: 2021–2025
Researchers: Michael Wybrow, Agnes Haryanto, David Cheng Zarate
Industry partners: Eastern Health, Australian Council of Healthcare Standards, Victorian Department of Health, Digital Health CRC

The Actionable Data project aims to develop and prove a reusable framework for delivering live streaming clinical analytics and reporting for quality improvement and hospital accreditation in an Australian digital hospital with a stand-alone EMR and a statewide incident monitoring system. The framework is agnostic to the specific Electronic Medical Record (EMR) and Incident Management System (IMS) in use. The project will demonstrate the impact of the framework on clinical practice, hospital audit (QPI) teams, and external auditing agencies, and will document a roadmap for Australian hospitals to adopt live digital dashboard generation supported by new models of proactive and continuous quality improvement and accreditation.

MobileDLSearch: Ontology-based Mobile Platform for Effective Sharing and Reuse of Deep Learning Models

Researchers: Zhangcheng Qiang, Yuxin Zhang, Pari Delir Haghighi, Abdur Forkan, Prem Prakash Jayaraman (Swinburne University)

This study introduces an ontology-based platform (MobileDLSearch) that offers end-users greater flexibility to store, query, share and reuse pre-trained DL models for various mobile applications. The ontology represents various DL models with different backends (e.g. TensorFlow, Keras and PyTorch) and is used to semantically search and retrieve DL models using an intuitive and interactive user interface. The implemented system also provides an automatic model converter to optimise desktop/laboratory-oriented pre-trained DL models for mobile platforms, and has an on-device real-time model integration module to benchmark the model's performance on mobile devices.
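The core idea of searching a catalogue of pre-trained models by semantic metadata can be illustrated with a minimal registry. This is a hedged sketch, not the MobileDLSearch implementation: the real system uses a formal ontology rather than flat records, and all names here (`ModelRecord`, `ModelRegistry`, the example models) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Illustrative metadata for one pre-trained DL model."""
    name: str
    backend: str        # e.g. "TensorFlow", "Keras", "PyTorch"
    task: str           # e.g. "image classification"
    input_shape: tuple  # expected input tensor shape

class ModelRegistry:
    """Toy stand-in for an ontology-backed model catalogue."""

    def __init__(self):
        self._models = []

    def register(self, record):
        self._models.append(record)

    def search(self, backend=None, task=None):
        """Return all records matching every given criterion
        (case-insensitive); omitted criteria match anything."""
        def matches(r):
            return ((backend is None or r.backend.lower() == backend.lower())
                    and (task is None or r.task.lower() == task.lower()))
        return [r for r in self._models if matches(r)]
```

An ontology adds what this flat lookup lacks: class hierarchies and relations (for example, that a Keras model is also a TensorFlow-backend model), so queries can retrieve semantically related models rather than exact string matches only.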
