Ballistics: What are the limitations of quantifying the characteristics of gunshot as a cause of death?
Led by Dr Xiaojun Chang
Funded by Leidos, a global IT company with a background in military health, this project aims to improve the tracking of bullet trajectories and fragments in the human body. In collaboration with the Victorian Institute of Forensic Medicine and the state coroner, the work leverages machine learning techniques and CT scans to create 3D models of human anatomy. The results are expected to enhance the quality of evidence and reduce the need for invasive post-mortems.
Immersive Visual Analytics for Medical Images
Cordeil M, Bassed R, Dimmock M and Dwyer T
A project funded by the Human-in-the-Loop Analytics (HiLA) GRIP program. Medical imaging technology captures slices of the 3D internal structure of the human body (e.g. bones, tissues, organs) and creates 3D digital images (e.g. CT and MRI scans). Currently, doctors and medical and legal practitioners visualise these 3D data on 2D computer screens and explore the slices in order to investigate a disease or an injury. Today, Augmented and Mixed Reality (AR/MR) technology allows us to visualise and interact with 3D data in an immersive way; for example, using a mixed-reality head-mounted display such as the Microsoft HoloLens, a user can interact with 3D stereoscopic virtual graphics as if they were anchored in the physical environment. While AR provides a more natural visualisation space for this type of data, interacting with a 3D scan in augmented reality remains challenging. For example, how can we browse the slices of a 3D scan? How do we extract useful information? How do we zoom into certain regions of the body? Promising ways to better interact with immersive 3D visualisations include:
- Spatial interaction: for example, using gestures such as pointing or swiping to select regions of interest in a 3D visualisation
- Tangible interaction: for example, using physical devices to support fine-grained selection of CT scan slices
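To illustrate the tangible-interaction idea, the sketch below maps a normalised reading from a physical dial or slider (a value between 0.0 and 1.0) to an axial slice of a 3D scan volume. This is a minimal, hypothetical example: the synthetic NumPy volume and the `slice_from_dial` function are assumptions for illustration, not part of the project; a real system would load DICOM data and render the slice in the immersive display.

```python
import numpy as np

# Hypothetical stand-in for a CT scan: a 3D voxel array of shape
# (depth, height, width). A real scan would be loaded from DICOM files.
volume = np.random.rand(128, 256, 256)

def slice_from_dial(volume, dial_position):
    """Map a normalised dial reading in [0.0, 1.0] to an axial slice.

    A physical dial gives fine-grained control over which slice of the
    volume is shown, avoiding imprecise mid-air selection gestures.
    """
    depth = volume.shape[0]
    # Clamp so that dial_position == 1.0 still yields a valid index.
    index = min(int(dial_position * depth), depth - 1)
    return index, volume[index]

index, axial_slice = slice_from_dial(volume, 0.5)
print(index, axial_slice.shape)  # slice 64 of 128, a 256x256 image
```

The clamping step matters because a dial at its maximum position would otherwise index one past the end of the volume.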