FIT honours projects

Project Title: Solving Problems in Augmented Reality

Supervisor: Dr Guido Tack

Student cohort:

Both

Background:

Constraint problems occur everywhere in real life: from Sudoku puzzles to your assignment to lab classes to the crew scheduling for your holiday flights. In the last decade, very fast algorithms have been developed for solving these problems. These algorithms need to be fed with data (e.g. the Sudoku puzzle clues, your lab preferences or the pilot's annual leave schedule). Augmented Reality is an exciting area of research that aims to overlay computer-generated information on real-world objects, for example on your phone's screen or through special glasses or other hardware. Wouldn't it be great to solve optimisation problems by just pointing your phone at them?

Aim and Outline:

The goal of this project is to explore what kinds of constraint optimisation problems can be described using real-world objects and solved in augmented reality. A basic example would be a Sudoku tool that can be pointed at a Sudoku puzzle in a magazine and will either solve it for you or give you a hint. We then want to explore what other types of problems could be modelled using physical objects: Can we describe a scheduling problem using Lego bricks? A timetabling problem using post-its? A car sequencing problem using actual cars? A vehicle routing problem using a paper map of the city?
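To make the Sudoku example concrete, a minimal sketch of the solving back end such a tool might call once the clues have been extracted from the camera image. This is an illustrative plain backtracking solver, not the MiniZinc-based pipeline the project would actually use:

```python
# Minimal backtracking Sudoku solver: a hypothetical stand-in for the
# constraint solver an AR tool would invoke after reading the clues.

def solve(grid):
    """Solve a 9x9 Sudoku in place; grid uses 0 for empty cells."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if ok(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next value
                return False  # no value fits this cell: backtrack
    return True  # no empty cell left: solved

def ok(grid, r, c, v):
    """Check the row, column and 3x3 box constraints for value v."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))
```

In the project itself, the interesting part is everything around this: recognising the grid and digits in the camera image, and overlaying the solution (or a hint) back onto the physical puzzle.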

You will work with modern AR and optimisation toolkits, investigate how these can be made to cooperate, and what kind of physical models users actually want to interact with.

URLs and References:

  1. www.minizinc.org
  2. https://en.wikipedia.org/wiki/Augmented_reality

Pre- and Co-requisite Knowledge: We require solid programming skills, preferably on mobile operating systems (Android or iOS). Some background in problem solving and machine learning (e.g. from some introductory AI unit) will be useful.


Project Title: Custom search strategies in Mixed-Integer Programming

Supervisors:  Prof. Peter Stuckey, Prof. Mark Wallace, Dr. Gleb Belov (FIT)

Student cohort:

Honours 24pt

Background:

For combinatorial optimization problems, a custom search strategy can be vital for finding good solutions as well as for efficient optimal solving. While this area is well established for Constraint Programming (CP), it is far less so for Mixed-Integer Programming (MIP).

The most efficient MIP solvers are commercial (Gurobi, IBM ILOG CPLEX, FICO XPRESS, etc.). They allow only partial control of the search, e.g. via branching callbacks. In contrast, open-source solvers (SCIP, COIN-OR CBC) allow full control.

An intrinsic feature of the MIP solution process is that it is guided by the LP relaxation. Thus, an efficient implementation of a custom search will likely be a soft rule that combines user preferences with the LP relaxation's solution.
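As an illustration of such a soft rule, the sketch below scores candidate branching variables by blending a user-supplied priority with how fractional each variable is in the current LP relaxation solution. All names and the scoring formula are illustrative assumptions, not the callback API of any particular solver:

```python
# Hypothetical branching rule: combine user priorities with LP fractionality.
# A variable whose LP value sits near x.5 is the most "undecided" one.

def choose_branching_var(lp_values, priorities, weight=0.5, tol=1e-6):
    """Return the index of the integer variable to branch on, or None if
    the LP solution is already integral (within tol).

    lp_values  -- LP relaxation values of the integer variables
    priorities -- user preferences in [0, 1], higher = branch earlier
    weight     -- trade-off between priorities and fractionality
    """
    best, best_score = None, -1.0
    for i, x in enumerate(lp_values):
        frac = abs(x - round(x))
        if frac <= tol:
            continue  # already integral: not a branching candidate
        # fractionality score peaks at 1.0 when the fractional part is 0.5
        frac_score = 1.0 - abs(frac - 0.5) / 0.5
        score = weight * priorities[i] + (1.0 - weight) * frac_score
        if score > best_score:
            best, best_score = i, score
    return best
```

In a real solver this logic would live inside a branching callback (commercial solvers) or a branching rule plugin (SCIP, CBC), where the LP values and candidate lists are provided by the solver.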

Some high-level constraints in CP, such as cumulative, which restricts the amount of a renewable resource consumed during schedule execution, require large numbers of variables and constraints when decomposed into a MIP representation. On the other hand, CP tools might be much more efficient at resolving these constraints, using established search strategies and propagation/learning routines. This suggests yet another, related topic: hybridization of CP and MIP search for specific constraints, along the lines of SCIP and Google OR-Tools.

Aim and Outline:

The aim is faster solving of combinatorial optimization problems through the design, implementation, and testing/benchmarking of efficient rules for following custom search strategies in MIP, and through the integration of custom or automatic search strategies for established high-level constraints. The implementation can use the MIP interface of the modeling system MiniZinc.

URLs and References:

  1. H. P. Williams. Model Building in Mathematical Programming. 5th ed., Wiley, 2013
  2. Toby O. Davies, Graeme Gange, Peter J. Stuckey. Automatic Logic-Based Benders Decomposition with MiniZinc. 2017
  3. www.gurobi.com
  4. scip.zib.de
  5. https://www.ibm.com/analytics/cplex-optimizer
  6. https://developers.google.com/optimization/
  7. Global constraint catalog: http://sofdem.github.io/gccat/

Pre- and Co-requisite Knowledge: C++ is the programming language necessary to tweak most open-source solvers and the first choice for controlling commercial solvers (other languages can be used there, but may be less efficient). Thus, we require proficiency in C++.


Project Title:  Automatically Seating your Guests

Supervisor: Prof. Maria Garcia de la Banda

Student cohort:

Both

Background:

Seating the guests at a particular event (e.g., a UN dinner or a wedding) is a classical discrete optimisation problem. Solving this problem by hand can be a very delicate and time-consuming task if there are many guests and constraints. For example, one might want to ensure every guest has at least one other guest within reach with whom they share a language, or ensure that no guest from country/family A sits within reach of a guest from country/family B. Users might also want to optimise different objectives, such as minimising the age differential for guests sitting at particular tables, or maximising the number of guests at a table who already know each other.

Further, the problem might be such that no solution exists (i.e., all possible assignments of guests to tables violate at least one constraint). If so, the aim is to find solutions that minimise the number of constraint violations. And even if solutions do exist, things might change: guests might cancel and/or new guests might be invited.

Aim and Outline:

The aim of this project is to explore the suitability of the MiniZinc system to (a) model this problem in as generic terms as possible, and (b) solve it in a reasonable amount of time, depending on the number of guests, the number and complexity of constraints, competing objective functions, and re-optimisations due to changes to the original plan. The project will explore the efficiency of different models and solvers, and the usability of different forms of user input (such as a tablet).
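To fix ideas, a toy brute-force baseline for the soft-constraint version of the problem: assign each guest to a table and count violated "must not share a table" constraints. A MiniZinc model would replace this enumeration with a real solver; the data shapes here are made up for the example:

```python
from itertools import product

# Illustrative brute force for the seating problem: minimise the number of
# violated "guests a and b must not share a table" soft constraints, with
# table capacities as hard constraints.

def best_seating(n_guests, n_tables, capacity, forbidden_pairs):
    """Return (violations, assignment), where assignment[g] is the table
    of guest g, minimising the number of violated forbidden pairs."""
    best = (float("inf"), None)
    for assign in product(range(n_tables), repeat=n_guests):
        # hard constraint: no table over capacity
        if any(assign.count(t) > capacity for t in range(n_tables)):
            continue
        # soft constraints: count forbidden pairs sharing a table
        v = sum(1 for a, b in forbidden_pairs if assign[a] == assign[b])
        if v < best[0]:
            best = (v, assign)
    return best
```

This enumeration is exponential in the number of guests, which is exactly why the project uses a constraint solver instead; the same model (capacities hard, forbidden pairs soft) translates directly into MiniZinc.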

URLs and References:

The popular MiniZinc system has been developed at Monash and is publicly available at https://www.minizinc.org/

Pre- and Co-requisite Knowledge:

We require solid programming skills, preferably in C++ or in mobile operating systems (Android or iOS). Background in problem solving and optimisation would be useful.


Project Title:  Experiments with Topic Models using Side Information

Supervisors:  Wray Buntine

Student cohort:

Minor thesis 18pt

Background:

Topic models perform clustering of document collections.

https://en.wikipedia.org/wiki/Topic_model

Recent techniques dramatically improve performance by introducing so-called side information, including things like year of publication, author, and synonym sets of words. This better enables the models to work on tweets and to deal with structured documents such as papers with sub-sections. We have one such improved topic modelling algorithm, called MetaLDA, developed at Monash, that is state of the art in terms of performance.

Aim and Outline:

There are a number of interesting scenarios we would like to test out using MetaLDA.  How well can we model structured documents (for instance, medical articles that have different sections) and tweets or document collections whose nature should vary from year to year?  Moreover, can we model cross-lingual collections using cross-lingual side information about words?  So the aim is to develop a number of scenarios to test the system, prepare the text collections, dig out cross-lingual word embeddings (for instance), perform the analysis, and interpret results.

URLs and References:

MetaLDA software, an extension of Mallet, is available at https://github.com/ethanhezhao/MetaLDA

The theory paper explaining the method is

https://arxiv.org/abs/1709.06365

Pre- and Co-requisite Knowledge:

Good Python skills to do basic text munging and text database manipulation.

Basic understanding of probabilistic models to gain an understanding of what the algorithm does. General practical machine learning knowledge. Flair for scripting and experimenting.


Project Title:  Clustering for hierarchical time series forecasting with big time series data

Supervisors: Christoph Bergmeir

Student cohort:

Both

Background:

Time series forecasting with large amounts of data is becoming more and more important in many fields. In this project, we will work with data from a large optical retail company that sells up to 70,000 different products in 44 different countries in over 6,000 stores worldwide. The goal is to produce accurate sales forecasts, which the company can use for store replenishment and -- more importantly -- supply chain management. The products are mainly produced in China and have several weeks of lead time from production until they can be sold in a store.

Aim and Outline:

The main challenge of this dataset is that many of the products are similar but have a short history, as the assortment changes relatively quickly with fashion trends, so univariate time series forecasting alone may often not be possible due to this short history. In this project, we aim to apply different clustering techniques (k-means, DBSCAN, MML-based clustering) to features extracted from the time series and to features that are known independently (master data). In this way, we can determine the similarity between series and can then use these similarities in subsequent forecasting steps to achieve more accurate forecasts.
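A minimal sketch of the two steps described above, assuming invented example data: extract simple features from each sales series, then cluster the feature vectors with a pure-Python k-means (in the project this would use R packages on the real master data instead):

```python
import random
from statistics import mean, stdev

def features(series):
    """Simple per-series features: mean level, variability, crude trend."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return (mean(series), stdev(series), slope)

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns (centres, clusters)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centre
            j = min(range(k), key=lambda c: dist2(p, centres[c]))
            clusters[j].append(p)
        # recompute each centre as the mean of its cluster
        centres = [tuple(map(mean, zip(*cl))) if cl else centres[j]
                   for j, cl in enumerate(clusters)]
    return centres, clusters
```

The resulting cluster memberships would then feed the subsequent forecasting step, e.g. by pooling the short histories of all series in a cluster.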

Pre- and Co-requisite Knowledge:

R programming, Data Science, Machine Learning, Clustering techniques


Project Title: Probabilistic forecasting with Recurrent Neural Networks

Supervisors: Christoph Bergmeir

Student cohort:

Both

Background:

In modern "Big Data" environments, large quantities of related time series are often available, such as sales time series across different stores and products, or measurements from many similar machines, e.g. wind farms, server farms, etc. Forecasting in this case with traditional univariate forecasting procedures leaves great untapped potential for producing more accurate forecasts. Consequently, big tech companies such as Amazon, Microsoft, and Uber have recently started to tap the enormous potential of such datasets, using deep learning methods in very successful ways for forecasting on their vast data. Recurrent neural networks are particularly promising, and my research team has already developed various models in this space.

Aim and Outline:

The aim of this thesis is to extend our existing models in our existing software framework towards probabilistic forecasting, i.e., instead of point forecasts we aim to produce prediction intervals. This is important for many applications such as electricity demand forecasting, sales forecasting, and others. In particular, this project will investigate the use of different loss functions (pinball loss, CRPS, interval score, and other scoring methods), integrate these into our framework, and perform a systematic experimental study around them. Implementation is in Python, in TensorFlow.
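The pinball (quantile) loss mentioned above is the simplest of these: for a target quantile tau, under-prediction is penalised with weight tau and over-prediction with weight 1 - tau, so a model minimising it learns the tau-quantile rather than the mean. A minimal NumPy-free sketch:

```python
# Pinball loss for quantile forecasts: the workhorse loss for turning a
# point-forecasting network into a prediction-interval network.

def pinball_loss(y_true, y_pred, tau):
    """Average pinball loss over paired observations and tau-quantile
    forecasts (tau in (0, 1), e.g. 0.9 for the upper interval bound)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        # under-prediction (diff > 0) costs tau, over-prediction 1 - tau
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)
```

In a TensorFlow implementation this maps to something like `tf.reduce_mean(tf.maximum(tau * d, (tau - 1) * d))` on the error tensor `d`; training two networks (or two output heads) with tau = 0.05 and tau = 0.95 yields a 90% prediction interval.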

Pre- and Co-requisite Knowledge:

Python, TensorFlow, Data Science, Machine Learning


Project Title:  Deep Learning for Software Internationalization and Localization

Supervisors: Reza Haffari and Chunyang Chen

Student cohort:

Both

Background:

With globalization, the same software, especially mobile apps, may be adopted by users from different countries and cultures. To cater for different cultures, developers have to localize or internationalize their software or mobile apps. Many studies have been carried out to locate need-to-translate strings in software and to adapt the UI layout after text translation into the new language. However, no work has been done on the most important and time-consuming step of the software localization process, i.e., the translation of software text.

Aim and Outline:

In this project, we aim to adapt neural machine translation algorithms for software localization and internationalization. Due to some unique characteristics of software text, we have to customize the original translation model by incorporating domain knowledge such as sentence length, domain-specific rare words, and app-specific meanings. We have collected a large-scale parallel corpus from thousands of commercial apps in Google Play, and will propose an algorithm to help software localization and internationalization.

URLs and References:

https://en.wikipedia.org/wiki/Internationalization_and_localization

https://en.wikipedia.org/wiki/Neural_machine_translation

Pre- and Co-requisite Knowledge:

Strong programming skills

Basic knowledge of software engineering, machine learning or data mining


Project Title: Pathfinding for Games

Supervisors: Daniel Harabor

Student cohort:

Both

Background:

Pathfinding is a fundamental operation in video game AI: virtual characters need to move from location A to location B in order to explore their environment, gather resources or otherwise coordinate themselves in the course of play. Though simple in principle, such problems are surprisingly challenging for game developers: paths should be short and appear realistic, but they must be computed very quickly, usually with limited CPU resources and using only small amounts of memory.

Aim and Outline:

In this project you will develop new and efficient pathfinding techniques for game characters operating in a 2D grid environment. There are many possibilities for you to explore. For example, you might choose to investigate a class of "symmetry breaking" pathfinding techniques which speed up search by eliminating equivalent (and thus redundant) alternative paths. Another possibility involves dynamic settings where the grid world changes (e.g. an open door becomes closed) and characters must re-plan their routes. A third possibility is multi-agent pathfinding, such as cooperative settings where groups of characters move at the same time or where one character tries to evade another.
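For context, the baseline that such techniques improve on is A* over a 4-connected grid with a Manhattan-distance heuristic; symmetry-breaking methods prune the many equivalent shortest paths this search would otherwise expand. A compact sketch:

```python
import heapq

# Baseline grid A*: the starting point for symmetry-breaking speedups.

def astar(grid, start, goal):
    """Shortest 4-connected path length on a grid of 0 (free) / 1 (blocked)
    cells, or None if the goal is unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_list = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best_g = {start: 0}
    while open_list:
        f, g, cur = heapq.heappop(open_list)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue  # stale queue entry: a shorter path was found later
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and not grid[nr][nc]
                    and g + 1 < best_g.get((nr, nc), float("inf"))):
                best_g[(nr, nc)] = g + 1
                heapq.heappush(open_list, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None
```

On open grid maps, most cells expanded by this search lie on interchangeable shortest paths; that redundancy is exactly what symmetry-breaking techniques exploit.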

Successful projects may lead to publication and/or entry to the annual Grid-based Path Planning Competition.

URLs and References:

http://www.harabor.net/daniel/index.php/pathfinding/

Pre- and Co-requisite Knowledge:

Students interested in this project should be enthusiastic about programming. They should also have some understanding of AI Search and exposure to the C and/or C++ programming language.


Project Title: Automated Warehouse Optimisation

Supervisors: Daniel Harabor and Pierre Le Bodic

Student cohort:

Both

Background:

Warehouses are becoming increasingly automated and optimised. A great example is Amazon fulfilment centres (see https://www.youtube.com/watch?v=tMpsMt7ETi8 ). Many computer science problems, ranging from pathfinding to scheduling and facility layout, need to be solved to design efficient warehouses and their systems. These individual problems are not all well formalised and solved yet, and contributions in these directions are bound to have a high scientific and societal impact.

Aim and Outline:

The aim of this project is to formalise one of the problems related to warehouse automation, design methods to solve the problem, and run experiments to assess their performance.

URLs and References:

The videos https://www.youtube.com/watch?v=YSrnV0wZywU and https://www.youtube.com/watch?v=lT4X3cuIHK8 provide further background.

Pre- and Co-requisite Knowledge:

Strong general background in CS, both in theory and practice, and interest in pathfinding and/or optimisation.


Project Title: Identifying Intention in Hand Movement Data

Supervisors: Ingrid Zukerman, Jason Friedman (Tel Aviv University), Mahsa Salehi

Student cohort:

Both

Background:

The movements we make generally show stereotypical, bell-shaped velocity profiles. When we change our mind mid-way through a movement, we can decompose this movement into its two parts (a movement to the first target, followed by a movement to the second target). This decomposition is usually performed offline. In this project, rather than performing slow, offline decomposition, we aim to implement a near real-time system for recognizing the intention of a movement, i.e., which target we are currently heading towards. To this end, we propose to employ time-series analysis and machine learning techniques, in particular Recurrent Neural Networks.

Aim and Outline:

Developing an algorithm to identify changes in the trajectory of hand movements as soon as they occur.

URLs and references:

Susan Aliakbaryhosseinabadi, Ning Jiang, Aleksandra Vuckovic, Kim Dremstrup, Dario Farina & Natalie Mrachacz-Kersting (2015), Detection of movement intention from single-trial movement-related cortical potentials using random and non-random paradigms. Brain-computer Interfaces. https://www.tandfonline.com/loi/tbci20

Anthony Bagnall, Jason Lines, Jon Hills and Aaron Bostrom (2015), Time-series classification with COTE: the collective of transformation-based ensembles. IEEE Transactions on Knowledge and Data Engineering, 27(9), 2522-2535.

Jian Bo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiaoli Li, and Shonali Krishnaswamy (2015), Deep Convolutional Neural Networks on Multichannel Time Series for Human Activity Recognition. In IJCAI 15:3995-4001.

Tamar Flash and Ealan Henis (1991), Arm trajectory modifications during reaching towards visual targets. Journal of Cognitive Neuroscience 3(3), 220-230.

Tamar Flash and Binyamin Hochner (2005), Motor primitives in vertebrates and invertebrates. Current Opinion in Neurobiology 2005.

Tamar Flash and Neville Hogan (1985), The coordination of arm movements: An experimentally confirmed mathematical model. Journal of Neuroscience 5(7), 1688-1703.

Peter Gawthrop, Ian Loram, Martin Lakie and Henrik Gollee (2011), Intermittent control: a computational theory of human control. Biological Cybernetics 104:31-51.

Fazle Karim, Somshubra Majumdar, Houshang Darabi and Shun Chen (2018), LSTM fully convolutional networks for time series classification. IEEE Access 6:1662-1669.

Daniel Martin-Albo, Luis A. Leiva, Jeff Huang, Rejean Plamondon (2016), Strokes of insight: User intent detection and kinematic compression of mouse cursor trails. Information Processing and Management 52(6), 989-1003.

Brandon Rohrer and Neville Hogan (2006), Avoiding spurious submovement decompositions II: a scattershot algorithm.  Biological Cybernetics 94:409-414.

Pre- and Co-requisite Knowledge:

FIT3080/FIT5047 Intelligent systems or equivalent is a mandatory prerequisite, and a strong mathematical and programming background is highly desirable.


Project Title: Big Data Management and Processing

Supervisor: Assoc Prof David Taniar

Student Cohort:

Minor Thesis

Background:

Are you interested in learning Big Data? Big Data is a multi-million dollar industry. This project focuses on processing large data volumes, including high-velocity stream data.

Aim and Outline:

This project aims to solve big data problems by developing programs to answer queries in a timely manner. You will program using the latest Big Data technology, such as Apache Spark.

URLs and References:

http://en.wikipedia.org/wiki/Big_data

Pre- and Co-requisite Knowledge:

You should have done either FIT5148 or FIT5202.


Project Title: Data Warehousing and OLAP

Supervisor: Assoc Prof David Taniar

Student Cohort:

Minor Thesis

Background:

Are you interested in learning Data Warehousing and Business Intelligence? This project focuses on advanced star schemas and OLAP performance.

Aim and Outline:

This project focuses on database design, such as star schemas for traffic, sensors, etc. It aims to solve problems relating to real-time updates in data warehouses.

URLs and References:

https://en.wikipedia.org/wiki/Data_warehouse

Pre- and Co-requisite Knowledge:

You should have done FIT5137 or FIT5195


Project Title: (Google) Maps Databases

Supervisor: Assoc Prof David Taniar

Student Cohort:

Both

Background:

Are you interested in developing algorithms to solve queries such as: "Given my current location, show me the locations of the surrounding restaurants"; or "Given a map containing 100 objects of interest, draw a graph that represents the nearest neighbour of each object"?

Aim and Outline:

This project aims to develop efficient algorithms to process spatial database queries.
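The second example query above can be stated precisely as building the nearest neighbour graph of a point set. A brute-force sketch for reference; the point of the project is to replace this O(n^2) scan with efficient spatial-index-based algorithms (e.g. an R-tree):

```python
from math import dist

# Brute-force nearest neighbour graph: each point gets a directed edge to
# its closest other point. Real spatial databases answer this via indexes.

def nearest_neighbour_graph(points):
    """Return edges (i, j) where point j is the nearest neighbour of i."""
    edges = []
    for i, p in enumerate(points):
        j = min((k for k in range(len(points)) if k != i),
                key=lambda k: dist(p, points[k]))
        edges.append((i, j))
    return edges
```

Note the graph is directed: j being the nearest neighbour of i does not imply the reverse, which is one reason the picture on the Wikipedia page referenced below looks the way it does.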

URLs and References:

http://en.wikipedia.org/wiki/Nearest_neighbor_graph

Pre- and Co-requisite Knowledge:

A strong interest in programming, data structures and algorithms; and a passion for solving puzzles (e.g. what does the picture shown at http://en.wikipedia.org/wiki/Nearest_neighbor_graph mean?)


Project Title: Music Database Processing

Supervisor: Assoc Prof David Taniar

Student Cohort:

Both

Background:

Do you play any classical music instruments? Do you want to combine your advanced musical skills with computer science? This project analyses classical music using computer science techniques.

Aim and Outline:

This project aims to process and analyse classical music recordings, including sonata form analysis, chord progression, concerto identification, etc. You will need to learn the basics of signal processing, and MATLAB.

URLs and References:

https://www.dropbox.com/s/rahz86l6j17tx3s/26742446-JunyaoDeng-MinorThesis.pdf?dl=0

Pre- and Co-requisite Knowledge:

You must be an intermediate music instrument player (e.g. minimum level 5 or 6 piano, violin/cello, brass, woodwind). Prior knowledge of signal processing is desirable.


Project Title: Understanding the impact of network layout on cognitive understanding of Bayesian networks

Supervisors: Prof. Ann Nicholson and Dr. Michael Wybrow

Student cohort:

Both

Background:

BARD: Bayesian Argumentation via Delphi [1] is a software system designed to help groups of intelligence analysts make better decisions. The software was funded by IARPA as part of the larger Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) program. The tool, developed at Monash University, uses causal Bayesian networks as underlying structured representations for argument analysis. It uses automated Delphi methods to help groups of analysts develop, improve and present their analyses.

Aim and Outline:

While Bayesian networks have long been visualised graphically, very little attention has been paid to the layout of these networks. Network layout itself is a well-studied field [2]. We believe network layout is important in Bayesian networks to assist users to define structure, notice similarities or differences, understand the network, or detect errors. This research aims to explore approaches for Bayesian-network-specific layout and to evaluate its importance when using Bayesian networks for cognitive understanding or causal explanations.

The existing network visualisation in the BARD system is based on webcola [3] and React, and was built by researchers at Monash. This project would involve extending the network visualisation component in BARD (to vary the layout and create a simple interface for evaluation), as well as conducting a series of controlled user studies to explore the effects of layout on cognitive understanding.

URLs and References:

[1] https://bayesiandelphi.org

[2] https://en.wikipedia.org/wiki/Graph_drawing

[3] https://ialab.it.monash.edu/webcola/

Pre- and Co-requisite Knowledge:

Some experience with JavaScript would be useful.


Project Title:   Creating Immersive Mobile VR Environments for Pre-navigation for the Vision Impaired with Customization of Audio and Visual Feedback

Supervisor: Matthew Butler

Student cohort:

Both

Background:

Wayfinding and navigation in unfamiliar places is a challenging task for those with a vision impairment, as it is difficult to convey spatial information to them before they visit a site. While solutions in the form of tactile diagrams are available, they are costly to produce and do not convey some spatial information well.

What is needed is a mobile interactive Virtual Reality environment which provides an 'embodied' way to explore a location before visiting it. The aim is to create a mental model for navigating the space and finding the way to significant target locations. This will allow the vision impaired greater mobility and reduce their anxiety when travelling to new locations.

The development environment will use an Oculus Go, 3D modeling and the Unity game engine. This project extends a successful Honours project from 2018.

When creating an interactive mobile VR environment appropriate audio and visual feedback needs to be embedded. The proposed mobile VR environment will allow customization of environmental sound cues, narrated information points and visual details to cater for different levels of disability. The project will require the student(s) to consult with key stakeholders (vision impaired users and orientation and mobility trainers) and develop a prototype system. The student(s) will need to carry out a user study to evaluate the impact of the VR environment. Working with AR may also be an approach that could be considered for this project if more than one student is interested.

This project is based in the Inclusive Technologies research sub-group at the Caulfield campus.

Pre- and Co-requisite Knowledge:

Explicit knowledge of any specific technologies is not required, however the student must be prepared to investigate and use any new technologies that may be suitable for the project.

Technologies will most likely include:

* 3D modelling, including basic modelling,

* VR system development,

* Unity interactive environment development and deployment on a mobile platform.


Project Title: Exploring the Role of Emotions in Human-Multi-Agent Collaboration

Supervisors: Dr. Leimin Tian, Prof. Sharon Oviatt

Student cohort:

Minor thesis

Background:

With recent advancements in Robotics and Artificial Intelligence, Human-Robot Collaboration (cobots) has drawn growing interest. One crucial factor that influences the efficacy of cobots is their ability to communicate actions and plans in a human-understandable manner. This increases the trust humans maintain towards the cobots, which is key to their willingness to comply and cooperate. In human-human collaboration and communication, emotions serve as a high-level evaluation of events happening in the environment and their influence on one's goals and beliefs. Emotional expressions are intuitive and straightforward for humans to observe, perceive, and understand. Moreover, in a human-multi-agent collaboration setting, including emotions in the Belief-Desire-Intention (BDI) agent model has been shown to improve the agent's ability to generate adaptive behaviors. This in turn benefits the efficacy of the multi-agent system. Thus, in this project, we aim to enable cobots to recognize human emotions and synthesize emotional expressions guided by an emotional BDI model. We will explore how this may benefit Human-Robot Collaboration, multi-agent systems, and explainable AI. This project will be conducted using multiple Cozmo robots.

Aim and Outline:

The student will design experiments to evaluate (1) how humans perceive robots with an emotional BDI model, and (2) efficacy of the human-multi-agent collaboration.

URLs and References:

[1] Zoumpoulaki, A. et al. 2010, May. A multi-agent simulation framework for emergency evacuations incorporating personality and emotions. Hellenic Conference on Artificial Intelligence (pp. 423-428). Springer, Berlin, Heidelberg.

[2] Calvo, R.A et al 2015, The Oxford handbook of affective computing. Oxford Library of Psychology.

[3] Sakellariou, I. et al. 2016, September. The Role of Emotions, Mood, Personality and Contagion in Multi-agent System Decision Making. In IFIP International Conference on Artificial Intelligence Applications and Innovations (pp. 359-370). Springer, Cham.

[4] Kefalas, P. and Sakellariou, I., 2017, September. The Invalidity of Validating Emotional Multi-Agent Systems Simulations. In Proceedings of the 8th Balkan Conference in Informatics (p. 8). ACM.

[5] Görür, O. et al. 2018, February. Social cobots: Anticipatory decision-making for collaborative robots incorporating unexpected human behaviors. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 398-406). ACM.

[6] Olfati-Saber, R., et al. 2007. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1), pp.215-233.

[7] https://github.com/anki/cozmo-python-sdk

Pre- and Co-requisite Knowledge:

Students should have excellent grades and written and oral communication skills. Preference will be given to students who have previous experience conducting robotics research, and who have experience with Python programming. Students with an interest in pursuing PhD research or careers in research are especially encouraged to apply.


Project Title: Modelling Human-Human Emotion Dynamics in Spoken Dialogue

Supervisors: Dr. Leimin Tian, Prof. Sharon Oviatt

Student cohort:

Minor thesis

Background:

Emotions play an important role in human-human communication. Thus, a spoken dialogue system which incorporates emotions has the potential to provide more natural and desirable outputs. Most existing studies on emotion modelling in spoken dialogue systems have focused on recognizing emotions or emotional changes in individual utterances, with the dialogue manager deciding the action policy based on a set of hand-crafted rules. However, the dynamics of emotional interaction in dialogue can extend beyond a single utterance, and hand-crafted rules can be brittle. Therefore, we are motivated to model the dynamics of human-human emotional interaction in spoken dialogue beyond individual utterances, from which the dialogue manager can learn a more flexible model for deciding the action policy. We expect this work to benefit state-of-the-art spoken dialogue systems by improving their interaction quality and naturalness.

Aim and Outline:

The student will model emotion dynamics in human-human dialogue, including the emotional state transitions of each speaker and the emotional state transitions between the two speakers, using the IEMOCAP database. The student will incorporate the emotion dynamic model trained from the database with a spoken dialogue system, and conduct experiments to evaluate the performance of this spoken dialogue system.
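A minimal sketch of the first modelling step: estimating a first-order Markov transition model of emotional states from labelled dialogue turns. The labels and sequences below are invented for illustration; in the project they would come from the IEMOCAP annotations, and the model would be richer than first-order counts:

```python
from collections import Counter

# Maximum-likelihood estimate of P(next emotion | current emotion) from
# sequences of per-turn emotion labels.

def transition_probs(sequences):
    """Return {(cur, nxt): probability} over adjacent label pairs."""
    counts = Counter()   # occurrences of each (cur, nxt) transition
    totals = Counter()   # occurrences of each cur state as a predecessor
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[(cur, nxt)] += 1
            totals[cur] += 1
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}
```

Estimated transitions of this kind (within one speaker and between the two speakers) could then condition the dialogue manager's action policy instead of hand-crafted rules.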

URLs and References:

[1] Young, S. et al. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5), pp.1160-1179.

[2] Moerland, T.M. et al. 2018. Emotion in reinforcement learning agents and robots: a survey. Machine Learning, 107(2), pp.443-480.

[3] Busso, C. et al. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language resources and evaluation, 42(4), p.335.

[4] Majumder, N. et al. 2018. DialogueRNN: An Attentive RNN for Emotion Detection in Conversations. arXiv preprint arXiv:1811.00405.

[5] https://github.com/senticnet/conv-emotion

Pre- and Co-requisite Knowledge:

Students should have excellent grades and written and oral communication skills. Preference will be given to students who have previous experience with Markov Decision Processes, Reinforcement Learning, and Machine Learning. Students with an interest in pursuing PhD research or careers in research are especially encouraged to apply.


Project Title: Using Affective Theory of Mind to Evaluate Performance of Automatic Emotion Recognition Models

Supervisors: Dr. Leimin Tian, Prof. Sharon Oviatt

Student cohort:

Minor thesis

Background:

Most existing studies evaluate the performance of automatic emotion recognition models with numerical metrics over annotated databases, such as recognition accuracy or the correlation coefficient between predictions and annotations. Recently, human-like accuracy has been achieved in a number of emotion recognition tasks. Many emotion recognition models have been created with the goal of improving the quality and naturalness of human-computer/robot interactions. However, the assessment of emotional and social competences in human-human interaction includes dimensions beyond numerical evaluation of emotion recognition accuracy. The "Theory of Mind" (ToM), which represents the ability to understand other people's mental states and to predict their behaviors, has been widely used in Psychology and Social Cognition research on inter- and intra-personal interactions. In particular, various approaches have been proposed to assess Affective ToM, the ability to make inferences about the emotions and feelings of other people. Thus, we are motivated to adopt such Affective ToM assessments as additional evaluations of automatic emotion recognition models.

Aim and Outline:

The aim of this project is to explore more informative approaches for evaluating automatic emotion recognition models. The student will implement state-of-the-art automatic emotion recognition models, such as the Tensor Fusion Network model. These models will then be applied to and evaluated on existing Affective ToM assessment tasks, such as the “Reading the Mind in Films” test.

URLs and References:

[1] Kalbe E. et al. 2010. Dissociating cognitive from affective theory of mind: a TMS study. Cortex, 46(6), pp.769-80.

[2] Baron‐Cohen S. et al. 2001. The “Reading the Mind in the Eyes” test revised version: A study with normal adults, and adults with Asperger syndrome or high‐functioning autism. Journal of child psychology and psychiatry, 42(2), pp.241-251.

[3] Rutherford M.D. et al. 2002. Reading the Mind in the Voice: A study with normal adults and adults with Asperger syndrome and high functioning autism. Journal of autism and developmental disorders, 32(3), pp.189-194.

[4] Golan O. et al. 2006. The “Reading the Mind in Films” task: complex emotion recognition in adults with and without autism spectrum conditions. Social Neuroscience, 1(2), pp.111-123.

[5] https://github.com/A2Zadeh

Pre- and Co-requisite Knowledge:

Students should have excellent grades and written and oral communication skills. Preference will be given to students who have experience in Machine Learning, or have a background in Psychology or Cognitive Science. Students with an interest in pursuing PhD research or careers in research are especially encouraged to apply.


Project Title: Analysing the Time Scale of Emotional Changes

Supervisors: Dr. Leimin Tian, Prof. Sharon Oviatt

Student cohort:

Minor thesis

Background:

Continuous recognition of emotions in naturalistic data has drawn growing interest in current human behavioural analytics research. Because of the temporal dependency of human emotions, time-sequence-based approaches for automatic emotion recognition have achieved leading performance in recent studies. However, the most suitable time scale for modelling emotions in specific contexts remains an open question. Therefore, we are motivated to analyse the time scale of emotional changes in different contexts, which can serve as a reference for future automatic emotion recognition studies.

Aim and Outline:

The student will conduct machine learning experiments to perform visual analytics on the Aff-Wild database. The aim of this research is to understand the time scale of emotional changes in terms of (1) the circumstances under which a person’s emotions tend to stay stable, and (2) the time scale on which emotional changes occur.
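
As a rough illustration of the second question, the sketch below flags change points in a synthetic valence trace by comparing sliding-window means either side of each time step. The window size, threshold, and data are illustrative assumptions, not part of the Aff-Wild pipeline.

```python
def change_points(series, window=5, threshold=0.5):
    """Flag indices where the mean emotion rating shifts sharply
    between the preceding and following window."""
    points = []
    for t in range(window, len(series) - window):
        before = sum(series[t - window:t]) / window
        after = sum(series[t:t + window]) / window
        if abs(after - before) > threshold:
            points.append(t)
    return points

# Synthetic valence trace: stable low, then a rapid shift upward at t = 20.
trace = [0.1] * 20 + [0.9] * 20
cps = change_points(trace)
```

The spacing between detected change points then gives a crude empirical estimate of the time scale on which emotions move.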

URLs and References:

[1] Zafeiriou, S. et al. 2017. Aff-Wild: Valence and arousal ‘in-the-wild’ challenge. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1980-1987. IEEE.

[2] Résibois, M. et al. 2017. The neural basis of emotions varies over time: different regions go with onset- and offset-bound processes underlying emotion intensity. Social cognitive and affective neuroscience, 12(8), pp.1261-1271.

[3] https://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge/

[4] https://github.com/machrisaa/tensorflow-vgg

Pre- and Co-requisite Knowledge:

Students should have excellent grades and written and oral communication skills. Preference will be given to students who have completed coursework in research methods and statistics, and who have previous experiences using tools to conduct visual analytics (e.g., VGG Face) and machine learning experiments (e.g., Tensorflow). Students with an interest in pursuing PhD research or careers in research are especially encouraged to apply.


Project Title: Container Orchestration for Optimized Renewable Energy Use in Clouds

Supervisors: Dr. Adel Nadjaran Toosi

Student cohort:

Both

Background:

Today's society and its organisations are becoming ever more dependent upon Information and Communication Technology (ICT) systems, mostly based in cloud data centres. These cloud data centres, serving as infrastructure for hosting ICT services, consume a large amount of electricity, leading to (a) high operational costs and (b) a high carbon footprint on the environment. In response to these concerns, renewable energy systems have been shown to be extremely useful both in reducing dependence on finite fossil fuels and in decreasing environmental impacts. However, powering data centres entirely with renewable energy sources such as solar or wind is challenging, as they are non-dispatchable and not always available due to their fluctuating nature.

Recently, container solutions such as Docker and container orchestration platforms such as Kubernetes, Docker Swarm, or Apache Mesos are gaining increasing use in cloud production environments. Containers provide a lightweight and flexible deployment environment with performance isolation and fine-grained resource sharing to run applications in clouds. This project intends to develop scheduling and auto-scaling algorithms for container orchestration within clouds based on the availability of renewable energy.

Aim and Outline:

This project aims to match energy consumption with the availability of renewables in data centres by harnessing the diversity in the Quality of Service (QoS) requirements of applications running on containers. We will develop scheduling and auto-scaling policies within the container orchestration platform to optimise renewable energy use while meeting the QoS requirements of applications.

As part of this project, we will implement a small-scale prototype demonstrator using our micro data centre connected to a microgrid (solar panels). This project provides an excellent opportunity for the student to learn cloud backend technologies such as containers, Kubernetes/Docker Swarm and OpenStack, and to work within a multi-disciplinary area spanning software engineering, science and electrical engineering.
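
The scheduling idea can be sketched as a toy greedy policy: deferrable (low-QoS) containers are delayed to the hour with the most forecast solar capacity left, while latency-sensitive containers run immediately. All job names and numbers below are hypothetical; a real implementation would sit behind the Kubernetes or Docker Swarm scheduling APIs.

```python
def schedule(jobs, solar_forecast):
    """Greedy placement of container jobs against a solar forecast.

    jobs: list of (name, cpu_demand, deferrable) tuples.
    solar_forecast: predicted solar capacity for each upcoming hour.
    """
    capacity = list(solar_forecast)
    placement = {}
    for name, demand, deferrable in jobs:
        if deferrable:
            # Defer to the hour with the most remaining solar capacity.
            hour = max(range(len(capacity)), key=lambda h: capacity[h])
        else:
            hour = 0  # Latency-sensitive: no deferral allowed.
        capacity[hour] -= demand
        placement[name] = hour
    return placement

# Toy forecast: solar output peaks at hour 2.
plan = schedule(
    [("web-frontend", 2, False), ("batch-report", 4, True), ("ml-train", 4, True)],
    solar_forecast=[1, 3, 10, 6],
)
```

Here the interactive frontend runs at once, while both batch jobs land in the solar peak; the research question is how far this idea can be pushed while QoS guarantees still hold.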

URLs and References:

Docker, https://www.docker.com

Kubernetes, https://kubernetes.io/

Docker Swarm, https://docs.docker.com/engine/swarm/

Í. Goiri, K. Le, M. E. Haque, R. Beauchea, T. D. Nguyen, J. Guitart, J. Torres and R. Bianchini, “GreenSlot: scheduling energy consumption in green datacenters,” in Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, 2011.

Pre- and Co-requisite Knowledge:

Strong programming skills preferably in Java and Python.

Basic knowledge of distributed systems and cloud computing.

Knowledge and familiarity with Kubernetes or Docker Swarm is a plus.


Project Title:    Learning with opponent awareness in settings with more than 2 agents.

Supervisors: Julian Garcia

Student cohort:

Both

Background:

Learning in environments with multiple agents is inherently more complex than in the standard single-agent case. Optimal policies are best responses that depend on the policies of other agents. This non-stationarity of the problem gives rise to a number of open challenges. One such challenge is cooperation — naive learners will tend to converge on non-cooperative equilibria or fail to converge altogether. A recently proposed learning algorithm known as LOLA performs reasonably well in settings with two agents [1]. The aim of this project is to extend this framework to settings where more than two players interact at the same time.

Aim and Outline:

This project involves extending the LOLA framework to more than two players, studying the game-theoretical aspects of the problem [2], and evaluating solutions experimentally.
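
To see why naive learners fail to cooperate in n-player settings, the sketch below runs independent gradient ascent in a linear public goods game, a standard multi-player social dilemma used here as an illustrative stand-in for the games studied in [1, 2]. With a multiplication factor below the group size, each agent's own payoff gradient points towards defection.

```python
def expected_payoff(p, i, r=1.5):
    """Expected payoff of agent i in an n-player linear public goods game,
    where p[j] is agent j's probability of contributing one unit."""
    n = len(p)
    return r * sum(p) / n - p[i]

def naive_learn(n=4, steps=500, lr=0.05, r=1.5, eps=1e-4):
    """Each agent follows its own payoff gradient while treating the others
    as fixed -- the 'naive learner' baseline that LOLA improves upon."""
    p = [0.5] * n
    for _ in range(steps):
        grads = []
        for i in range(n):
            up, down = p[:], p[:]
            up[i] = min(1.0, p[i] + eps)
            down[i] = max(0.0, p[i] - eps)
            grads.append((expected_payoff(up, i, r) - expected_payoff(down, i, r))
                         / (up[i] - down[i]))
        p = [min(1.0, max(0.0, pi + lr * g)) for pi, g in zip(p, grads)]
    return p

final = naive_learn()  # contribution probabilities collapse towards zero
```

An opponent-aware learner would instead account for how its update shifts the other agents' future policies; generalising that correction beyond two players is the open problem here.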

URLs and References:

[1] Foerster, Jakob, et al. "Learning with opponent-learning awareness." Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2018.

[2] Garcia, Julian, and Matthijs van Veelen. "No Strategy Can Win in the Repeated Prisoner’s Dilemma: Linking Game Theory and Computer Simulations." Frontiers in Robotics and AI 5 (2018).

Pre- and Co-requisite Knowledge:

An interest in game theory, excellent programming skills and a solid mathematical background.


Project Title:    Immersive Visualisation for In-Situ Maintenance

Supervisors: Barrett Ens, Arnaud Prouzeau and Tim Dwyer

Student cohort:

Both

Background:

Mobile workers conducting equipment maintenance often require maintenance records or other information related to the job site, which are typically accessed by laptop computers. Wearable Augmented Reality (AR) displays now allow visualization of such information directly in the user’s environment when and where it’s needed. However, the design and use of such immersive visualisations is not well investigated.

Aim and outline:

This project will investigate the development of immersive visualisations using AR (and/or simulated using Virtual Reality). Visualisations will be created in mock-up environments or using 3D building models, and implemented in prototype systems. The developed visualisations will be evaluated in user studies.

Pre- and Co-requisites:

Programming knowledge of Java or C#, knowledge of Unity programming environment is an asset but not required. Knowledge of Human-Computer interaction is desired but not essential.


Project Title:    Prediction and simulation for building management using Machine Learning

Supervisors: Arnaud Prouzeau, Christoph Bergmeir and Tim Dwyer

Student cohort:

Both

Background:

In large facilities, such as university campuses, building managers are in charge of the monitoring and maintenance of many pieces of mechanical equipment. This includes the Heating, Ventilation and Air Conditioning (HVAC) system, which is in charge of conditioning rooms. This system is composed of equipment ranging from small AC units to large boiler plants, and it continuously produces a significant amount of data. However, such data is currently barely used.

Aim and outline:

This project will investigate the use of Machine Learning algorithms to, first, understand the real behaviour of the HVAC system and, second, simulate its future performance. Such simulations could then be used to optimise its operation and predict failures. Data from current Monash buildings will be used in this project.
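
As a minimal illustration of learning system behaviour from its own data and rolling it forward, the sketch below fits a first-order autoregressive model to a synthetic temperature-deviation trace. A real project would use far richer models and libraries; every name and number here is a placeholder.

```python
def fit_ar1(series):
    """Least-squares estimate of a in x[t+1] ~ a * x[t]."""
    num = sum(series[t + 1] * series[t] for t in range(len(series) - 1))
    den = sum(series[t] ** 2 for t in range(len(series) - 1))
    return num / den

def forecast(series, coeff, horizon):
    """Roll the fitted model forward to simulate future behaviour."""
    out, x = [], series[-1]
    for _ in range(horizon):
        x = coeff * x
        out.append(x)
    return out

# Toy 'temperature deviation from setpoint' decaying as the HVAC responds.
readings = [8.0]
for _ in range(30):
    readings.append(readings[-1] * 0.9)

a = fit_ar1(readings)               # recovers the decay rate from data alone
future = forecast(readings, a, horizon=5)
```

Comparing such simulated trajectories against what the real equipment does is one simple route to both performance prediction and failure detection.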

Pre- and Co-requisites:

R and Python. Knowledge of Machine learning is desired but not essential.


Project Title:  Securing Control Plane in Software-defined Networks Using Trusted Enclaves

Supervisors: Dr. Xingliang Yuan, A/Prof. Carsten Rudolph

Student cohort:

Honours

Background:

Software-defined networking (SDN) operates a control plane to facilitate the management and optimisation for networked applications and infrastructures. However, the control plane is logically centralised and thus becomes a high-value target for cyber attackers. On the one hand, sensitive data on the controller contains critical information of the network infrastructure and private information of traffic flows. On the other hand, a compromised controller could sabotage the network functions and operations. Therefore, there is an urgent call to build a secure and trustworthy control plane for SDN.

Aim and Outline:

In this project, we aim to leverage the latest advancement on trusted computing (Intel SGX) to design a system, which can secure sensitive data (e.g., network topology and flow statistics) maintained at the control plane with practical performance.

URLs and References:

Jagadeesan et al., "A Secure Computation Framework for SDNs", in Proc. of ACM HotSDN, 2014

Costan and Devadas, "Intel SGX Explained." IACR Cryptology ePrint Archive, 2016.

Duan et al., "LightBox: Full-stack Protected Stateful Middlebox at Lightning Speed", arXiv, 2017

Poddar et al., "SafeBricks: Securing Network Functions in the Cloud", in Proc. of USENIX NSDI, 2018

Pre- and Co-requisite Knowledge:

(1) a background in information security and network security; (2) solid hands-on C/C++ skills; (3) knowledge of software-defined networking and trusted computing.


Project Title:  Profit performance analysis and prediction of e-waste recycling jobs

Supervisors:            Chung-Hsing Yeh

Student cohort:

Both

Background:

As one of Australia’s largest e-recyclers, the case company is keen to see how data science, artificial intelligence (AI), and decision analysis techniques can help improve its strategic and operational decisions for profit maximisation.

Aim and Outline:

This project aims to help the case company identify opportunities for profit improvement and select the most profitable recycling jobs by evaluating profit performance of past and current recycling jobs and predicting the profitability of future recycling jobs.

AI unsupervised clustering and supervised predictive techniques will be explored to analyse the costs and profits of the case company’s past recycling jobs in relation to various e-waste products, clients and sites. This profit performance analysis will help identify the most profitable e-waste products, clients and sites, as well as the most profitable recycling jobs with an optimal combination of e-waste products, clients and sites. An intelligent profit prediction model will be developed to help the case company understand causes and effects of its recycling operations in terms of profitability. For a specific recycling job, this model can be used to identify the critical operational factors that will maximise the profitability.
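
A toy version of the unsupervised clustering step might look like the following: 1-D k-means (Lloyd's algorithm) splitting hypothetical per-job profit margins into low- and high-margin groups. The data and function names are invented for illustration.

```python
def kmeans_1d(values, k=2, iters=50):
    """Cluster scalar profit figures into k groups (plain Lloyd's algorithm)."""
    # Spread initial centroids across the sorted values.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Toy per-job profit margins: a low-margin and a high-margin group.
margins = [0.02, 0.03, 0.05, 0.30, 0.35, 0.40]
centroids, clusters = kmeans_1d(margins)
```

In the real project the feature space would combine product type, client and site rather than a single margin figure, and the clusters would feed the supervised profitability prediction model.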

Pre- and Co-requisite Knowledge:

Data science techniques


Project Title: Classifying shredded non-ferrous scrap

Supervisors: Chung-Hsing Yeh

Student cohort:

Both

Background:

Shredded non-ferrous scrap usually contains a combination of various valuable non-ferrous metals, including aluminium, copper, lead, magnesium, stainless steel, nickel, tin or zinc, in elemental or alloyed form. Buying or bidding on shredded non-ferrous scrap at scrapyards requires an estimate of the value of the scrap lot being offered, in terms of the content and weight of valuable metals. This estimate is usually made on-site at scrapyards all over the world, based on buyers’ experience and knowledge. This current practice is time-consuming, cost-ineffective and sometimes leads to profit loss due to inaccurate estimates.

Aim and Outline:

This project aims to develop an intelligent classification system using convolutional neural networks to estimate the content and weight of non-ferrous metals based on pictures taken from a shredded non-ferrous scrap lot.
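
The core building block of such a network is the convolution layer. The sketch below implements a single-channel convolution followed by a ReLU, showing how an edge-detecting kernel responds to the boundary between two materials in a tiny, made-up image; a real system would of course use a deep-learning framework rather than hand-rolled loops.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN
    libraries) of a single-channel image with a small kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Rectified linear activation, applied elementwise."""
    return [[max(0.0, v) for v in row] for row in fmap]

# A vertical-edge kernel responding to the boundary between two materials.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]
feature_map = relu(conv2d(image, edge_kernel))
```

Stacking many such learned kernels, with pooling and a final classification head, is what lets a CNN estimate material composition from scrap-lot photographs.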

URLs and References:

https://www.thebalancesmb.com/an-introduction-to-metal-recycling-4057469

https://www.recyclingtoday.com/article/zorba-mixed-metals-markets/

Pre- and Co-requisite Knowledge:

Image processing in Python


Project Title: Simulating state dynamics of multi-agent systems

Supervisors: Prof David Green, Dr Marc Cheong

Student cohort:

Both

Background:

Social groups consist of agents (people, often referred to as ‘actors’) linked by networks of interactions and relationships. The richness of these networks makes society complex. However, models of social order have usually omitted the pivotal role that inter-personal interactions play. Multi-agent models can simulate all manner of complex dynamics in social networks. They have been used to address social questions, such as: discovering conditions needed to achieve social consensus; exploring the effects of peer pressure on social conformity; and understanding the influence of media on public opinion.

Aim and Outline:

This project will develop simulations of multi-agent networks and use them to address a range of topical social issues. This involves, but is not limited to:

  1. Reviewing the state of the art in multi-agent social network research.
  2. Developing or modifying existing models of multi-agent social networks (in Java, primarily); and using said models to study real-world social (epistemological) issues.
  3. Developing visualisations of said models (in Java, primarily).
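
A minimal example of the kind of dynamics such models capture is DeGroot-style opinion averaging, sketched below in Python for brevity (the project itself targets Java): agents repeatedly move towards the mean opinion of their network neighbours, and a connected network reaches consensus.

```python
def degroot_step(opinions, neighbours, weight=0.5):
    """One round of opinion updating: each agent moves towards the mean
    opinion of its network neighbours."""
    new = []
    for i, op in enumerate(opinions):
        mean_nb = sum(opinions[j] for j in neighbours[i]) / len(neighbours[i])
        new.append((1 - weight) * op + weight * mean_nb)
    return new

def simulate(opinions, neighbours, rounds=100):
    for _ in range(rounds):
        opinions = degroot_step(opinions, neighbours)
    return opinions

# A small connected network of four agents with polarised initial opinions.
neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
final = simulate([0.0, 0.0, 1.0, 1.0], neighbours)
```

Varying the network topology, the update rule, or adding stubborn "media" agents turns this toy into the consensus, peer-pressure and media-influence questions listed above.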

URLs and References:

https://www.monash.edu/news/articles/of-ants-and-men
https://vlab.infotech.monash.edu/ (specifically e.g. https://vlab.infotech.monash.edu/simulations/networks/media-influence/)
https://philpapers.org/rec/ALFTSA

Pre- and Co-requisite Knowledge:

Knowledge in advanced programming is required, including (but not limited to) strong skills in Object Orientation (OO), GUI building, visualisation, and graph generation (Java preferred). Candidates must be strong in research skills, and should preferably demonstrate an interest in a few of the following: social networks, complex systems, social media, or contemporary philosophy.


Project title: Modelling the effect of autonomous vehicles on other road users

Supervisors: Dr John Betts (FIT) Prof. Hai L. Vu (Faculty of Engineering)

Student cohort:

Both

Background:

Underpinned by emerging technologies, connected and autonomous vehicles (CAVs) are expected to introduce significant changes to driver behaviour as well as traffic flow dynamics, and traffic management systems.

Aim and Outline:

This project aims to investigate the impact of connected and autonomous vehicles on traffic flows and evaluate new possibilities for efficiently managing traffic on future urban road networks.

In this project, students will explore and evaluate the impact of this disruptive technology by implementing and integrating new car-following models into an existing traffic simulation to study the behaviour of CAVs and their interaction with other vehicles and their drivers.

Prerequisite Knowledge:

Good programming skills in any modern programming language. Some modelling and simulation experience would be advantageous.


Project title: Decision models for managing large crowds

Supervisors: Dr John Betts (FIT) Prof. Hai L. Vu (Faculty of Engineering)

Student cohort:

Both

Background:

As populations grow and urbanisation increases, large crowds of people are becoming the norm in major cities. Large crowds also often form at sporting, entertainment, cultural or religious events. It is important to plan and develop strategies for such large crowds in order to manage people efficiently and maintain safety.

Aim and Outline:

This project aims to develop a simulation tool that can assist with timely decisions and resource allocation in the emergency management of large crowds within an urban setting.

In this project, students will explore models and their implementation using agent-based simulation to simulate pedestrian behaviour and to develop crowd management strategies for large crowds.

Prerequisite Knowledge:

Good programming skills in any modern programming language. Some modelling and simulation experience would be advantageous.


Project title: The Future City: modelling the impact of disruptive technologies

Supervisors: Dr John Betts (FIT) Prof. Hai L. Vu (Faculty of Engineering)

Student cohort:

Both

Background, Aim and Outline:

Future cities are smart but full of surprises. In this project, students will explore an open source game engine (simcity.com) to build and model a future city. The focus will be on linking this open source software with another open source agent-based package (matsim.org) to evaluate changes in society due to the emergence of disruptive technologies. For example, how are people’s transportation habits affected by the emergence of driverless cars or shared mobility?

References:

http://www.simcity.com/

http://www.matsim.org/

Prerequisite Knowledge:

Good programming skills in any modern programming language. Some modelling and simulation experience would be advantageous.


Project Title: An interdisciplinary approach towards early detection of low mental health and well being.

Supervisors: Dr Jojo Wong, Dr Marc Cheong (FIT)

External:

Mr Caley Sullivan (Monash Alfred Psychiatry Research Centre (MAPrc))

Mr Eddie Robinson (Faculty MNHS, Monash University)

Ms Joanne Byrne (Dept. of Anthropology, La Trobe University)

Student cohort:

Both

Background:

This interdisciplinary project aims to discover behavioural indicators of low mental health and wellbeing (MHWB) - in particular depression - that could be used to develop an early detection and intervention program. Currently, there are few practically available tools to do this on an ongoing basis. In-depth analysis of data can be achieved by combining both quantitative and qualitative techniques. As our interdisciplinary team has a variety of techniques and methods for data acquisition, this is a unique opportunity to bring together techniques from several diverse disciplines, with information technology playing a crucial role, to help solve a critical clinical problem.

Aim and Outline:

This project is divided into two broad streams with two different methodologies. Students can choose between one of the two streams.

- Stream 1: Quantitative analysis

Using quantitative data including (but not limited to) facial features, verbal voice cues, and large quantities of EEG data, can quantitative techniques such as classification and machine learning (including deep neural networks) be used to detect depression?

- Stream 2: Qualitative analysis

Using qualitative techniques and data including (but not limited to) digital ethnography, social media analysis, interviews, and electronic survey instruments, can we understand how depression and low MHWB translate into everyday behaviour?

URLs and References:

https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-017-0110-z

https://www.ncbi.nlm.nih.gov/pubmed/29650507

https://www.cs.cmu.edu/~ylataus/files/TausczikPennebaker2010.pdf

http://dcapswoz.ict.usc.edu/

Pre- and Co-requisite Knowledge:

For Stream 1: a strong knowledge in machine learning and research skills.

For Stream 2: strong research skills, willingness to learn about (and combine) techniques from different disciplines with traditional IT research.

Students from BOTH streams must have a keen interest in the domain of mental health and well-being.


Project Title:    Muscle Stimulation: Continuously Controlling Human Movement

Supervisors: Jarrod Knibbe

Student cohort:

Both

Background:

Electric Muscle Stimulation (EMS) has emerged as a promising interaction technique in Human Computer Interaction (HCI). EMS applies small currents to a user’s muscles to cause contraction. By targeting specific muscles, you can cause the user to move.

So far, EMS has been used to steer people whilst walking, teach people how to use new objects, or help people draw. One of the challenges of EMS, however, is that users become accustomed to it and it stops working effectively - this could be described as a form of repetition suppression. Fully understanding how this works, and how to overcome it, remains an open challenge.

Aim and Outline:

This project aims to explore repetition suppression in Electric Muscle Stimulation. Using the latest hardware for reading brain patterns (through EEG) whilst stimulating the user, this project can help us understand the rate at which people become accustomed, and the ways in which small changes to stimulation patterns impact repetition suppression.

URLs and References:

Lopes, P., Jonell, P., Baudisch, P., Affordance++: Allowing Objects to Communicate Dynamic Use, in Proc. CHI ‘15.

Anderson, BB., Kirwan, CB., Jenkins, JL., Eargle, D., Howard, S., and Vance, A. How polymorphic warnings reduce habituation in the brain: Insights from an fMRI study. In Proc CHI ‘15.

Knibbe, J., Alsmith, A., Hornbaek, K., Experiencing Electrical Muscle Stimulation, in Proc IMWUT ‘18.

Pre- and Co-requisite Knowledge:

Some signal processing experience is a must, and machine learning knowledge would be a bonus. Interest in brain-computer interaction (BCI), psychology, or body-based interaction would also be a bonus.


Project Title:  A Virtual Writing Assistant - Writing ‘Related Work’ so you don’t have to.

Supervisors: Jarrod Knibbe

Student cohort:

Both

Background:

Writing academic papers or reports requires a deep knowledge of existing research and literature. However, it can often feel as though the creative process is stalled by spending hours trawling through literature - getting in the way of fun problem-solving and development tasks. Can a virtual assistant begin to solve this problem, doing the literature trawling for you?

We have scraped the entire proceedings of ACM CHI - the premier conference for Human-Computer Interaction. Using NLP and AI, can we build a virtual agent to suggest relevant literature and literature themes from just limited notes on a new idea? Eventually, can this agent just write a Related Literature section on its own?

Aim and Outline:

This project will explore NLP and deep-learning techniques for mining an academic literature corpus to identify and group relevant literature based on limited user input. This may involve visualisations, developing auto-generated writing, building a conversational agent, etc. Finally, what is this like to actually use - does it work, does it help the writing / ideation process?
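
One plausible starting point for ranking literature against limited notes is TF-IDF weighting with cosine similarity, sketched below over an invented three-paper corpus. Everything here (the tokenisation, the weighting scheme, the documents themselves) is a simplifying assumption rather than the project's prescribed method.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors for a tiny corpus of abstracts."""
    tokenised = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter(word for doc in tokenised for word in set(doc))
    vectors = []
    for doc in tokenised:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse word-weight dictionaries."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "muscle stimulation for haptic feedback in virtual reality",
    "crowdsourced writing assistance on smartwatches",
    "deep learning for text classification of academic papers",
]
note = "notes on electrical muscle stimulation and haptics"
vecs = tfidf_vectors(corpus + [note])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = scores.index(max(scores))  # index of the most relevant paper
```

Scaling this idea to the full CHI proceedings, and moving from keyword overlap to learned embeddings and generated prose, is where the research lies.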

URLs and References:

Nebeling, M, et al., WearWrite: Crowd-Assisted Writing from Smartwatches, in Proc. CHI ‘16.

Olson, JS., et al., How People Write Together Now: Beginning the Investigation with Advanced Undergraduates in a Project Course.

Pre- and Co-requisite Knowledge:

Some relevant NLP or machine learning / AI knowledge is essential. Programming skills in Python are essential. An interest in text mining and recommender systems will help maintain motivation.


Project Title: Lie Detection in Virtual Reality

Supervisors: Jarrod Knibbe

Student cohort:

Both

Background:

Recent research has shown that lie detection can be achieved on mobile devices using only the range of sensors on the device (such as accelerometers, pressure sensing in the screen, etc.), whilst ignoring the content of any input text. Using motion tracking and gaze features (from eye-tracking), can we detect lies in virtual reality?

Aim and Outline:

Implement a range of activities in VR that incentivise lying. Using features from body motion and eye tracking, build a classifier to detect truthful and dishonest / deceptive input. What classification rates can you achieve? Which features are important here?
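
A minimal version of the classification step might extract a single motion feature and apply a nearest-centroid rule, as sketched below on synthetic head-position traces. Real studies such as [1] use much richer feature sets and classifiers; the feature, labels and data here are invented for illustration.

```python
def jitter(trace):
    """Mean absolute frame-to-frame velocity of a head-position trace --
    a crude stand-in for the richer motion features the project would use."""
    return sum(abs(b - a) for a, b in zip(trace, trace[1:])) / (len(trace) - 1)

def train_centroids(traces, labels):
    """Nearest-centroid classifier over the single jitter feature."""
    feats = {"truth": [], "lie": []}
    for t, l in zip(traces, labels):
        feats[l].append(jitter(t))
    return {l: sum(v) / len(v) for l, v in feats.items()}

def classify(trace, centroids):
    f = jitter(trace)
    return min(centroids, key=lambda l: abs(f - centroids[l]))

# Synthetic data: deceptive responses come with more head jitter.
truthful = [[0.0, 0.1, 0.2, 0.3], [0.0, 0.05, 0.1, 0.15]]
deceptive = [[0.0, 0.5, -0.2, 0.6], [0.0, 0.4, -0.3, 0.5]]
model = train_centroids(truthful + deceptive, ["truth", "truth", "lie", "lie"])
verdict = classify([0.0, 0.45, -0.25, 0.55], model)
```

The interesting questions are which VR-specific features (gaze fixations, saccades, hand tremor) carry the signal, and what rates a proper classifier achieves on real participants.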

URLs and References:

Mottelson, A., Knibbe, J., Hornbaek, K., Veritaps: Lie Detection on Mobile Devices, in Proc. CHI 2018.

Mottelson, A., Hornbaek, K., An affect detection technique using mobile commodity sensors in the wild, in Proc. Ubicomp ‘16.

Pre- and Co-requisite Knowledge:
Unity experience is vital. Some machine learning / classifier building experience is important, too. Would suit someone interested in pursuing a career in research or game development.


Project Title: Do Difficult Games Actually Need to be Difficult? Stereotype Threat in Gaming.

Supervisors: Jarrod Knibbe

Student cohort:

Both

Background:

Research has shown that people perform better or worse at hard tasks according to stereotypes - a phenomenon known as ‘Stereotype Threat’. These stereotypes are often controversial, but unfortunately pervasive. For example, telling boys that ‘girls are better at maths’ makes boys perform worse in maths tests. This only applies to ‘hard’ tasks.

This has a potentially interesting impact on video games. When selecting modes at the beginning of a game, you are often presented with something like ‘HARD mode: bots are more intelligent and work together’. Now, you have told me it will be difficult and suggested that the bots may be better than expected. According to Stereotype Threat, this should make me perform worse. In turn, this would suggest that you don’t actually need to make the game harder.

Aim and Outline:

Explore the effect of stereotype threat in game design. Can all game difficulties be the same in practice, varying only in their description and allowing stereotype threat to take care of everything else? (Presumably not, but is there an effect here?)

URLs and References:

Spencer, SJ., Logel, C., Davies, PG., Stereotype Threat, Annual Review Psychology 2016.

Pre- and Co-requisite Knowledge:

An underlying interest in gaming would be important here. Game development experience is important too. Any crowd-sourcing, Mechanical Turk experience would be a bonus. Would suit someone interested in pursuing either research (academia) or a career in game development.


Project Title: Multi-user VR: Allowing free, dynamic movement, while preventing collisions

Supervisors: Jarrod Knibbe

Student cohort:

Both

Background:

Virtual Reality offers so many opportunities for different, fun activities (fencing, shooting, menial jobs, fishing, whatever floats your boat). However, VR is still a fundamentally individual activity that benefits from quite a lot of space. If multiple people use VR in a shared physical space, there is a large risk of painful collisions.

Bayesian Information Gain has been used in HCI to improve object selection through clever ‘predictions’. Can we apply Bayesian Information Gain to a 3D space, to prevent collisions? Can we enable free, unconstrained movement in multi-user VR settings, without a risk of collisions?

Aim and Outline:

Explore opportunities for using Bayesian Information Gain in 3D space to predict user motion. Explore real-time implementations.
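
The quantity underlying Bayesian Information Gain is the entropy reduction a new observation brings to the system's belief about the user's goal. The sketch below performs one Bayes update of a belief over three hypothetical target zones given an observed heading; the likelihood values are invented for illustration, and BIGnav itself computes an expectation over possible observations rather than a single update.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete belief distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, likelihood):
    """Bayes update of the belief over which zone a user is heading to."""
    joint = {z: prior[z] * likelihood[z] for z in prior}
    total = sum(joint.values())
    return {z: v / total for z, v in joint.items()}

def information_gain(prior, likelihood):
    """Reduction in uncertainty about the user's goal after one observation."""
    return entropy(prior) - entropy(posterior(prior, likelihood))

# Three candidate zones; the observed heading strongly suggests zone 'A'.
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
heading_likelihood = {"A": 0.8, "B": 0.15, "C": 0.05}
belief = posterior(prior, heading_likelihood)
gain = information_gain(prior, heading_likelihood)
```

Maintained per user and per moment, such a belief is what would let a multi-user system reroute or warn people before predicted trajectories intersect.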

URLs and References:

Liu, W., et al., BIGnav: Bayesian Information Gain for Guiding Multiscale Navigation, in Proc. CHI ‘17.

Pre- and Co-requisite Knowledge:

Unity experience and some knowledge of Bayesian statistics and probability are crucial here. Would suit someone interested in pursuing a career in academia or game development.


Project Title: Exploration of 3D Scatterplots in Virtual Reality

Supervisors: Maxime Cordeil, Arnaud Prouzeau, Tim Dwyer, Barrett Ens

Student cohort:

Both

Background:

Three-dimensional scatterplots suffer from well-known perception and usability problems. In particular, overplotting and occlusion, mainly due to density and noise, prevent users from properly perceiving the data. Thanks to accurate head and hand tracking, immersive Virtual Reality (VR) setups provide new ways to interact and navigate with 3D scatterplots. VR also supports additional sensory modalities such as haptic feedback.

Aim and Outline:

A first prototype using haptic feedback to assess local density has already been developed and evaluated. The aim of this thesis is either to extend this prototype to improve scatterplot exploration using haptics, or to explore other modalities offered by Virtual Reality (such as sound). The proposed design will be evaluated in user studies.
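
The density-to-haptics mapping at the heart of such a prototype can be sketched as follows (in Python for brevity; the radius, saturation count, and point cloud are illustrative assumptions): count the scatterplot points near the hand-held probe and map that count to a vibration amplitude.

```python
def local_density(points, probe, radius=1.0):
    """Number of scatterplot points within `radius` of the probe position."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sum(1 for p in points if dist2(p, probe) <= radius ** 2)

def vibration_intensity(points, probe, radius=1.0, max_count=10):
    """Map local density to a 0..1 haptic amplitude, saturating at max_count."""
    return min(1.0, local_density(points, probe, radius) / max_count)

# A dense cluster near the origin plus one outlier.
cloud = [(0, 0, 0), (0.2, 0.1, 0.0), (0.1, 0.3, 0.2), (5, 5, 5)]
amp_dense = vibration_intensity(cloud, (0, 0, 0))    # buzzes noticeably
amp_empty = vibration_intensity(cloud, (10, 10, 10))  # silent
```

The same density estimate could instead drive a sound modality (pitch or volume), which is one of the extensions this thesis could explore.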

Pre- and Co-requisite Knowledge:

Programming knowledge of Java or C#, knowledge of Unity programming environment is an asset but not required. Knowledge of Human-Computer interaction is desired but not essential.


Project Title: Eye-tracking based data selection in Augmented Reality

Supervisors: Dr Maxime Cordeil, Dr Barrett Ens (FIT)

Student cohort:

Both

Background:

The selection of data in visualisations is fundamental to efficiently supporting data exploration. In current desktop visualisation applications such as Tableau, Qlik or Microsoft BI, users use the mouse to draw a rectangle and select data points of interest.

This problem is well explored for 2D visualisations on desktop PCs; however, it remains a challenge to select data points in a 3D visualisation. Virtual and Augmented Reality technology allows users to perceive 3D data as if it were anchored in the real 3D environment. Current data selection solutions on these platforms rely on hand-tracked controllers, but repetitive arm movements generate fatigue for users and precision may not be optimal.

In this project, we want to explore how eye-tracking technology, which tracks a user’s gaze, can be leveraged to perform 3D data selections. In particular, the project sets out to explore how efficiently gaze-based and gesture interactions can be combined to select data points in 3D visualisations.

Aim and Outline:

Design novel interactions to select data using gaze (via eye-tracking technology), develop a research platform to explore different interaction designs, and evaluate the designs with quantitative and qualitative methods.

The implementation will be done in Unity 3D using the Immersive Analytics Toolkit, developed at Monash.

URLs and References:

  1. http://www.hci.uni-wuerzburg.de/download/evaluation-of-eyetrackers-JVRB08.pdf
  2. https://www.tableau.com/


Pre- and Co-requisite Knowledge: Applicants should have a strong background in one or more of the following

-       C#/Java programming

-       Computer Graphics

-       3D editors, e.g. Unity

-       Statistics

Applicants with knowledge of Augmented/Virtual Reality will be looked upon particularly favourably.


Title: Security Analysis of Smart Contracts in Blockchain Systems

Supervisors: Li Li

Student cohort:

Both

Background:

Blockchain has recently become a very hot topic, and smart contracts have been proposed as the most promising mechanism for blockchain platforms. Since smart contracts will potentially be used by many users, there is a need to ensure that the smart contracts deployed on blockchains are secure.

Aim and Outline:

This project aims to provide a system that automatically identifies security issues in smart contracts, both within a platform and across different platforms. Based on our previous experience (e.g., with clone behaviour in mobile apps), developers who clone existing code are unlikely to thoroughly understand it before reusing it. As a result, vulnerable code and bugs are cloned as well and spread in the wild, where they are then very difficult to fix. Therefore, there is a need to automatically identify security issues in smart contracts.

URLs and References:

http://www4.comp.polyu.edu.hk/~csxluo/EthereumGraphAnalysis.pdf

http://www4.comp.polyu.edu.hk/~csxluo/Gasper.pdf

Pre- and Co-requisite Knowledge:

Good Programming Skills

Title: Fixing compatibility issues in Android apps

Supervisors: Li Li

Student cohort:

Both

Background:

Because of heavy fragmentation in the Android ecosystem (e.g., different mobile OS versions and customised OEM versions running on devices from different manufacturers), mobile apps suffer from serious compatibility issues and recurrently crash on some devices.

Aim and Outline:

This work aims to find mechanisms for automatically fixing compatibility issues in Android apps. More specifically, we would like to first summarise the common patterns that are likely to introduce compatibility problems (e.g., resulting in crashes), and then design and implement a prototype tool that automatically fixes such problems.

URLs and References:

http://lilicoding.github.io/papers/li2018cid.pdf

http://sccpu2.cse.ust.hk/andrewust/files/ASE2016.pdf

Pre- and Co-requisite Knowledge:

Good Java Programming Skills


Title: Constructing Knowledge Graph for Android: Analyses and Applications

Supervisors: Li Li

Student cohort:

Both

Background:

The success of Android, on the one hand, brings several benefits to app developers and users, while on the other hand, it makes Android the target of choice for attackers and opportunistic developers. To cope with this, researchers have introduced various approaches (e.g., privacy leak identification [1], ad fraud detection, etc.) to secure Android apps so as to keep users from being infected [2]. All of the above approaches require a reliable benchmark dataset to evaluate their performance. Unfortunately, it is not easy to build such a targeted dataset from scratch. As a result, researchers often apply their approaches to randomly selected apps, including both relevant and irrelevant samples. Much effort is hence wasted analysing the irrelevant ones. As an example, many researchers now rely on AndroZoo [3], which provides over 5.8 million Android apps for the community, to obtain experimental samples. If a researcher is only interested in apps that are obfuscated and have used reflection in their code, she still has to download all the apps and then filter for the ones of interest. Therefore, to supplement this work, we plan to represent the app metadata via a knowledge graph and share it with the community so that our fellow researchers can quickly search for the apps they are interested in.

Aim and Outline:

The aim of this work is to provide a knowledge graph of Android apps for our fellow researchers working in the field of mobile app analysis, allowing them to quickly search for relevant artefacts so as to facilitate their research in various ways, e.g., by searching for app samples exactly suited to their experiments.

Outline:

This project comprises three aspects:

Aspect 1: To design and implement a prototype tool called KnowledgeZooClient, aiming to extract metadata from Android apps and integrate it into a graph database (i.e., a knowledge graph). We have already implemented a prototype version that is available on GitHub [4]. Students are expected to contribute continuously to this open project.

Aspect 2: Thanks to Aspect 1, we are able to build a knowledge graph of Android apps. In this project, we will release the constructed knowledge graph as an online service, namely KnowledgeZoo. Students working on this aspect are expected to maintain the online service and perform various empirical studies on top of the graph, so as to obtain empirical knowledge that cannot easily be obtained otherwise.

Aspect 3: This aspect aims to implement various applications on top of KnowledgeZoo. These applications need to go one step further to make the knowledge graph smarter. For example, the potential applications can introduce new relationships derived from existing ones through graph mining. Based on the Java packages and the signing certificate, we can introduce "similar" or "repackaged" relationships between APK nodes.
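As a hedged illustration of this last idea, the sketch below derives "similar" edges from Java package sets using Jaccard similarity. The APK names, package names and the 0.8 threshold are invented for the example and are not part of KnowledgeZoo.

```python
# Hypothetical sketch: deriving "similar" edges between APK nodes from
# their Java package sets. All names and the threshold are illustrative.

def jaccard(a, b):
    """Jaccard similarity between two sets (0.0 for two empty sets)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def derive_similar_edges(apk_packages, threshold=0.8):
    """Return (apk1, apk2) pairs whose package sets overlap heavily."""
    apks = sorted(apk_packages)
    edges = []
    for i, x in enumerate(apks):
        for y in apks[i + 1:]:
            if jaccard(apk_packages[x], apk_packages[y]) >= threshold:
                edges.append((x, y))
    return edges

apks = {
    "app_a.apk": {"com.example.core", "com.example.ui", "com.ads.sdk"},
    "app_b.apk": {"com.example.core", "com.example.ui", "com.ads.sdk"},  # clone
    "app_c.apk": {"org.other.lib"},
}
print(derive_similar_edges(apks))  # [('app_a.apk', 'app_b.apk')]
```

In a real deployment the resulting pairs would be written back to the graph database as new edges; signing-certificate equality could be combined with package similarity to distinguish "similar" from "repackaged".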

URLs and References:

[1] Li Li, Alexandre Bartel, Tegawendé Bissyandé, Jacques Klein, Yves Le Traon, Steven Arzt, Siegfried Rasthofer, Eric Bodden, Damien Octeau and Patrick McDaniel, IccTA: Detecting Inter-Component Privacy Leaks in Android Apps, The 37th International Conference on Software Engineering (ICSE 2015)

[2] Li Li, Tegawendé F. Bissyandé, Mike Papadakis, Siegfried Rasthofer, Alexandre Bartel, Damien Octeau, Jacques Klein, Yves Le Traon, Static Analysis of Android Apps: A Systematic Literature Review, Information and Software Technology, 2017

[3] Li Li, Jun Gao, Médéric Hurier, Pingfan Kong, Tegawendé F Bissyandé, Alexandre Bartel, Jacques Klein, Yves Le Traon, AndroZoo++: Collecting Millions of Android Apps and Their Metadata for the Research Community, arXiv preprint arXiv:1709.05281, 2017

[4] https://github.com/lilicoding/KnowledgeZooClient

Pre- and Co-requisite Knowledge:

Shell

Python or Java


Estimating high dimensional linear regression models using Bayesian inference

Supervisors: Daniel Schmidt

Student Cohort:

Both

Background:

High dimensional regression models are increasingly important in the current age of “big data”, both as analysis tools for problems with many predictors, as well as building blocks within other models such as deep neural networks. When there are many more predictors than samples, it is crucial to use regularisation, or penalisation, methods that induce “sparsity” -- that is, force many model coefficients to be zero [1]. The Bayesian framework allows us to define penalisation methods with excellent theoretical properties.

Aims and Outline:

This project will look at using the expectation-maximisation (EM) algorithm [2] to build fast, novel algorithms to estimate sparse linear models - potentially with hierarchical groupings of the predictors - using state-of-the-art Bayesian methods. The resulting methods are likely to be amongst the best techniques for this problem that currently exist. This project would be most suited to students with good mathematical and programming skills.
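To give a flavour of the approach (a minimal sketch, not the project's actual method): for a Laplace (lasso-type) prior written as a scale mixture of normals, the E-step yields expected inverse scales for each coefficient, and the M-step reduces to a weighted ridge regression.

```python
import numpy as np

# Minimal EM-style sketch for a sparse linear model under a Laplace prior.
# lam, the iteration count and the synthetic data are all illustrative.

def em_sparse_regression(X, y, lam=5.0, n_iter=100, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # start from least squares
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        w = lam / (np.abs(beta) + eps)             # E-step: expected inverse scales
        beta = np.linalg.solve(XtX + np.diag(w), Xty)  # M-step: weighted ridge
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
true_beta = np.zeros(10)
true_beta[:3] = [3.0, -2.0, 1.5]
y = X @ true_beta + 0.1 * rng.standard_normal(100)

beta = em_sparse_regression(X, y)
# coefficients 0-2 stay large; the remaining seven are shrunk towards zero
```

Hierarchical groupings of predictors would change only the E-step, by sharing a scale across each group rather than giving every coefficient its own.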

URLs and References:

[1] “Shrink globally, act locally”, N. G. Polson and J. G. Scott, http://faculty.chicagobooth.edu/nicholas.polson/research/papers/Bayes1.pdf

[2] “Maximum Likelihood from Incomplete Data via the EM Algorithm”, A. P. Dempster, N. M. Laird and D. B. Rubin, http://web.mit.edu/6.435/www/Dempster77.pdf

Pre-requisite knowledge:

Ability to program (MATLAB/R/Python); linear regression; a reasonable understanding of Bayesian statistics

Learning which predictors are important in a linear regression using minimum message length

Supervisors: Daniel Schmidt

Student Cohort: Both

Background:

Linear regression models remain (and will likely continue to remain) one of the most important building blocks in statistical modelling. They benefit from a high degree of interpretability and good performance in high dimensions, even with relatively little data. Selecting which predictors are important and which ones are not remains an important aspect of linear modelling. The minimum message length (MML) principle [1], developed here at Monash, is a powerful tool for quantifying the fit of a model to data. Recent work on linear regression within the MML framework [2] has demonstrated excellent performance while remaining computationally straightforward.

Aims and Outline:

This project would aim to develop/extend/test methodology for selecting a plausible set of predictors for regression models using the minimum message length idea. The ideas from [2] will serve as a suitable building block. There are several directions that the student could take; for example, they could:

- explore or examine the performance of MML based model selection methods within the context of search procedures such as the lasso;

- implement a novel MML version of the lasso method;

- use the MML idea to quantify the plausibility of predictors;

- implement the MML model selection criterion within new subset-search technology [3]

This project would be most suited to students with good mathematical and programming skills.
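The search component can be illustrated with a small, self-contained sketch: an exhaustive best-subset search scored by a simple two-part code length. The BIC-style score below is only a crude stand-in for the MML87-based criterion of [2], but the structure of the search is the same.

```python
import numpy as np
from itertools import combinations

def code_length(X, y, subset):
    """Two-part score: data fit (negative log-likelihood up to constants)
    plus a cost for the number of fitted parameters. A crude stand-in for
    a proper MML message length."""
    n = len(y)
    if subset:
        Xs = X[:, subset]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
    else:
        rss = np.sum(y ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * len(subset) * np.log(n)

def best_subset(X, y, max_size=3):
    """Exhaustively score all subsets up to max_size; return the shortest code."""
    p = X.shape[1]
    candidates = [list(c) for k in range(max_size + 1)
                  for c in combinations(range(p), k)]
    return min(candidates, key=lambda s: code_length(X, y, s))

# Synthetic example: only features 0 and 2 carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.standard_normal(80)
best = best_subset(X, y)
print(best)  # with this strong signal, the subset contains features 0 and 2
```

Reference [3] is about making exactly this kind of search feasible for much larger numbers of predictors, where exhaustive enumeration is hopeless.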

URLs and References:

[1] “Statistical and Inductive Inference by Minimum Message Length”, C.S.Wallace

[2] “A Minimum Message Length Criterion for Robust Linear Regression”, C.K.Wong, E.Makalic, D.F.Schmidt, https://arxiv.org/pdf/1802.03141.pdf

[3] “Best Subset Selection via a Modern Optimization Lens”, D.Bertsimas, A. King and R. Mazumder, https://arxiv.org/pdf/1507.03133.pdf

Pre-requisite knowledge:

Ability to program (MATLAB/R/Python); linear regression; a reasonable understanding of Bayesian statistics


Project Title:    Modelling gene networks controlling heart formation

Supervisors: Dr. Hieu Nim and Dr. Mirana Ramialison

Student cohort:

Both

Background:

The heart is a complex organ and its formation is controlled by a network of genes that remains to be determined. Big data from genomic sequencing are being generated to decrypt the components of this network, but there is a lack of cutting-edge bioinformatics technologies to mine these datasets and enable the discovery of new genes important for heart formation.

Aim and Outline:

The aim of this project is to design novel computational pipelines to integrate big sequencing data in order to identify DNA sequence patterns that, if mutated, can lead to heart disease.

URLs and References:

https://research.monash.edu/en/persons/hieu-nim

http://www.armi.org.au/research-leadership/ramialison-group

https://research.monash.edu/en/persons/mirana-ramialison/publications/

Pre- and Co-requisite Knowledge:

Programming knowledge, python, R. Interest in science and medicine. Teamwork and collaboration.


Project Title: Network-based algorithm for drug discovery for malaria

Supervisors: Dr. Hieu Nim, Prof. Graham Farr, Prof. Christian Doerig

Student cohort:

Both

Background:

Biological networks play an important role in malaria infection in humans, where effective drug combinations are yet to be discovered. In the past, biological network topology and associated data have been sparse due to their high cost and complexity. Rapid advances have been made in recent years in network medicine, creating an ideal opportunity for algorithm-based network analysis in the search for drug targets.

Aim and Outline:

The student is expected to integrate malaria data from collaborators at the Biomedicine Discovery Institute with biological networks obtained from NCBI PubMed literature. An example of such a biological network is at http://msb.embopress.org/content/6/1/453. Other network databases, including KEGG and SIGNOR, can be integrated as appropriate. The network will then need to be analysed, using various graph theory techniques, to study the topology and potentially identify “interesting nodes and edges” of the network. These could be potential drug targets, which can be validated in the wet lab by our collaborators.
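One graph-theoretic technique of the kind mentioned above is betweenness centrality, which flags nodes lying on many shortest paths -- natural candidates for "interesting nodes". Below is a minimal, self-contained sketch using Brandes' algorithm; the toy "hub" network is made up, whereas real input would come from the literature-derived interaction networks.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for unweighted betweenness centrality.
    graph: dict mapping node -> list of neighbours (undirected; each
    pair of endpoints is counted from both sides, so values are doubled)."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        stack = []
        pred = {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0); sigma[s] = 1
        dist = dict.fromkeys(graph, -1); dist[s] = 0
        queue = deque([s])
        while queue:                          # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:    # w reached via a shortest path
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = dict.fromkeys(graph, 0.0)
        while stack:                          # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy network: one hypothetical protein interacting with four others.
toy = {
    "hub": ["p1", "p2", "p3", "p4"],
    "p1": ["hub"], "p2": ["hub"], "p3": ["hub"], "p4": ["hub"],
}
bc = betweenness(toy)
print(max(bc, key=bc.get))  # the hub lies on every shortest path
```

On real networks, nodes with unusually high centrality relative to their degree are often the most interesting to shortlist for wet-lab validation.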

URLs and References:

https://research.monash.edu/en/persons/hieu-nim

http://msb.embopress.org/content/6/1/453

http://snap.stanford.edu/

https://signor.uniroma2.it/

https://www.genome.jp/kegg/pathway.html

Pre- and Co-requisite Knowledge:

Programming knowledge, python, R. Interest in science and medicine. Teamwork and collaboration.


Project Title: Algorithms to predict outcome from clinical data in lupus

Supervisors: Dr. Hieu Nim, A/Prof. Alberta Hoi (Monash Health), Prof. Eric Morand (Monash Health)

Student cohort:

Both

Background:

Systemic lupus erythematosus (SLE, lupus) is a severe auto-immune disease without effective treatments. This is partially caused by data complexity, where many parameters can contribute to the disease outcome in a time-dependent manner. This presents an opportunity for machine-based extraction of useful knowledge from complex data.

Aim and Outline:

The student is expected to integrate and analyse data from collaborators at Monash Health. The data are highly complex (demographic information, laboratory tests, physicians’ assessments) and time-dependent. The student will implement predictive algorithms to analyse the data, and perform cross-validation to evaluate the effectiveness of the predictions.
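As a minimal sketch of the evaluation loop only: k-fold cross-validation wrapped around a placeholder nearest-centroid classifier. The synthetic features and labels below are illustrative stand-ins; the real project would swap in proper predictive models and the actual clinical data.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Placeholder model: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = list(model)
    centroids = np.stack([model[c] for c in classes])
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.array([classes[i] for i in d.argmin(axis=1)])

def k_fold_accuracy(X, y, k=5, seed=0):
    """Shuffle, split into k folds, hold each fold out once, average accuracy."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = nearest_centroid_fit(X[train], y[train])
        pred = nearest_centroid_predict(model, X[test])
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Synthetic two-class data with well-separated means.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(k_fold_accuracy(X, y))  # well-separated classes: accuracy near 1.0
```

For time-dependent clinical data, the split would instead respect time (train on earlier visits, test on later ones) to avoid leaking the future into the model.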

URLs and References:

https://research.monash.edu/en/persons/hieu-nim

https://www.mja.com.au/journal/2017/206/5/australian-lupus-registry-and-biobank-timely-initiative

https://doi.org/10.1136/annrheumdis-2013-205171

Pre- and Co-requisite Knowledge

Programming knowledge, python, R. Interest in science and medicine. Teamwork and collaboration.


Project Title: Privacy-centric cloud platform for EEG and clinical data

Supervisors: Dr. Hieu Nim, Mr. Caley Sullivan (plus mentorship from inter-faculty Brain Stimulation Project team)

Student cohort:

Both

Background:

This project is part of a proof-of-concept brain research study involving the development and deployment of a custom hardware and software solution to trial novel treatments, along with the means to remotely capture, store, access and analyse the multitude of clinically relevant data produced in the process.

Aim and Outline:

Device data includes:

*    Self-reported rating scales

*    Measures of functional brain activity (EEG), event related and/or ongoing basal activity

*    Treatment parameters (transcranial electrical stimulation, phase, amplitude, frequency, offset, waveform)

*    Cognitive performance (Game/task/behavioral data)

*    Phenotyping data (auxiliary device sensors: audio, video, accelerometer, gyroscope, etc.)

Clinical data includes:

*    demographic information

*    patient history

*    medication history

*    diagnostic questionnaire

*    clinician reports

Select portions of this information may be digitised and added to the device database via a research interface, given a set of de-identified unique hardware ID (HID), study ID (SID) and patient/user ID (UID) numbers. The student is expected to work with Monash eSolutions, and potentially with external software consulting companies, to develop the cloud platform.

URLs and References:

http://www.maprc.org.au/brain-stimulation-treatment-trials

https://research.monash.edu/en/persons/hieu-nim/

Pre- and Co-requisite Knowledge

Programming knowledge, cloud platform (AWS or Google Cloud or Azure). Interest in neuroscience and medicine. Teamwork and collaboration.


Project Title: Surface-Integrated Layouts in Augmented Reality

Supervisors: Barrett Ens and Kim Marriott

Student cohort:

Both

Background:

Augmented reality (AR) allows virtual constructs to be overlaid onto the real-world environment. One potential application for this technology is to allow multiple application windows to be distributed around a user’s local environment. Such ‘surface-integrated’ window layouts may be useful for many applications, including mobile industrial work and data visualisation.

Aim and outline:

The goal of this project is to develop new algorithms to manage surface-integrated layouts for AR content. The layouts will balance multiple factors such as spatial configuration of available surfaces, background surface colour and texture, and the context of users and other occupants in the environment. The developed algorithms will be implemented in prototype AR systems and evaluated in user studies.
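One small ingredient of such a layout manager might look like the following sketch, which scores candidate surfaces by a weighted sum of the factors above. The factor names, weights and surfaces are purely illustrative, not part of any existing system.

```python
# Hypothetical surface-scoring sketch for surface-integrated AR layouts.
# Each surface is described by factor scores in [0, 1]; higher is better.

def score_surface(surface, weights):
    """Weighted sum of layout factors for one candidate surface."""
    return sum(weights[f] * surface[f] for f in weights)

weights = {"visible_area": 0.5, "uniform_texture": 0.3, "user_facing": 0.2}
surfaces = {
    "wall_left":  {"visible_area": 0.9, "uniform_texture": 0.8, "user_facing": 0.4},
    "desk":       {"visible_area": 0.6, "uniform_texture": 0.9, "user_facing": 0.9},
    "wall_right": {"visible_area": 0.3, "uniform_texture": 0.2, "user_facing": 0.1},
}
best = max(surfaces, key=lambda s: score_surface(surfaces[s], weights))
print(best)  # 'wall_left'
```

A real layout algorithm would additionally assign multiple windows jointly (an optimisation problem rather than a per-surface ranking) and re-weight the factors as users and occupants move through the environment.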

Pre- and Co-requisites:

Programming knowledge of Java or C#; knowledge of the Unity programming environment is an asset but not required.


Project Title: Immersive Visualisation for In-Situ Maintenance

Supervisors: Barrett Ens, Arnaud Prouzeau and Tim Dwyer

Student cohort:

Both

Background:

Mobile workers conducting equipment maintenance often require maintenance records or other information related to the job site, which is typically accessed via laptop computers. Wearable Augmented Reality (AR) displays now allow visualisation of such information directly in the user’s environment, when and where it’s needed. However, the design and use of such immersive visualisations is not well investigated.

Aim and outline:

This project will investigate the development of immersive visualisations using AR (and/or simulated using Virtual Reality). Visualisations will be created in mock-up environments or using 3D building models, and implemented in prototype systems. The developed visualisations will be evaluated in user studies.

Pre- and Co-requisites:

Programming knowledge of Java or C#; knowledge of the Unity programming environment is an asset but not required. Knowledge of Human-Computer Interaction is desired but not essential.


Graph layout for SBGN Entity Relationship maps

Supervisors: Tobias Czauderna, Michael Wybrow

Student cohort:

Both

Background:

The Systems Biology Graphical Notation (SBGN) is an emerging standard for graphical representations of biochemical and cellular networks studied in systems biology. Three different views cover several aspects of the represented processes at different levels of detail: 1) Process Description maps describe temporal aspects of biochemical interactions in a network, 2) Entity Relationship maps show the relationships between entities in a network and how entities influence relationships, and 3) Activity Flow maps depict the flow of information between entities in a network. SBGN helps to communicate biological knowledge more efficiently and accurately between different research communities in the life sciences.

Aim and Outline

The aim of the project is to develop a graph layout algorithm for SBGN Entity Relationship maps. These maps can be considered hypergraphs, with edges joining more than two nodes. Different approaches to this problem are possible: the layout algorithm can be developed directly on the hypergraphs, or the hypergraphs can first be transformed into graphs whose edges join only two nodes. For either approach, constraint-based layout techniques developed by members of the Immersive Analytics Lab at FIT and available in the Adaptagrams library could be used.
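The second approach can be illustrated concretely. The sketch below performs a "star expansion": each hyperedge is replaced by a dummy node connected to all of the hyperedge's members, producing an ordinary graph that a standard layout engine such as those in Adaptagrams could then process. Node names and the dummy-node naming scheme are illustrative.

```python
# Star expansion: hypergraph -> ordinary graph with binary edges only.

def star_expansion(nodes, hyperedges):
    """nodes: list of node ids; hyperedges: list of sets of node ids.
    Returns (all_nodes, binary_edges) where each hyperedge has become a
    dummy node joined to every member of that hyperedge."""
    all_nodes = list(nodes)
    edges = []
    for i, he in enumerate(hyperedges):
        dummy = f"he_{i}"                 # one dummy node per hyperedge
        all_nodes.append(dummy)
        edges.extend((dummy, v) for v in sorted(he))
    return all_nodes, edges

nodes = ["A", "B", "C", "D"]
hyperedges = [{"A", "B", "C"}, {"C", "D"}]
graph_nodes, graph_edges = star_expansion(nodes, hyperedges)
print(graph_edges)
```

A layout computed on the expanded graph can be mapped back to the original map by drawing each hyperedge through the position of its dummy node; constraints (e.g. keeping dummy nodes small or centred among their members) would be added on top.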

URLs and References

http://www.sbgn.org

https://www.adaptagrams.org

http://ialab.it.monash.edu/

Pre- and Co-requisite Knowledge

Programming knowledge, interest in graph visualisation and graph layout


Project Title: A Child Protection Recordkeeping App for Parents and Family Members (24 pts)

Supervisors: Associate Professor Joanne Evans and Dr Greg Rolan

Student cohort:

Both

Background:

Within the faculty’s Centre for Organisational and Community Informatics, the Archives and the Rights of the Child Research Program is investigating ways to re-imagine recordkeeping systems in support of responsive and accountable child-centred and family focused out-of-home care. Progressive child protection practice recognises the need, where possible, to support and strengthen parental engagement in the system in order to ensure the best interests of the child.

‘No single strategy is of itself effective in protecting children. However, the most important factor contributing to success is the quality of the relationship between the child’s family and the responsible professional’ (Dartington, 1995 quoted in Qld Department of Communities, Child Safety and Disability Services 2013).

Child protection and court processes generate a mountain of documentation that can be overwhelming and confusing to navigate, and hard to manage and keep track of, especially if parents are also dealing with health and behavioural issues. Staying on top of the paperwork handed out by workers, providing the documentation the system demands in a timely fashion, and ensuring that records are created to document interactions could be one way in which child protection outcomes are improved.

Aim and Outline:

In this exploratory project, we would like to investigate how digital and networked information technologies could be used to support the recordkeeping needs of parents in child protection cases. It will involve the use of a design science approach to develop a model of the information architecture of a recordkeeping system for parents. This may entail the creation of a prototype, utilising existing and/or new open source components, as a demonstrator for further research and development.

Challenges include investigating and dealing with the digital, recordkeeping, and other literacies of families involved in child protection. The other challenge is that there will not be time to form the deep, trusted relationships that are required to do this in a truly participatory manner.  The project will rely on secondary sources such as literature and subject matter experts --- rather than interacting with parents and families directly.

URLs and References:

●     Assistant Director Child Protection. (2017). Child Protection Manual. Retrieved February 8, 2018, from http://www.cpmanual.vic.gov.au/

●     Burstein, F. (2002). System development in information systems research. In K. Williamson (Ed.), Research Methods for Students and Professionals: Information Management and Systems (pp. 147–158). Wagga Wagga, N.S.W.: Centre for Information Studies, Charles Sturt University.

●     Gurstein, M. (2003). Effective use: A community informatics strategy beyond the Digital Divide. First Monday, 8(12). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/1107

●     Hinton, T. (2013). Parents in the child protection system. Social Action and Research Centre, Anglicare Tasmania. Retrieved from https://www.socialactionresearchcentre.org.au/wp-content/uploads/Parents-in-the-child-protection-system.pdf

●     Hersberger, J. A. (2013). Are the economically poor information poor? Does the digital divide affect the homeless and access to information? Presented at the Proceedings of the Annual Conference of CAIS/Actes du congrès annuel de l’ACSI.

●     Western Suburbs Legal Service. (2008). Child protection : a guide for parents and family members. Newport, Vic.: Western Suburbs Legal Service.

Pre- and Co-requisite Knowledge:

The ideal candidate will have a background in one or more of software development, data analytics, and recordkeeping metadata modelling, with a keen desire to expand their knowledge and skills into the other areas encompassed by this research project.

This is not so much a technical project as one that engages with the societal and community needs of the target audience. It would suit someone from an MBIS background with an interest in community informatics, recordkeeping metadata modelling and/or value-sensitive research and design, coupled with a keen desire to expand their existing knowledge and skills into the other areas encompassed by this research project.


Project title: Immersive visualisation of thunderstorms (18 or 24 pts)

Supervisor: Bernie Jenny (FIT) and Christian Jakob (Atmospheric Sciences)

Student cohort:

Both

Background:

Thunderstorms in the tropics are the heat engine of Earth’s climate. Weather radars are a major observing system for thunderstorms. Apart from being used to predict whether we will get rained on at our barbecue in a few hours, radars are great research tools. However, they produce large volumes of data that are often visualised in only very simple ways.

Aims and Outline:

In this project we will explore advanced 3D visualisations of radar data in the Darwin region. Using 3D volume measurements of rainfall-related quantities, taken every 10 minutes over a 100 km x 100 km area, we will explore visualisation options that will help weather and climate scientists better understand how thunderstorms evolve over their lifetime. This is a collaborative project between the Immersive Analytics group of the Faculty of Information Technology and the Atmospheric Science group of the Monash School of Earth, Atmosphere and Environment.

Pre- and Co-requisite Knowledge:

Knowledge in computer graphics.


Project title: Virtual Reality and Augmented Reality for data visualisation and immersive analytics

Supervisors: Immersive Analytics Lab members https://ialab.it.monash.edu/members/

Bernie Jenny, Tim Dwyer, Kim Marriott, Michael Wybrow, Sarah Goodwin, Barrett Ens, Maxime Cordeil, Tobias Czauderna, Arnaud Prouzeau, Cagatay Goncu

Student cohort:

Both

Aims and Outline:

Become part of the FIT Immersive Analytics Lab, and explore exciting new ways to visualise, interact with, and analyse all types of data with VR and AR! We are looking for enthusiastic students to work on immersive visualisation using the latest technology, such as head-mounted displays with integrated eye trackers (Microsoft HoloLens and others), gesture recognition devices, and large wall displays. We are leaders in building internationally renowned toolkits for creating immersive data visualisations, we work with various industry partners, and we publish at top visualisation and human-computer interaction conferences. The Immersive Analytics Lab has a vibrant, collaborative research culture including fortnightly group meetings, informal and formal seminars, and social events. We have a variety of thesis topics available for you to invent and evaluate new immersive visualisation and interaction techniques for large graphs, geospatial flows, building information models, maps and globes, and many other applications. Contact any of the members to learn more about current opportunities: https://ialab.it.monash.edu/members/

URLs and References:

https://ialab.it.monash.edu

https://ialab.it.monash.edu/members/

Pre- and Co-requisite Knowledge:

Interest in VR, AR and visualisation in 2D and 3D


Project title: Branching flow maps

Supervisor: Bernie Jenny

Student cohort:

Both

Background:

Recent research has enabled the automated creation of origin-destination flow maps whose flow lines do not merge or branch. While experimental algorithms for merging and branching flow maps exist, their visual appearance and their automated selection of merged flows do not yet produce maps that meet the standards of manually produced flow maps.

Aims and Outline:

The goal of this project is to (1) identify design principles applied in manual cartography for creating merging and branching lines on flow maps, (2) review existing algorithms for bundling and clustering origin-destination flows, and (3) propose and implement a method for the automated creation of merging and branching flows.

URLs and References:

http://doi.org/10.1080/17445647.2017.1313788

http://usmigrationflowmapper.com/

https://www.researchgate.net/publication/314281925_Force-directed_Layout_of_Origin-destination_Flow_Maps

https://www.researchgate.net/publication/311588861_Design_principles_for_origin-destination_flow_maps

Pre- and Co-requisite Knowledge:

Interest in flow maps and map design; programming skills.


Project Title: Use eye tracking to improve collaboration in augmented reality

Supervisors: Arnaud Prouzeau, Maxime Cordeil, Barrett Ens, Tim Dwyer

Student cohort:

Both

Background:

With the recent evolution of immersive technologies, augmented reality displays (like the Microsoft HoloLens) can now be used in scenarios that involve remote collaboration. For example, an electrician fixing an installation in a building can be in contact with someone helping them remotely from the company centre, who can provide additional technical information. In such a scenario, if the electrician wears an augmented reality headset, the remote collaborator can access a video stream of what the electrician is seeing, place annotations in the environment to help them, and so on (an example of such collaboration: https://www.youtube.com/watch?v=UpmolMrf5HQ)

In remote collaboration, it is important for collaborators to know what the others are doing and what part of the workspace they are working on; this is called Workspace Awareness (Dourish and Bellotti, 1992). In remote desktop collaboration, such information can easily be visualised using telepointers and minimaps (Gutwin et al., 1996). But a wider range of information can be provided in augmented reality (head position, field of view, gaze position, hand position, ...), and its impact on collaboration is not well known.

Aim and Outline:

The project aims to explore different techniques for providing workspace awareness in remote collaboration in augmented reality. More precisely, it focuses on the use of eye tracking, a technology which allows us to precisely identify where users are looking. Implementation and testing of such techniques will be done on the Microsoft HoloLens for a maintenance scenario.

URLs and References:

Collaboration in augmented reality:

● Thammathip Piumsomboon, Youngho Lee, Gun Lee, and Mark Billinghurst. 2017. CoVAR: a collaborative virtual and augmented reality system for remote collaboration

Use of Eye Tracking for Collaboration:

● Joshua Newn, Eduardo Velloso, Fraser Allison, Yomna Abdelrahman, and Frank Vetere. 2017. Evaluating Real-Time Gaze Representations to Infer Intentions in Competitive Turn-Based Strategy Games

Workspace Awareness:

● Paul Dourish and Victoria Bellotti. 1992. Awareness and coordination in shared workspaces

● Carl Gutwin, Mark Roseman, and Saul Greenberg. 1996. A usability study of awareness widgets in a shared workspace groupware system

● Arnaud Prouzeau, Anastasia Bezerianos and Olivier Chapuis. 2018. Awareness Techniques to Aid Transitions between Personal and Shared Workspaces in Multi-Display Environments

Pre- and Co-requisite Knowledge:

Programming knowledge of Java or C#; knowledge of the Unity programming environment is an asset but not required. Knowledge of Human-Computer Interaction is desired but not essential.


Project Title: Exploiting core-guided optimisation techniques in constraint programming

Supervisors: Graeme Gange and Peter J. Stuckey

Student cohort:

Both

Background:

Discrete optimisation problems appear in all areas of endeavour: for example fairly distributing water, generating an efficient nurse roster, or even solving a sudoku.  Generic technologies for tackling discrete optimisation problems include constraint programming (an AI based approach) and mixed integer programming (an operations research approach).

Constraint programming is the core technology used for many important practical problems such as vehicle routing, rostering and scheduling.

While constraint programming methods are very effective at finding good solutions quickly, they struggle to prove optimality: branch-and-bound approaches suffer from extremely weak reasoning about the objective.

Solvers for maximum satisfiability (MaxSAT) problems have seen recent improvements in performance, driven by 'core-guided' approaches, which optimistically set parts of the objective to their optimal values, and find either an optimal solution, or a reason why some parts of the objective cannot collectively take their optimal values. However, core-guided optimisation methods have some limitations: they only improve bounds in small steps, and are usually all-or-nothing: the only solution they find is the optimum.
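The core-guided loop is easy to see on a toy problem. The sketch below is illustrative only: a brute-force satisfiability check stands in for a real SAT/CP solver, soft constraints are unit literals, and the relaxation simply drops the core (real implementations instead add a cardinality constraint over the core, which is why they can keep searching for better solutions).

```python
from itertools import combinations, product

def solve(n_vars, hard, assume):
    """Brute-force satisfiability check standing in for a real solver:
    `hard` is a list of predicates over the assignment; `assume` is a set
    of variable indices forced to True."""
    for bits in product([False, True], repeat=n_vars):
        if any(not bits[v] for v in assume):
            continue
        if all(c(bits) for c in hard):
            return bits
    return None

def core_guided(n_vars, hard, soft):
    """Optimistically assume every soft literal holds; on failure, find a
    minimal unsatisfiable subset (a 'core') among the assumptions, relax
    it, and raise the objective lower bound by one."""
    active, lb = set(soft), 0
    while True:
        sol = solve(n_vars, hard, active)
        if sol is not None:
            return lb, sol
        core = None
        for k in range(1, len(active) + 1):
            core = next((set(c) for c in combinations(sorted(active), k)
                         if solve(n_vars, hard, set(c)) is None), None)
            if core:
                break
        active -= core   # simplification: drop the core entirely
        lb += 1          # any solution must violate at least one core literal

# Hard constraint: x0 and x1 cannot both be True; soft: prefer all True.
lb, sol = core_guided(3, [lambda b: not (b[0] and b[1])], [0, 1, 2])
```

On this example one core ({x0, x1}) is found, so the lower bound rises to 1 in a single small step, which is exactly the "small steps" behaviour described above.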

Since the two approaches to optimisation, core-guided and branch-and-bound, have complementary strengths, the hope is that some hybrid approach can inherit the strengths, and avoid the weaknesses, of both simultaneously.

Aim and Outline:

The goal of this project is to investigate ways of using core-guided reasoning to improve the performance of constraint programming-based methods for discrete optimisation, while still preserving the ability to obtain good solutions quickly. The student will work on extending state-of-the-art optimisation technology developed at Monash to be even more effective on a wide range of challenging problems.

URLs and References:

See [https://www.monash.edu/it/data-science/optimisation] for more information about our group and research projects.

Pre- and co-requisite knowledge:

Strong programming skills (preferably C++), ideally some knowledge of discrete optimisation


Project Title: Algorithm analysis techniques to improve Integer Programming solvers

Supervisors: Pierre Le Bodic

Background:

Industry problems can often be modelled using Integer Programming (IP) [1], a mathematical abstraction. IP solvers (e.g. IBM CPLEX [2]) provide an optimal solution to any problem described in that mathematical setting, but this process can take a long time. To be efficient, state-of-the-art IP solvers combine multiple solving algorithms. Each algorithm is usually well understood theoretically, but the combination used in solvers is not.

Aim and Outline:

We will use algorithm analysis techniques (as in e.g. [3]) to theoretically investigate how algorithms are combined in state-of-the-art IP solving, and try to come up with better algorithms.
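One of the algorithms combined in every IP solver is branch and bound with an LP-relaxation bound. As a reference point, here is a minimal sketch for a 0/1 knapsack IP; this is a teaching example, far removed from what CPLEX actually combines.

```python
def branch_and_bound_knapsack(values, weights, capacity):
    """Tiny 0/1 knapsack solver: depth-first branch and bound using the
    fractional (LP) relaxation as an upper bound at each node."""
    n = len(values)
    # Branch on items in decreasing value/weight ratio (best-first order).
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(k, cap, val):
        # LP relaxation: greedily fill remaining items, last one fractionally.
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    best = 0
    def dfs(k, cap, val):
        nonlocal best
        if val > best:
            best = val
        # Prune: the relaxation bound cannot beat the incumbent.
        if k == n or bound(k, cap, val) <= best:
            return
        i = order[k]
        if weights[i] <= cap:                      # branch: take item i
            dfs(k + 1, cap - weights[i], val + values[i])
        dfs(k + 1, cap, val)                       # branch: skip item i

    dfs(0, capacity, 0)
    return best
```

The interplay between the bounding rule, the branching order, and the incumbent is precisely the kind of algorithm combination whose theoretical behaviour this project would analyse.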

URLs and References:

[1] https://en.wikipedia.org/wiki/Integer_programming

[2] https://www.ibm.com/software/commerce/optimization/cplex-optimizer/

[3] http://arxiv.org/abs/1511.01818

Pre- and Co-requisite Knowledge:

A taste for algorithm analysis and computational complexity is desirable.


Project Title: Automated Warehouse Optimisation

Supervisors: Daniel Harabor and Pierre Le Bodic

Student cohort:

Both

Background:

Warehouses are becoming increasingly automated and optimised. A great example is Amazon fulfilment centres (see https://www.youtube.com/watch?v=tMpsMt7ETi8 ). Many computer science problems, ranging from pathfinding to scheduling and facility layout, need to be solved to design efficient warehouses and their systems. These individual problems are not all well formalised and solved yet, and contributions in these directions are bound to have a high scientific and societal impact.

Aim and Outline:

The aim of this project is to formalise one of the problems related to warehouse automation, design methods to solve the problem, and run experiments to assess their performance.

URLs and References:

The videos https://www.youtube.com/watch?v=YSrnV0wZywU and https://www.youtube.com/watch?v=lT4X3cuIHK8 provide further background.

Pre- and Co-requisite Knowledge:

Strong general background in CS, both in theory and practice, and interest in pathfinding and/or optimisation.


Project title: Total recall: learning to retrieve all relevant documents

Supervisors: Wray Buntine

Student cohort:

Minor thesis

Background:

A systematic review (https://en.wikipedia.org/wiki/Systematic_review) is a literature review that tries to collect all relevant literature. Because individual experimental studies can be unreliable, in the field of medicine meta-studies are done to review the full depth of knowledge. The first step in doing so is finding all relevant studies, hence the phrase "total recall". Efforts in IR to achieve this usually take as input a short or long search query. In law, a different approach is used, referred to as technology assisted review (https://en.wikipedia.org/wiki/Electronic_discovery#Technology-assisted_review). Here an initial set of relevant documents is given and an active learning process is then used to gather more.

Aim and Outline:

Research question: given a retrieval task and a (small) set of retrieved items whose relevance is known, estimate the probability of total recall and the expected number of documents to be retrieved to achieve total recall.

Our aim is to evaluate the use of well calibrated learning algorithms (whose probabilities or ranking of classification is considered accurate) in supporting the document ranking task for total recall retrieval.

We will gather a set of document collections suitable for the total recall task which have known ground truth, apply the learning algorithms, and investigate the quality of ranking/probability signals in evaluating total recall. This will be an experimental evaluation to understand the statistical nature of the task.
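A toy version of the active-learning loop can illustrate the experimental setup. The sketch below is hypothetical: documents are token sets, relevance is simulated ground truth, and the "model" is plain token overlap rather than a calibrated classifier; it measures how many documents must be screened before total recall is reached.

```python
def cal_screening_order(docs, relevant, seed):
    """Toy continuous-active-learning loop for total recall: score each
    unscreened document by token overlap with the relevant documents seen
    so far, screen the best one, and repeat until the pool is exhausted.
    Returns the number of documents screened when the last relevant
    document was found (the effort needed for total recall)."""
    screened = [seed]
    found = {seed} if seed in relevant else set()
    pool = set(range(len(docs))) - {seed}
    effort = len(screened) if found == set(relevant) else None
    while pool:
        rel_tokens = set().union(*(docs[i] for i in found)) if found else set()
        # Rank by overlap with known-relevant vocabulary; break ties by index.
        best = max(pool, key=lambda i: (len(docs[i] & rel_tokens), -i))
        pool.remove(best)
        screened.append(best)
        if best in relevant:
            found.add(best)
            if found == set(relevant):
                effort = len(screened)
    return effort
```

Estimating this screening effort, and the probability that recall is in fact total, before the pool is exhausted is the research question above.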

URLs and References:

Description of the task and some research is here:

https://pdfs.semanticscholar.org/1262/40dedd75626fd736f0485d06f1f516517e54.pdf

https://trec.nist.gov/pubs/trec24/papers/Overview-TR.pdf

https://www.lawtechnologytoday.org/2015/11/history-technology-assisted-review/

Pre- and Co-requisite Knowledge:

Good Python skills to do text munging and text database manipulation.

Basic understanding of probabilistic models to gain an understanding of what algorithms do.

General practical machine learning knowledge.

Flair for scripting and experimenting.

Ability to read some papers and try out ideas.


Project Title: Time-related interactive visualisations based on unstructured text documents for intelligence analysis

Supervisors: Monika Schwarz, Kim Marriott and Tim Dwyer

Student cohort:

Minor thesis

Background:

In SWARM, a project funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA), we investigate structured analytical techniques to improve human reasoning. Our Monash subsection explores visual support for argument structuring and sense making in particular.

Aim and Outline:

The aim of this study is to apply various text analysis techniques to ‘intelligence-style’ documents developed in the project and create interactive visualisations to provide an aid for initial text structuring and sense making. The study will bring together methods from both the text analysis and the visual analytics field. Specifically, in the text analysis part we want to extract salient features like temporal entities and their context and evaluate their usefulness for our particular goal. Any data structure derived in this manner is still flawed and needs human intervention. In the visualisation part the challenge will therefore be to find the right visual language to facilitate error correction and rearrangement of the results.

While the background of this project is in intelligence analysis it will be helpful to extend the study to different scenarios like political articles and journalism.

URLs and References:

Timeline visualisations: http://timeline.knightlab.com

http://www.cs.ubc.ca/group/infovis/software/TimeLineCurator/

Text visualisations: http://textvis.lnu.se

Swarm Project: https://www.swarmproject.info

Pre- and Co-requisite Knowledge:

Experience in Python and Javascript, a background in data science would be advantageous


Project Title:  Keyphrase identification with word embeddings

Supervisors: Wray Buntine

Student cohort:

Minor thesis

Background:

Word embeddings (https://en.wikipedia.org/wiki/Word_embedding) are real-valued feature vectors intended to summarise the semantics of words. They are built using statistical algorithms usually developed within deep neural networks, but are related to early matrix factorisation methods and latent space methods for documents.

Keyphrase identification is used to identify which single and multiword expressions best summarise what a document is "about". A collection of shared, well-understood keyphrases forms a great concept map into a collection of documents. Traditional methods, for instance [Zhan17], rely on word frequency information and are thus limited in their ability to identify thematically and semantically central words. More recent techniques [Maha18,Zhan18] employ word embeddings to bring in semantic information and thus allow a system to determine semantic relevance or "aboutness". They also allow the identification of implicit keyphrases, those that do not explicitly appear in a document, a significant problem for unsupervised traditional methods. Corpus-independent methods do not require supervision for particular keyphrases and so generalise across different kinds of collections.

Aim and Outline:

Build a simple keyphrase identifier using keyphrase embeddings. We will explore the use of word embeddings to support keyphrase identification. We will process a large document collection to identify keyphrases using standard software and then build custom word embeddings which incorporate keyphrases. Standard downloadable word embeddings are provided for individual words only, not keyphrases. We will also review recent keyphrase identification methods to see which ones are readily adapted to make use of the keyphrase embeddings.
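The core ranking step can be sketched with toy vectors. The snippet below uses hypothetical 2-dimensional embeddings (real systems would use pretrained phrase embeddings, as in Key2Vec) and ranks candidate phrases by cosine similarity to the document centroid.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def embed(words, vecs):
    """Average the word vectors of a phrase (or of a whole document).
    Assumes at least one word is in the vocabulary."""
    chosen = [vecs[w] for w in words if w in vecs]
    dim = len(next(iter(vecs.values())))
    return [sum(v[i] for v in chosen) / len(chosen) for i in range(dim)]

def rank_keyphrases(doc_words, candidates, vecs):
    """Rank candidate phrases by cosine similarity between the phrase
    embedding and the centroid of the whole document."""
    centroid = embed(doc_words, vecs)
    return sorted(candidates,
                  key=lambda p: cosine(embed(p.split(), vecs), centroid),
                  reverse=True)
```

Because the score is computed in embedding space, a phrase can rank highly even if its exact words are rare in the document, which is the route to the implicit keyphrases mentioned above.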

URLs and References:

[Maha18] D. Mahata, et al, “Key2Vec: Automatic Ranked Keyphrase Extraction from Scientific Articles

using Phrase Embeddings”, http://aclweb.org/anthology/N18-2100, in 2018 NAACL-HLT.

[Zhan17] Y. Zhang, et al. "MIKE: Keyphrase extraction by integrating multidimensional information,"

http://www.ntu.edu.sg/home/XLLI/publication/fp0909-zhangA.pdf in 2017 ACM CIKM.

[Zhan18] Y. Zhang et al., "Keyphrase Generation Based on Deep Seq2seq Model," https://ieeexplore.ieee.org/document/8438457, in IEEE Access 2018.

Pre- and Co-requisite Knowledge:

Good Python skills to do text munging, text database manipulation, and big text processing on distributed systems.  Basic understanding of probabilistic models to gain an understanding of what algorithms do.

General practical machine learning knowledge.  Flair for scripting and experimenting, and running the code of others.


Project Title:  Bayesian Time Series Models

Supervisors: Daniel Schmidt

Student Cohort:

Both

Background:

Time series models capture the behaviour of a time-ordered series of data points. There has been a large amount of development of methods for modelling time series, and estimating these models from empirical data. Such models allow for forecasting of the time series as well as understanding the latent structure that generated the process.

Aims and Outline:

This project will involve using the latest Bayesian estimation techniques [1] to estimate models from univariate or multivariate time series. The project is reasonably open ended, and could vary from extending existing Bayesian time series methodology [2] to implementing new estimation methods for models such as autoregressive conditional heteroskedasticity (ARCH) models, and incorporating these into Bayesian time series trend estimators.
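As a warm-up, the simplest Bayesian time-series computation is the conjugate posterior for an AR(1) coefficient with known noise variance. The sketch below assumes a zero-mean Gaussian prior on the coefficient; the shrinkage priors of [1,2] replace this with heavier-tailed global-local priors, but the mechanics of conditioning on the series are the same.

```python
def ar1_posterior(y, sigma2=1.0, prior_var=10.0):
    """Conjugate posterior for phi in y_t = phi*y_{t-1} + e_t with
    e_t ~ N(0, sigma2) and prior phi ~ N(0, prior_var). The posterior is
    Gaussian with precision 1/prior_var + sum(y_{t-1}^2)/sigma2."""
    sxx = sum(a * a for a in y[:-1])                    # sum of y_{t-1}^2
    sxy = sum(a * b for a, b in zip(y[:-1], y[1:]))     # sum of y_{t-1}*y_t
    post_prec = 1.0 / prior_var + sxx / sigma2
    post_mean = (sxy / sigma2) / post_prec
    return post_mean, 1.0 / post_prec
```

With more data, sxx grows, the posterior variance shrinks, and the posterior mean approaches the least-squares estimate; the prior only matters when the series is short or noisy.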

URLS and References:

[1] “Shrink globally, act locally”, N. G. Polson and J. G. Scott, http://faculty.chicagobooth.edu/nicholas.polson/research/papers/Bayes1.pdf

[2] “Estimation of Stationary Autoregressive Models using the Bayesian LASSO”, D.F.Schmidt and E.Makalic, http://dschmidt.org/wp-content/uploads/2016/04/Bayesian-AR-LASSO-Schmidt-Makalic-2013.pdf

Pre-requisite knowledge:

Ability to program (MATLAB/R/Python); linear regression; reasonable to good understanding of Bayesian statistics


Project Title:  Interpretable non-linear modelling using Bayesian additive regression

Supervisors: Daniel Schmidt

Student cohort:

Both

Background:

Linear regression remains an important modelling tool due to the fact that it produces models that are very easy to interpret. The drawback is, of course, that they only model linear relationships. Additive models [1] are a natural extension that relax this assumption by building the model as a sum of independent non-linear functions of the inputs. This retains a large degree of interpretability while increasing flexibility. The usual statistical questions that appear in linear models remain: how to estimate the relationships between predictors and the target, and how to decide if predictor variables are important or not?
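The backfitting idea behind additive models fits in a few lines. The sketch below uses a crude bin smoother over repeated x values (real additive models use splines or other scatterplot smoothers, and a Bayesian version would place priors over each function):

```python
def smooth(x, r):
    """Crude smoother: average the partial residuals within each unique
    value of x. Splines or kernel smoothers would be used in practice."""
    means = {}
    for xi in set(x):
        idx = [i for i, v in enumerate(x) if v == xi]
        means[xi] = sum(r[i] for i in idx) / len(idx)
    return [means[v] for v in x]

def backfit(y, x1, x2, iters=20):
    """Backfitting for y ~ alpha + f1(x1) + f2(x2): repeatedly smooth each
    function against the residual left by the other."""
    n = len(y)
    alpha = sum(y) / n
    f1 = [0.0] * n
    f2 = [0.0] * n
    for _ in range(iters):
        f1 = smooth(x1, [y[i] - alpha - f2[i] for i in range(n)])
        m1 = sum(f1) / n
        f1 = [v - m1 for v in f1]   # centre for identifiability
        f2 = smooth(x2, [y[i] - alpha - f1[i] for i in range(n)])
        m2 = sum(f2) / n
        f2 = [v - m2 for v in f2]
    return alpha, f1, f2
```

Each fitted function can be plotted against its own input, which is the interpretability advantage the project aims to preserve.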

Aims and Outline:

This project will utilise recent developments in Bayesian regression to build new tools for flexible additive regression. The work will explore different smoothing techniques and will build on existing, highly efficient toolbox for Bayesian regression [2,3]. Ideally the student will add a layer of code that allows simple specification of additive models that utilises the statistical advantages of the Bayesian framework. This work has the potential to be picked up and utilised by others.

This project would be most suited to students with good mathematical and programming skills.

URLS and References:

[1] “Generalized Additive Models”, T. Hastie and R. Tibshirani, https://projecteuclid.org/euclid.ss/1177013604

[2] “Bayesian Grouped Horseshoe Regression with Application to Additive Models”, Z. Xu, D.F. Schmidt, E. Makalic, G. Qian, J.L. Hopper, http://dschmidt.org/wp-content/uploads/2016/12/Grouped-Horseshoe-Regression-Xu-et-al-2016.pdf

[3] “High-Dimensional Bayesian Regularised Regression with the BayesReg Package”, E. Makalic and D.F. Schmidt, https://arxiv.org/abs/1611.06649

Pre-requisite knowledge:

Ability to program (MATLAB/R/Python); linear regression; reasonable understanding of Bayesian statistics


Project Title:  Sparsification of Bayesian Regression Models

Supervisors: Daniel Schmidt

Student Cohort:

Both

Background:

The Bayesian paradigm allows for accurate estimation of regression models, even when the number of features is much greater than the number of samples. This is possible through the use of powerful sparsity inducing prior distributions. An interesting property of Bayesian procedures is that while the sparsity inducing priors encourage small coefficients, they cannot actually produce sparse models (i.e., ones in which coefficients are exactly equal to zero). This is a weakness if discovery of which predictors are associated with our target is important.

Aims and Outline:

In this project we will build on the decoupled shrinkage and selection (DSS) procedure [1] that has been applied to “sparsifying” Bayesian regression models. A weakness of this procedure is that there is no clear way to select which sparse model to use; recent work has shown that traditional information criteria approaches can be combined with DSS to produce good sparse models [2]. This project will extend this work, examining modifications to the procedure and exploring its performance.
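The flavour of the approach can be sketched as follows: keep the k largest-magnitude posterior-mean coefficients and choose k with an information-criterion-style score. This is a deliberate simplification of [1,2], where the sparse summary is found by solving a penalised projection of the posterior predictive, but it shows the two-stage "estimate, then sparsify" structure.

```python
def sparsify(beta_bayes, X, y, penalty=2.0):
    """DSS-flavoured summary: zero out all but the k largest-magnitude
    posterior-mean coefficients, choosing k by squared error plus
    `penalty` per retained coefficient (an AIC-like score)."""
    p = len(beta_bayes)
    order = sorted(range(p), key=lambda j: abs(beta_bayes[j]), reverse=True)

    def sse(beta):
        return sum((yi - sum(b * xij for b, xij in zip(beta, xi))) ** 2
                   for xi, yi in zip(X, y))

    best = None
    for k in range(p + 1):
        keep = set(order[:k])
        beta = [beta_bayes[j] if j in keep else 0.0 for j in range(p)]
        score = sse(beta) + penalty * k
        if best is None or score < best[0]:
            best = (score, beta)
    return best[1]
```

In the example below a small noise coefficient is correctly zeroed out while the dominant coefficient survives, which is exactly the selection behaviour the project will study.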

URLS and References:

[1] “Decoupling shrinkage and selection in Bayesian linear models: a posterior summary perspective”, P.R. Hahn and C.M. Carvalho, https://arxiv.org/abs/1408.0464

[2] “Bayesian Sparse Global-Local Shrinkage Regression for Selection of Grouped Variables”, Z.Xu, D.F.Schmidt, E.Makalic, G.Qian and J.L.Hopper, https://arxiv.org/abs/1709.04333

Pre-requisite knowledge:

Ability to program (MATLAB/R/Python); linear regression; reasonable to good understanding of Bayesian statistics


Project Title: Cloud Database Security

Supervisors: Xingliang Yuan, Amin Sakzad, Shi-Feng Sun, Ron Steinfeld, Joseph K. Liu

Student cohort:

Both

Background:

The convenience of outsourcing has led to a massive boom in cloud computing. However, this has been accompanied by a rise in hacking incidents exposing massive amounts of private information. Encrypted databases are a potential solution to this problem: the database is stored on the cloud server in encrypted form, using a secret encryption key known only to the client (database owner), but not to the cloud server. However, existing encrypted database systems either are not secure enough, or suffer from various functionality and efficiency limitations compared to unencrypted databases, which can limit their practicality in various applications.

Aim and Outline:

The goal of this project is to explore, develop and evaluate improvements to a selected functionality and/or efficiency aspect of existing encrypted database systems, with the aim of improving their practicality. Examples include:

* Efficient implementation of an encrypted database using standard distributed computing frameworks like Apache Hive and/or NoSQL systems.

* Ranking search results: current searchable encrypted database schemes do not support ranking functionality at the server. The goal is to investigate the feasibility of adding such functionality, while preserving a good level of privacy against the server.
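For intuition, a toy searchable symmetric index can be built from nothing but an HMAC: the client derives a deterministic token per keyword, so the server can match queries against the index without ever seeing plaintext words. This is a sketch of the basic idea only; schemes like those in the references below add much stronger leakage controls, conjunctive queries, and dynamic updates.

```python
import hmac
import hashlib

def build_index(key, docs):
    """Client-side index build: map HMAC(key, word) tokens to the ids of
    documents containing that word. `docs` maps doc_id -> list of words."""
    index = {}
    for doc_id, words in docs.items():
        for w in set(words):
            token = hmac.new(key, w.encode(), hashlib.sha256).hexdigest()
            index.setdefault(token, set()).add(doc_id)
    return index

def search(key, index, word):
    """Client derives the query token; the server does a blind lookup."""
    token = hmac.new(key, word.encode(), hashlib.sha256).hexdigest()
    return index.get(token, set())
```

Note that even this toy index leaks search and access patterns (which tokens repeat, which documents match), and limiting exactly that kind of leakage is a core concern of the cited schemes.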

URLs and References:

--Cash, D., Jarecki, S., Jutla, C., Krawczyk, H., Rosu, M.-C. and Steiner, M. Highly-scalable searchable symmetric encryption with support for boolean queries, Advances in Cryptology–CRYPTO 2013, Springer, pp. 353–373. Available online at https://eprint.iacr.org/2013/169.pdf

--Shi-Feng Sun, Xingliang Yuan, Joseph K Liu, Ron Steinfeld, Amin Sakzad, Viet Vo, Surya Nepal, "Practical Backward-Secure Searchable Encryption from Symmetric Puncturable Encryption" Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 763-780, 2018.

-- Shangqi Lai, Sikhar Patranabis, Amin Sakzad, Joseph K Liu, Debdeep Mukhopadhyay, Ron Steinfeld, Shi-Feng Sun, Dongxi Liu, Cong Zuo, "Result pattern hiding searchable encryption for conjunctive queries", Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 745-762, 2018.

Pre- and Co-requisite Knowledge:

The student should have (1) Good programming skills and/or (2) Familiarity with the basics of cryptography and distributed computing environments (such as Hadoop, Hive, HBase).

Previously Offered: Yes (previously offered title: Searchable Encrypted Databases) We have updated the title and the content.


Project Title:  Privacy Technologies for Blockchain

Supervisors: Joseph K. Liu, Jiangshan Yu, Veronika Kuchta, Amin Sakzad, Ron Steinfeld, Xingliang Yuan

Student cohort:

Both

Background:

Bitcoin is a payment system invented in 2008. The system is peer-to-peer: users can transact directly, without an intermediary, via a shared public ledger known as the blockchain. Despite its advantages, security is a big concern for such a digital currency.

Aim and Outline:

Although some potential attacks on the Bitcoin network and its use as a payment system, real or theoretical, have been identified by researchers, more vulnerabilities remain to be discovered. The aim of this project is to identify potential security vulnerabilities of Bitcoin and propose corresponding mitigations.

URLs and References:

--S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system, 2008. https://bitcoin.org/bitcoin.pdf

--Marcin Andrychowicz, Stefan Dziembowski, Daniel Malinowski, and Lukasz Mazurek. Secure multiparty computations on Bitcoin. In Security and Privacy (SP), 2014 IEEE Symposium on Security and Privacy, May 2014.

-- W. A. Alberto Torres, R. Steinfeld, A. Sakzad, J.K Liu, V. Kuchta, N. Bhattacharjee, M.H. Au, and J. Cheng, ''Post-quantum one-time linkable ring signature and application to ring confidential transactions in blockchain (lattice ringCT v1.0),'' Australasian Conference on Information Security and Privacy (ACISP 2018), pp. 558-576, 2018.

Pre- and Co-requisite Knowledge:

The student should have (1) Good programming skills and/or (2) Familiarity with the basics of digital currency would be an advantage.


Project Title: Privacy Preserving Machine Learning

Supervisors:  Amin Sakzad, Xingliang Yuan, Mahsa Salehi, Ron Steinfeld, Jiangshan Yu

Student cohort:

Both

Background:

Machine learning and cybersecurity are considered two of the hottest topics in Information Technology and Computer Science today. One of the most important tasks in machine learning is anomaly detection.

Aim and Outline:

This project aims at developing privacy-preserving anomaly detectors. The annual cost of data security breaches to Australian organisations averages $2.5M AUD, and one of the main causes is flagged to be malicious attacks against cloud servers containing sensitive data. While encryption can be used to prevent malicious attacks and address data privacy issues, it slows down the server when mining/processing the encrypted data. An anomaly detection algorithm operating on encrypted data should be conceptually correct and computationally efficient. We aim to work on a new paradigm in data mining, i.e., “Privacy Preserving Anomaly Detection (PPAD)”, that addresses the problem of mining encrypted time-series data efficiently.
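For concreteness, a plaintext baseline helps fix what a PPAD scheme must reproduce under encryption. The sketch below is a simple sliding-window z-score rule (an illustrative baseline, not any specific algorithm from the references): the encrypted-domain detector should raise the same flags without the server seeing the raw values.

```python
def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag each point that lies more than `threshold` standard deviations
    from the mean of the preceding `window` samples. Returns one boolean
    per point from index `window` onwards."""
    flags = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / window
        std = max(var ** 0.5, 1e-9)   # floor avoids division by zero on flat windows
        flags.append(abs(series[t] - mean) / std > threshold)
    return flags
```

Every operation here (sums, squares, comparisons) must be replaced by a ciphertext-friendly counterpart in the PPAD setting, which is where the efficiency challenge lies.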

URLs and References:

-- R. Bost, R. Popa, S. Tu, and S. Goldwasser, “Machine Learning Classification Over Encrypted Data”, In Proceedings of ISOC NDSS, 2015

--J. H. Cheon, M. Kim and M. Kim, “Optimized Search-and-Compute circuits and their application to query evaluation on encrypted Data,” in IEEE Transactions on Information Forensics and Security, vol. 11, no. 1, pp. 188-199, Jan. 2016.

--P. Mohassel and Y. Zhang, “SecureML: A System for Scalable Privacy-Preserving Machine Learning”, In Proceedings of IEEE S&P, 2017

--M. Salehi, C. Leckie, J.C. Bezdek, T. Vaithianathan, and X. Zhang. “Fast memory efficient local outlier detection in data streams,” IEEE Transactions on Knowledge and Data Engineering, vol.28, no. 12 , pp. 3246-3260, 2016.

--X. Yuan, X. Wang, J. Lin, and C. Wang, “Privacy-preserving deep packet inspection in outsourced middleboxes”, In Proceedings of IEEE INFOCOM 2016, pp. 1-9, 2016.

Pre- and Co-requisite Knowledge:

The student should have (1) Good programming skills and/or (2) Familiarity with the basics of either machine learning or cryptography.

Previously Offered: No


Project Title:  Quantum Resistant Cryptographic Protocols

Supervisors:  Ron Steinfeld, Amin Sakzad

Student cohort:

Both

Background:

Cybersecurity is regarded as a high priority for governments and individuals today. With the practical realization of quantum computers just around the corner, classical cryptographic schemes in use today will no longer provide security in the presence of such technology. Therefore, cryptography based on “Post-Quantum” (PQ) techniques (that resist attacks by quantum computers) is a central goal for future cryptosystems and their applications.

Aim and Outline:

Lattice-based cryptography, which is considered the main branch of PQ cryptography, has recently reached the stage of practicality. Several practical basic lattice-based encryption and authentication schemes are now being submitted for standardization, including [Titanium]. This project aims at proposing a scheme which preserves practicality but enjoys much stronger security guarantees compared with the alternative efficient schemes, by using a new hard computational problem called Middle-Product Learning With Errors (MPLWE). Hence, the specific objectives of this project are to:

●     Investigate efficient PQ advanced cryptographic primitives and protocols, in particular  homomorphic commitment schemes and compatible zero-knowledge proofs for relations of interest in applications.

●     Explore the potential applications/implementation of the derived primitives designed in the first objective in the areas like practical e-cash/cryptocurrencies, and e-voting.
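The underlying encryption idea can be illustrated with a toy: a single plain-LWE sample hides one bit behind a noisy inner product. This is plain LWE (not the Middle-Product variant used by Titanium), with parameters far too small to be secure; it only shows why small noise plus a secret vector makes the ciphertext decodable by the key holder but (at real parameter sizes) hard for anyone else.

```python
import random

Q, N = 3329, 16          # toy modulus and dimension, far below real sizes
rng = random.Random(0)   # seeded for reproducibility

def keygen():
    """Secret vector s in Z_q^N."""
    return [rng.randrange(Q) for _ in range(N)]

def encrypt(s, bit):
    """One LWE sample hiding one bit: b = <a, s> + e + bit * Q//2 (mod Q)."""
    a = [rng.randrange(Q) for _ in range(N)]
    e = rng.randrange(-3, 4)                             # small noise
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (Q // 2)) % Q
    return a, b

def decrypt(s, ct):
    """Subtract <a, s> and decode to the nearest multiple of Q//2."""
    a, b = ct
    centred = (b - sum(x * y for x, y in zip(a, s))) % Q
    return 1 if Q // 4 <= centred < 3 * Q // 4 else 0
```

Decryption works because the noise (at most 3 here) is far smaller than the decoding margin Q//4; MPLWE-based schemes such as Titanium follow the same template over polynomial products, with a careful security reduction.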

URLs and References:

-- [Titanium] R. Steinfeld and A. Sakzad and R. Zhao. “Titanium: Proposal for a NIST Post-Quantum Public-Key Encryption and KEM Standard.”, submitted to NIST, 30 Sep. 2017.

Pre- and Co-requisite Knowledge:

The student should have (1) Good programming skills and/or (2) Familiarity with the basics of either machine learning or cryptography.

Previously Offered: No


Project Title: Security of IoT devices

Supervisors: Carsten Rudolph, Amin Sakzad, Jiangshan Yu, Ron Steinfeld, Joseph K. Liu

Student cohort:

Both

Background:

The rapidly increasing number of devices connected to the Internet, especially small devices such as cameras, sensors, and actuators, making up the so-called Internet of Things (IoT), appears to be one of the big trends in computing for the near future. As such devices are increasingly used to collect potentially private data, as well as to control critical infrastructure, the privacy and integrity of the IoT is becoming a highly important concern. Yet the massive scale of the emerging IoT, its highly distributed nature, and the low computational abilities of many IoT devices pose new challenges in attempting to devise practical solutions for IoT security problems.

Aim and Outline:

The goal of this project is to explore, implement and evaluate the practicality of protocols for securing the privacy and/or integrity of large scale, highly distributed IoT networks of low-power devices. Examples of project topics include:

-- Authentication protocols to enforce access control to IoT devices only to authorized users.

-- Encryption protocols to provide privacy for IoT sensor data (e.g. for sending over the Internet to a cloud-based encrypted database).

Practical Implementation/evaluation-oriented projects will likely involve evaluating the secure protocol implementations on sample embedded hardware devices incorporating sensors.
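As an example of the first topic, a minimal challenge-response authentication for a constrained device needs only a pre-shared key and one HMAC over a fresh nonce. This is a sketch of the general pattern, not a specific deployed protocol; real designs also handle key provisioning, replay windows, and transport security.

```python
import hmac
import hashlib
import secrets

def issue_challenge():
    """Server picks a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(16)

def device_respond(shared_key, nonce):
    """The constrained device proves key possession with a single HMAC,
    which is cheap enough for low-power hardware."""
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

def server_verify(shared_key, nonce, response):
    """Recompute the expected response and compare in constant time."""
    expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

An implementation-oriented project would benchmark exactly this kind of primitive on sample embedded hardware to quantify the cost on low-power devices.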

URLs and References:

-- http://spectrum.ieee.org/telecom/security/how-to-build-a-safer-internet-of-things

Pre- and Co-requisite Knowledge:

Depending on the nature of the project topic selected, the student should have either (1) Good programming skills and/or (2) good mathematical skills, and preferably both. Familiarity with the basics of cryptography would be an advantage.

Previously Offered: Yes (previously offered title:  Security for the Internet of Things (IoT) ) We have updated the title and the content.


Project Title: What we do in the Shadows: Integrity Measurement in Cloud-Architectures

Supervisors: Carsten Rudolph

Student cohort:

Both

Background:

As the use of cloud computing and autonomous computing increases, integrity verification of the software stack used in a system becomes a critical issue. In this project, we analyze the behavior of IMA (Integrity Measurement Architecture). Today, IMA is one of the most popular integrity verification frameworks employed in the Linux kernel. For integrity verification, IMA measures all executables and their configuration files and stores these measurements in the Trusted Platform Module (TPM). The TPM can then report these measurements to remote parties. Thus, IMA is the basis for so-called remote attestation.
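The measure-and-extend mechanism at the heart of IMA can be sketched in a few lines: each loaded component is hashed, the hash is appended to a log, and a register is "extended" so that it commits to the entire ordered sequence. This is toy code (a real TPM performs the extend in hardware and signs the quoted value), but it shows why a verifier who replays the log can detect any tampering or reordering.

```python
import hashlib

def extend(pcr, measurement):
    """TPM-style PCR extend: the new value hashes the old value together
    with the measurement, so the register commits to the whole sequence."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(components):
    """Replay an IMA-like measurement log: hash each loaded component and
    fold it into the PCR. Any change or reordering changes the final value."""
    pcr = b"\x00" * 32
    log = []
    for blob in components:
        m = hashlib.sha256(blob).digest()
        log.append(m)
        pcr = extend(pcr, m)
    return pcr, log

def verify(log, reported_pcr):
    """A remote verifier replays the log and checks the quoted PCR value."""
    pcr = b"\x00" * 32
    for m in log:
        pcr = extend(pcr, m)
    return pcr == reported_pcr
```

The multi-tenant shortcomings studied in this project arise when several parties' measurements are folded into shared registers like this, so one tenant's events perturb what another tenant can attest.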

Aim and Outline:

Recently, a Monash led formal analysis has revealed several conceptual shortcomings when IMA is deployed in a multi-tenant scenario. We have developed a sound proposal which addresses these problems. This project will advance this effort to the next stage by creating the first proof of concept implementation to demonstrate the feasibility and evaluate the overall practicality on a target system.

URLs and References:

https://www.usenix.org/legacy/event/sec04/tech/full_papers/sailer/sailer_html/

Pre- and Co-requisite Knowledge:

Solid programming skills are essential and some understanding of operating systems (in particular Linux) is required.
To ease the learning curve you should have an interest in the following topics:

- UNIX system architectures

- Kernel level programming

- C

- Basics of applied cryptography

- Scientific writing


Project Title: Executable Security Standards for the IoT using ACTOR

Supervisors: Carsten Rudolph

Student cohort:

Both

Background:

Computing artifacts are almost everywhere, and one day they will control driver-less motorway traffic via communication among sensors and effectors at the roadside and in vehicles; they will monitor and treat our health via communication between devices installed in the human body and in hospitals. This vision is gradually being fulfilled by the Internet-of-Things (IoT), which comprises large populations of computing devices that support us while we can be largely unaware of them. However, defining basic security requirements for such systems can quickly become a massive task, especially since the typical boundaries between different domains are deliberately removed and artifacts can be part of many domains. The ACTOR model, originally developed for applications in AI, is both a practical and theoretical framework that treats such artifacts as individual agents called Actors. Actors provide encapsulation, an internal state, message-passing communication, indeterminacy, and mobility. Based on the Actor primitives, we will abstract, model, and implement security standards for IoT systems. Using our model we will analyze the current requirements and definitions and refine them using an executable representation.

Aim and Outline:

This project will first develop an informal/semi-formal representation of IoT security architectures and then use the ACTOR model to build an executable formal model. Then, this model will be used to identify and analyse requirements for secure implementations of such scenarios.
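A minimal (single-threaded, purely illustrative) rendering of the Actor primitives, with a hypothetical IoT "gateway" behaviour standing in for a security-relevant component of the model; production actor frameworks add concurrency, supervision, and mobility on top of this skeleton.

```python
from collections import deque

class Actor:
    """Minimal actor: private state, a mailbox, and a behaviour invoked on
    one message at a time. No state is shared between actors; all
    interaction happens via messages."""
    def __init__(self, behaviour, state):
        self.mailbox = deque()
        self.behaviour = behaviour
        self.state = state

    def send(self, msg):
        self.mailbox.append(msg)

    def step(self):
        if self.mailbox:
            self.state = self.behaviour(self.state, self.mailbox.popleft())

# Hypothetical behaviour: a sensor gateway counting readings over a threshold.
def gateway(state, msg):
    return state + 1 if msg > 30 else state

a = Actor(gateway, 0)
for reading in [12, 45, 31, 7]:
    a.send(reading)
while a.mailbox:
    a.step()
```

Because each actor's state is reachable only through its mailbox, security requirements (who may send what to whom) map naturally onto constraints over message channels, which is what the executable model would formalise.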

URLs and References: https://pdfs.semanticscholar.org/7626/93415b205b075639fad6670b16e9f72d14cb.pdf

Pre- and Co-requisite Knowledge:  
All applications are welcome. To ease the learning curve you should have an interest in the following topics:

- Knowledge of distributed computing

- Concurrency frameworks

- Functional programming, strong types

- Scientific writing


Project Title: Multivariate EEG data analytics based on deep learning

Supervisors: Mahsa Salehi

Student cohort:

Both

Background:

Multivariate time series are time series that have more than one time-dependent variable. In time series segmentation, the task is to partition the time series into several pieces based on the task at hand. One very exciting application of time series segmentation is detecting different mental states of humans based on their brain signals. In this project we intend to develop a segmentation algorithm based on deep learning to detect distraction sub-sequences in drivers based on their EEG brain data. This project is a collaborative work with the Emotiv company.

Aim and Outline:

The aim of this project is to develop a deep neural network to predict and segment EEG time series data.
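For calibration, a naive non-learning baseline for the segmentation task can compare window means around each candidate boundary (a sketch only; the project itself targets deep models, which would be evaluated against baselines of roughly this kind):

```python
def segment(series, window=4, threshold=2.0):
    """Mark a boundary at time t where the mean of the next `window`
    samples differs from the mean of the previous `window` samples by
    more than `threshold` (a crude mean-shift change-point rule)."""
    bounds = []
    for t in range(window, len(series) - window + 1):
        left = series[t - window:t]
        right = series[t:t + window]
        if abs(sum(right) / window - sum(left) / window) > threshold:
            bounds.append(t)
    return bounds
```

Real EEG is multivariate and far noisier than this, which is precisely why a learned segmenter is expected to outperform simple statistics.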

URLs and References:

Anthony Bagnall, Jason Lines, Jon Hills and Aaron Bostrom (2015), Time-series classification with COTE: the collective of transformation-based ensembles. IEEE Transactions on Knowledge and Data Engineering, 27(9), 2522-2535.

Fazle Karim, Somshubra Majumdar, Houshang Darabi and Shun Chen (2018), LSTM fully convolutional networks for time series classification. IEEE Access 6:1662-1669.

Youtube link: https://youtu.be/yg9xdsbTNXs

Pre- and Co-requisite Knowledge:

Data Structures and Algorithms (FIT2004), Python programming, Data Science, Machine Learning


Project Title: Multivariate Brain EEG data collection, wrangling and analysis

Supervisors:  Mahsa Salehi, Levin Kuhlmann

Student cohort:

Both

Background:

Multivariate time series are time series that have more than one time-dependent variable. One very exciting application of time series analysis is detecting different mental states of humans based on their brain signals. In this project we intend to collect, wrangle and analyse brain EEG signals using a portable neuro-headset provided by the Emotiv company.

Aim and Outline:

The aim of this project is to collect EEG data using a neuro-headset, wrangle and build a predictive model on the collected EEG data.

URLs and References:

Fazle Karim, Somshubra Majumdar, Houshang Darabi and Shun Chen (2018), LSTM fully convolutional networks for time series classification. IEEE Access 6:1662-1669.

YouTube link: https://youtu.be/yg9xdsbTNXs

Pre- and Co-requisite Knowledge:

Python programming, Data Science, Machine Learning


Project Title:  Subspace clustering in multivariate time series data

Supervisors: Mahsa Salehi, Jeffrey Chan (RMIT)

Student cohort:

Both

Background:

Multivariate time series are time series that have more than one time-dependent variable. Clustering multivariate time series is a very useful tool for discovering interesting patterns in time series and has many applications. For instance, in human activity monitoring, measurements from wearable sensors such as Fitbit can be clustered for the purpose of human activity identification. In this example, it would be very beneficial to find out which subsets of sensors are responsible for each type of activity, for performance improvement purposes, and typically the sensors are not labelled with these activities. Recent time series clustering algorithms consider the full set of dimensions and hence are unable to find clusters that exist only in subspaces.

Aim and Outline:

In this project, we intend to develop an algorithm to detect clusters in subsets of features by taking into account the temporal dependencies between data points in multi-variate time series data.
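As a toy illustration of why full-dimensional methods can miss subspace structure, the sketch below (all data and the gap heuristic are invented for illustration, not the project's intended algorithm) generates a four-feature series in which only feature 0 separates two activity modes, then scores each feature by its largest inter-point gap:

```python
import random

random.seed(0)

# Toy multivariate series: 40 observations over 4 features.
# Only feature 0 carries cluster structure (two activity "modes");
# features 1-3 are pure noise, i.e. unhelpful full dimensions.
data = [[(0.0 if i < 20 else 5.0) + random.gauss(0, 0.3)] +
        [random.gauss(0, 1) for _ in range(3)]
        for i in range(40)]

def largest_gap(values):
    """Size of the biggest gap between consecutive sorted values."""
    s = sorted(values)
    return max(b - a for a, b in zip(s, s[1:]))

# Score each feature by its largest gap: a crude proxy for
# "this dimension separates the data into clusters".
scores = [largest_gap([row[f] for row in data]) for f in range(4)]
best = max(range(4), key=lambda f: scores[f])
print(best)  # feature 0 carries the cluster structure
```

A subspace clustering algorithm generalises this idea to subsets of features, while the project would additionally account for temporal dependencies between data points.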

URLs and References:

Lance Parsons, Ehtesham Haque, and Huan Liu. 2004. Subspace clustering for high dimensional data: a review. Acm Sigkdd Explorations Newsletter 6, 1 (2004), 90–105.

René Vidal. 2011. Subspace clustering. IEEE Signal Processing Magazine 28, 2 (2011), 52–68.

Pre- and Co-requisite Knowledge:

Data Structures and Algorithms (FIT2004), R programming, Data Science, Machine Learning


Project Title:  Democratising Big Data: Public Interactive Displays of Sustainability Data

Supervisors: Sarah Goodwin, Tim Dwyer, Lachlan Andrew, Ariel Liebman, Geoff Webb

Student cohort:

Both

Background:

Our University is committed to becoming Net Zero by 2030. In order to reach this target we need not only to invest in new systems and services, but also to raise awareness of energy consumption among our campus users: staff, students and visitors alike. To do this we would like to create engaging, interactive visual representations to help disseminate information and raise awareness around the campus. We have detailed data on energy use in buildings around the University, particularly in some of the key new buildings.

Aim and Outline:

With an aim to raise people's awareness of energy usage and efforts to improve sustainability of buildings at Monash we would like to explore and create interactive visualisations to allow people to better understand and engage with the data. One possibility is that we set up a public display that can be controlled by passers-by using a Microsoft Kinect interface. Another (complementary) possibility is that we design a mobile or web app that allows people to explore this data on their own device. One HCI (Human Computer Interaction) research goal is to explore how novel interactive visualization and effective UI design can engage casual observers.

URLs and References:

http://intranet.monash.edu.au/bpd/services/environmental-sustainability.html

https://www.monash.edu/net-zero-initiative

Pre- and Co-requisite Knowledge:

This project should appeal to students with an interest in graphics and natural user interface design.


Project Title: Agile Smart Grid Architecture

Supervisors: Associate Professor Vincent Lee and Dr Ariel Liebman

Student cohort:

Both

Background:

Many multisite industrial firms have to respond to the call to reduce CO2 emissions in their business and production processes. Incorporating heterogeneous local renewable energy sources (wind, solar etc.) and energy storage capacity into their electricity distribution grids brings a greater degree of uncertainty, which demands timely reconfiguration of the grid architecture to optimise overall energy consumption.

Aim and Outline:

The project aims to:

Analyse and evaluate (using simulation tool) the various feasible agile smart grid architectures, their communication protocols and control schemes.

URLs and References:

[1] Jason Bloomberg, The Agile Architecture Revolution, 2013, John Wiley and Sons Press, ISBN 978-1-118-41787-4 (ebook)

[2] IEEE Transactions on Smart Grid

Pre- and Co-requisite Knowledge:

Some knowledge of graph-theory-based algorithm development for sensor networks.


Project Title: Where does my electricity go?

Supervisors: Reza Haffari, Lachlan Andrew, Ariel Liebman

Student cohort:

Both

Background:

Have you ever wondered why your electricity bill is high in a particular month? Smart meters have the potential to tell us which devices consume most of our electricity, but we must coax the information out of them. Smart meters report half-hourly energy use to your retailer, but can also distribute finer time-scale data over a wireless LAN. We would like to "disaggregate" this data and determine how much energy is used by individual devices, e.g. air conditioning, fridges, heating, cooking etc. This helps raise awareness of usage patterns, and hence can potentially reduce electricity consumption significantly.

Aim and Outline:

In this project, we will design and develop machine learning techniques suitable for analysing and mining electricity usage data. The ideal model will be able to accommodate other sources of valuable information as well, e.g. time of day, season, and temperature records. In particular, we will explore a powerful statistical model called the Factorial Hidden Markov Model (FHMM), and augment it with additional components to capture domain knowledge. We will make use of publicly available data in this project (the REDD data set from MIT: http://redd.csail.mit.edu).
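Full FHMM inference is well beyond a snippet, but the per-timestep core of disaggregation can be sketched as a brute-force search over appliance on/off combinations. The appliance names and wattages below are invented, and the sketch deliberately ignores the temporal model (state persistence) that an FHMM would add:

```python
from itertools import product

# Hypothetical appliance ratings in watts. An FHMM would learn these
# from data and also model how states persist over time; this toy
# treats each meter reading independently with known ratings.
appliances = {"fridge": 150, "aircon": 2000, "oven": 1800}
names = list(appliances)

def disaggregate(total_watts):
    """Pick the on/off combination whose summed rating best matches
    the observed aggregate reading (per-timestep, no temporal prior)."""
    best_states, best_err = None, float("inf")
    for states in product([0, 1], repeat=len(names)):
        predicted = sum(s * appliances[n] for s, n in zip(states, names))
        err = abs(total_watts - predicted)
        if err < best_err:
            best_states, best_err = states, err
    return {n: bool(s) for n, s in zip(names, best_states)}

print(disaggregate(2150))  # {'fridge': True, 'aircon': True, 'oven': False}
```

The exponential search over combinations is exactly what makes naive factorial-state inference expensive, and why structured approximations are used in practice.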

URLs and References:

http://redd.csail.mit.edu

Pre- and Co-requisite Knowledge:

Basic probability


Project Title: Finding Monash's heating and cooling costs

Supervisors: Geoff Webb, Sarah Goodwin, Tim Dwyer, Ariel Liebman, Lachlan Andrew

Student cohort:

Both

Background:

We have access to potentially finely-grained data on energy use around the University and particularly in some key new buildings. We would like to "disaggregate" this data further, to identify how much energy is used by different systems, specifically HVAC (heating, ventilation and air conditioning), but also lighting or office equipment.

Aim and Outline:

In this project, we will use a combination of manual sleuthing and data mining techniques to determine what component of Monash's electricity consumption is due to heating and cooling. This will combine the above data set with hourly temperature measurements to try to detect the times at which a building's air conditioning or heating turns on or off, and the power consumption while it is on. The resulting data will be useful for raising awareness about which energy-saving strategies are likely to produce substantial savings. This data will ideally also form the input to a data visualisation project to convey this data to the wider campus community.
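A minimal sketch of the detection idea, with invented numbers rather than real Monash data: flag HVAC switching events by thresholding the jump between consecutive half-hourly readings, then estimate the HVAC draw as the mean load while on minus the baseline load:

```python
# Half-hourly building load in kW (invented example: HVAC switches
# on at index 3 and off at index 7). A real analysis would tune the
# threshold and cross-check events against temperature records.
load_kw = [12, 12, 13, 45, 46, 44, 45, 13, 12, 12]
THRESHOLD_KW = 15  # minimum jump treated as an HVAC switch event

events = []
for t in range(1, len(load_kw)):
    jump = load_kw[t] - load_kw[t - 1]
    if abs(jump) >= THRESHOLD_KW:
        events.append((t, "on" if jump > 0 else "off"))
print(events)  # [(3, 'on'), (7, 'off')]

# Estimated HVAC draw: mean load between the on/off events,
# minus the mean baseline load outside that window.
on_start, on_end = events[0][0], events[1][0]
on_mean = sum(load_kw[on_start:on_end]) / (on_end - on_start)
off_load = load_kw[:on_start] + load_kw[on_end:]
base_mean = sum(off_load) / len(off_load)
print(round(on_mean - base_mean, 1))  # estimated HVAC draw in kW
```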

URLs and References:

http://intranet.monash.edu.au/bpd/services/environmental-sustainability.html

Pre- and Co-requisite Knowledge:

Basic probability. Understanding of Fourier transforms would be an advantage.


Project Title: Planning for an uncertain energy future

Supervisors: Aldeida Aleti, Ariel Liebman

Student cohort:

Both

Background:

Electricity grids around the world, and in Australia, are in the midst of a profound transformation. New technologies such as rooftop solar panels, wind farms, and smart meters are challenging current paradigms in system planning and even threatening existing electricity utility business models.

Aim and Outline:

Electricity utilities, system planners, and governments are facing many future trends that are extremely uncertain. For example there is a great deal of uncertainty about electricity demand growth (or decline) compounded by uncertainty in the rate at which renewable technology costs decline. This project aims to develop optimisation techniques to model the impacts of uncertainty in demand growth, technology costs, and electricity generation feed stocks on optimal investment strategies in renewable technologies in an electricity system. The project will take some inspiration from the work done by the CSIRO Future Grid Forum.

URLs and References:

Pre- and Co-requisite Knowledge:

The project will appeal to students with an interest in simulation and modelling who have some programming experience. No prior knowledge of optimisation or energy systems is required.


Project Title: Simulating batteries in smart grid

Supervisors: Vincent Lee, Ariel Liebman, John Betts

Student cohort:

Both

Background:

Electricity grids around the world, and in Australia, are in the midst of a profound transformation. New technologies such as rooftop solar panels, wind farms, and smart meters are challenging current paradigms in system planning and even threatening existing electricity utility business models.

Aim and Outline:

This project aims to model the integration of batteries into the smart grid using cloud-based high performance computing. The model incorporates an industry-standard power system simulation tool called Plexos, configured to find the optimal investment in renewable generation technologies in a complex electricity network. The project will entail incorporating models of a range of new battery technologies to determine whether batteries can significantly reduce the cost of investing in renewable and other low-carbon energy technologies.

URLs and References:

http://www.csiro.au/Organisation-Structure/Flagships/Energy-Flagship/Future-Grid-Forum-brochure.aspx

Pre- and Co-requisite Knowledge:

The project will appeal to a student with an interest in simulation, business decision making and modelling. No prior knowledge of optimisation or energy systems is required.


Project Title: A Cyber Security Requirements Model for the Monash Micro-Grid

Supervisors: Carsten Rudolph and Ariel Liebman

Student cohort:

Both

Background:

Microgrids, in the formal definition of the U.S. Department of Energy, are a group of interconnected loads and distributed energy resources (DERs) within defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect to and disconnect from other, bigger grids, enabling it to operate in both stand-alone and grid-connected modes. During disturbances, the generation and corresponding loads can separate from the distribution system to isolate the microgrid's load from the disturbance without harming the grid's integrity. The ability to operate stand-alone, or in island mode, has the potential to provide higher local reliability than that provided by the power system as a whole.

These microgrids need extensive attention from the computer security community to make sure that cyber threats do not jeopardize requirements such as safety and reliability, both during their design and during their operation. In a broader context, the bigger power networks in which microgrids are embedded need the same attention, to make sure that the decoupling and integration of individual microgrids does not harm other connected grids. Monash is part of a research initiative towards smart microgrids and new energy technologies in collaboration with the Clean Energy Finance Corporation (CEFC).

“Monash University is intent on developing innovative solutions to the challenges in energy and climate change facing our world,” stated Monash University President and Vice-Chancellor Professor Margaret Gardner.

Aim and Outline:

The goal of this project is to improve the understanding of the security requirements of the future Monash electricity network. In order to develop this understanding, you will create a model of the network showing its main components and the processes within it. Then, you will work with microgrid specialists to identify security requirements in terms of processes and data. The first result will be a formal or semi-formal model that provides a precise expression of security requirements at different levels. You will also be able to explore the suitability of approaches such as business process modelling, formal modelling frameworks, or more technical trust-relation models for expressing the security requirements of such an infrastructure. As part of the ongoing research initiative towards smart microgrids, you will gain a unique insight into the cyber security challenges around trust and security in smart grids. Finally, the model will be used to evaluate the impact of possible security solutions.

URLs and References:

Community Energy Networks With Storage http://link.springer.com/10.1007/978-981-287-652-2

S. Gürgens, P. Ochsenschläger, and C. Rudolph. On a formal framework for security properties. International Computer Standards & Interface Journal (CSI), Special issue on formal methods, techniques and tools for secure and reliable applications, 2004. http://sit.sit.fraunhofer.de/smv/publications/download/CSI-2004.pdf

N. Kuntze, C. Rudolph, M. Cupelli, J. Liu, and A. Monti. Trust infrastructures for future energy networks. In Power and Energy Society General Meeting - Power Systems Engineering in Challenging Times, 2010. http://sit.sit.fraunhofer.de/smv/publications/download/PES2010.pdf

Pre- and Co-requisite Knowledge:

The project is suitable for a student with cyber security knowledge and a sound knowledge of computer networks. Knowledge of UML, sequence diagrams and use-case modelling is useful.


Project Title: Data analysis and visualisation for electrification of remote underdeveloped locations

Supervisors: Ariel Liebman, Lachlan Andrew, Sarah Goodwin, Tim Dwyer

Student cohort:

Both

Background:

A developing country has over 80,000 villages, of which about 10,000 are unelectrified. The government has allocated a budget that is insufficient to electrify all of these villages. How would you prioritise the villages and select the right ones to be electrified first?

Aim and Outline:

The aim of this project is to use machine learning clustering techniques to assess a database with 80,000 rows and to develop decision support tools that help decision makers find an optimal (least-cost and fair) set of villages for electrification.
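One simple decision-support baseline (illustrative only: the village names, costs and populations below are invented, and a greedy ranking by itself does not address the fairness dimension) is to select villages by population served per dollar, subject to the budget:

```python
# Hypothetical villages: (name, electrification cost in $k, population).
villages = [
    ("A", 120, 900), ("B", 60, 800), ("C", 200, 1000),
    ("D", 40, 200), ("E", 90, 850),
]
BUDGET = 250  # $k available for electrification

# Greedy: rank by people connected per dollar, take while budget allows.
ranked = sorted(villages, key=lambda v: v[2] / v[1], reverse=True)
chosen, spent = [], 0
for name, cost, pop in ranked:
    if spent + cost <= BUDGET:
        chosen.append(name)
        spent += cost

print(chosen, spent)  # ['B', 'E', 'D'] 190
```

The project would go well beyond this baseline, e.g. clustering villages by their attributes first and adding fairness criteria to the selection.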

Pre- and Co-requisite Knowledge:

Data analysis, machine learning, decision analysis


Project Title: Environmentally friendly mining of cryptocurrencies using renewable energy

Supervisors: Adel Nadjaran Toosi, Jiangshan Yu

Student cohort:

Both

Background:

Blockchain technology and its popular cryptocurrencies, such as Bitcoin and Ethereum, are among the most revolutionary technological advances in recent history, capable of transforming business, government, and social interactions. However, there is a darker side to this technology: the immense energy consumption and potential climate impact of blockchains and cryptocurrencies. There is an urgent need to address this danger to ensure the long-term sustainability of the technology.

Aim and Outline:

This project aims to build a Bitcoin or Ethereum mining algorithm that shapes its computing and mining activities based on the availability of on-site renewable energy in the local infrastructure and on current energy consumption.
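The control idea can be sketched as follows. This is a hedged illustration only: a real implementation would drive actual miner knobs (thread counts, pause/resume), and all figures and function names here are invented:

```python
# Sketch: scale mining effort to the on-site renewable surplus
# (generation minus the rest of the site's load).
def mining_power_budget(solar_kw, site_load_kw, miner_max_kw):
    """kW the miner may draw if it must run on on-site surplus only."""
    surplus = max(0.0, solar_kw - site_load_kw)
    return min(surplus, miner_max_kw)

def throttle_fraction(budget_kw, miner_max_kw):
    """Fraction of full hash rate the power budget supports (0..1)."""
    return budget_kw / miner_max_kw if miner_max_kw > 0 else 0.0

# Example: 8 kW of solar, 5 kW of other site load, a 4 kW miner.
budget = mining_power_budget(solar_kw=8.0, site_load_kw=5.0, miner_max_kw=4.0)
print(budget, throttle_fraction(budget, 4.0))  # 3.0 0.75
```

In a real deployment this loop would run periodically, reading generation and consumption from the local energy management system and adjusting the miner accordingly.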

URLs and References:

https://qz.com/1204842/bitcoin-mining-should-use-renewable-energy-if-we-want-cryptocurrencies-to-be-ethical/

Pre- and Co-requisite Knowledge:

Strong programming skills, preferably in Java and Python.

Basic knowledge of blockchain.

Strong mathematical background.


Project Title: Security of virtualisation on Internet of Things gateways in smart grids

Supervisors:  Carsten Rudolph, Xingliang Yuan

Student cohort:

Both

Background:

Smart grids require communication between various devices connected via different types of field buses, home management systems, energy management systems, automation, SCADA and industrial control systems. This places Internet of Things gateways in a central role, both to provide this connectivity and as a basis for applications. Current architectures create space for core smart grid services as well as third-party applications on these nodes, in the form of containers or virtual machines. Security is a major concern for these applications.

Aim and Outline:

The aim of this project is to use Trusted Computing technology to develop and implement a solution that provides attestation for containers or virtual machines on IoT gateway nodes.

URLs and References:

Pre- and Co-requisite Knowledge:

Good knowledge of containers (such as Docker), good programming skills (in particular around the Linux operating system).


Project Title: Visualising Smart Grid Data in Augmented and Virtual Reality

Supervisors: Sarah Goodwin and Barrett Ens

Student cohort:

Both

Background:

With advances in technology, our energy networks are becoming modernised and smarter, enabling them to monitor and respond dynamically to local changes in energy demand. Creating effective and engaging visualisation methods can help improve several aspects of efficiency, e.g. the control and distribution of electricity in operation, and raise awareness of energy conservation among consumers. One effective approach is to leverage Augmented Reality (AR), Virtual Reality (VR) or Mixed Reality (XR) to enable operators to visualise information in the context of physical infrastructure.

Monash University is committed to being Net Zero by 2030 and is currently building an on-site microgrid.

Aim and Outline:

This project aims to compare and evaluate different methods for visualising the smart grid by running user experiments. The different methods would involve combinations of visualisation technologies such as tabletop displays, AR, XR and 3D fabricated models. The contribution would be a design study within the context of spatio-temporal visualisation, for example answering questions such as “What information is best visualised in AR, on a tabletop, or with 3D fabricated models?”

Participants will make use of our specifically designed Future Control Room facility.

URLs and References:

https://www.monash.edu/net-zero-initiative/microgrid

https://www.monash.edu/net-zero-initiative

https://www.monash.edu/researchinfrastructure/mivp/access/facilities/future-control-room

Pre- and Co-requisite Knowledge:

Basic programming knowledge advantageous. Knowledge of the Unity programming environment is an asset but not required. Knowledge of user-centred design and human-computer interaction desired but not essential.


Project Title: Impact of visualisation to encourage eco-friendly behaviour

Supervisors: Arnaud Prouzeau, Sarah Goodwin, Tim Dwyer

Student cohort:

Both

Background:

With more and more buildings containing intelligent sensors and systems, it is now possible to optimise a building's energy consumption based on several other parameters, such as time of day, outside temperature and room occupancy. For example, on an extremely hot day in a large building with low occupancy, it would be far more efficient to condition only small sections of the building, rather than trying to maintain the temperature across the entire building. Yet, for such a building system to work effectively, we need the building occupants to understand, engage and participate. The system not only needs to communicate optimal solutions to occupants, but also to encourage a change in occupants' group behaviour, which may not only improve their comfort level but can also have a positive impact on energy consumption.

Visualisation has been used extensively to communicate household energy consumption and encourage occupants to control their consumption (Abrahamse et al., 2005; Bonino et al., 2012). However, there has been limited work on more complex buildings such as office buildings or universities. The use of a central dashboard has been explored in a university with positive results (Timm and Deal, 2016). Other work focused on the use of personal visualisations to communicate group behaviour and encourage participation (Promann and Brunswicker, 2018).

Aim and Outline:

This project aims to continue exploring the use of visualisation to encourage eco-friendly behaviour in a university context. One direction would be to explore the impact of different visual representations on a personal device (e.g. a smartphone) in encouraging students to limit their use of study spaces during extreme weather days, or to accept some degradation in comfort. The impact of different visual representations will be evaluated not only on occupant behaviour, but also on occupants' global understanding of the situation and of how their habits can affect energy consumption.

URLs and References:

W. Abrahamse, L. Steg, C. Vlek, and T. Rothengatter. 2005. A review of intervention studies aimed at household energy conservation, Journal of Environmental Psychology, 25(3), Sept. 2005, pp. 273-291

D. Bonino F. Corno and L. D. Russis. 2012. Home energy consumption feedback: A user survey, Energy and Buildings, 47, Apr. 2012, pp 383-393

S. N. Timm and B. M. Deal. 2016. “Effective or ephemeral? The role of energy information dashboards in changing occupant energy behaviors,” Energy Research & Social Science, 19, Sept. 2016, pp 11-20.

M. Promann and S. Brunswicker, 2018, The Effect of Proximal in Social Data Charts on Perceived Unity, Proceedings of IEEE VAST, Berlin, Germany, October 2018.

Pre- and Co-requisite Knowledge:

Programming knowledge of JavaScript.


Project Title:   Novel visualisations of energy networks

Supervisors: Sarah Goodwin, Ariel Liebman

Student cohort:

Both

Background:

Traditionally, energy networks are displayed as large, complex system maps in order to visualise the details of the system. Mapping the geographical components is secondary, and the two views are used for separate analysis tasks (see 1).

Aim and Outline:

We want to explore the use of the current alternatives in more detail and create new designs to incorporate the spatial, temporal and the detailed views. This project involves user observations, visual design components and evaluation studies. Participants will make use of our specifically designed Future Control Room facility (see 2).

URLs and References:

  1. https://www.aemo.com.au/Electricity/National-Electricity-Market-NEM/Planning-and-forecasting/Interactive-maps-and-dashboards/Other-Maps-and-Diagrams
  2. https://www.monash.edu/researchinfrastructure/mivp/access/facilities/future-control-room

Pre- and Co-requisite Knowledge:

At least basic programming knowledge. Some basic knowledge of the visual representation components incorporated in network diagrams and maps. Knowledge of user-centred evaluation methods desired but not essential.