This workshop introduces natural language as data for deep learning. We discuss various techniques and software packages (e.g., Python strings, RegEx, Word2Vec) that help us convert, clean, and formalise text data “in the wild” for use in a deep learning model. We then explore training and testing a Recurrent Neural Network on the data to complete a real-world task. We'll be using TensorFlow v2 for this purpose.
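The kind of cleaning step described above can be sketched with Python's built-in string methods and the standard `re` module (a minimal illustration with a hypothetical function and patterns, not the workshop's actual code):

```python
import re

def clean_text(raw: str) -> list[str]:
    """Normalise raw 'in the wild' text into a list of word tokens."""
    text = raw.lower()                          # case-fold
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z\s]", " ", text)       # keep only letters and whitespace
    return text.split()                         # whitespace tokenisation

print(clean_text("Check https://example.com NOW!!  It's great."))
# → ['check', 'now', 'it', 's', 'great']
```

A pipeline like this is typically followed by a vectorisation step (for example, Word2Vec embeddings) before the tokens can be fed to a neural network.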
Through a balanced mix of lectures and hands-on coding, by the end of this workshop you should be confident with the basic end-to-end process of applying deep learning techniques to natural language. We will show you how to see text as data, and how to process text depending on your intended solution. You will also be familiar with a variety of neural network architectures suited to natural language and to time series data in general.
Difficulty level: Foundational.
Prerequisites: You should have some experience coding in Python. Additionally, it would be beneficial to have prior understanding of basic deep learning concepts such as backpropagation and gradient descent.
This workshop is an introduction to how deep learning works and how you can create a neural network using TensorFlow v2. We start with the basics of deep learning, including what a neural network is, how information passes through the network, and how the network learns from data through the automated process of gradient descent. You will build, train, and evaluate your very own network using a cloud GPU (Google Colab). We then look at image data and how to train a convolutional neural network to classify images. You will extend your knowledge from the first part to design, train, and evaluate this convolutional neural network.
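The learning loop described above — pass data through the model, measure the error, adjust the weights by gradient descent — can be sketched in a few lines of plain Python (a toy one-weight model for intuition only, not the TensorFlow v2 workflow used in the workshop):

```python
# Toy model: y = w * x. The data follows the true relationship w = 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # initial weight
lr = 0.05  # learning rate

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # gradient descent update

print(round(w, 3))  # → 3.0
```

Frameworks such as TensorFlow automate exactly this: they compute the gradients for every weight in the network (backpropagation) and apply the update step for you.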
This workshop explores how data visualisation techniques can be used to better understand data and to communicate research efforts and outcomes. We cover a broad range of techniques, from simple static 2D graphics to advanced 3D visualisations, to provide a broad overview of the tools available for data analysis, presentation, and storytelling. We also explore, among other things, animated charts and graphs, web visualisation tools such as scrollytellers, and the possibilities of 3D, interactive, and even immersive visualisations. We use concrete, real-world examples along the way to tangibly illustrate how these visualisations can be created and how viewers perceive and interact with them. We also introduce the various tools and skill sets you will need to present your data to the world proficiently.
By the conclusion of this workshop, you will be familiar with the various possibilities for presenting your own research data and outcomes. You will have a more intuitive understanding of the strengths and weaknesses of various modes of data visualisation and storytelling, and a starting point for acquiring the skill sets relevant to developing the visualisations of your choice.
This is an introduction to statistical inference for all branches of academic research. The course covers:
The foundations of statistics.
Useful hypothesis tests.
Hypothesis test p-values, size, power and confidence intervals.
Statistical reasoning in modern research papers.
This course is delivered as a single three-hour, lecture-style session, during which student comments and questions are strongly encouraged. A full set of notes and several accessible journal articles are provided. Real data sets and analyses from the Monash Statistical Consulting Service will be provided as case studies.
The course ends with guidance on learning more about statistics, for example through online courses, books, computer software, and Monash courses.
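As a taste of the hypothesis-testing material, a one-sample t-statistic can be computed with Python's standard `statistics` module (an illustrative sketch with made-up numbers, not material from the course notes):

```python
import math
import statistics

# Hypothetical sample; null hypothesis H0: population mean = 5.0
sample = [5.2, 4.8, 5.5, 5.1, 4.9, 5.3, 5.0, 5.4]
mu0 = 5.0

n = len(sample)
standard_error = statistics.stdev(sample) / math.sqrt(n)
t = (statistics.mean(sample) - mu0) / standard_error

print(round(t, 2))  # → 1.73; compare against a t critical value with n - 1 df
```

The p-value is then the probability, under H0, of a t-statistic at least this extreme; in practice statistical software looks it up from the t-distribution with n - 1 degrees of freedom.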
Recommended or required prerequisites: No prior experience in statistics is needed.
At the end of this course, you should:
Understand sampling variance;
Be able to articulate your empirical research in the language of formal hypotheses and statistical inferences;
Appreciate the value of coherent data summaries, charts and tables;
Know what further resources are available to you in order to augment your knowledge about statistics.
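The first outcome above, sampling variance — the fact that a statistic computed from a random sample varies from sample to sample — can be illustrated with a short simulation (a hedged toy example using Python's standard library, not part of the course notes):

```python
import random
import statistics

random.seed(0)
# A hypothetical population with mean 10 and standard deviation 2.
population = [random.gauss(10, 2) for _ in range(10_000)]

# Draw many samples of size 25 and record each sample mean.
means = [statistics.mean(random.sample(population, 25)) for _ in range(1_000)]

# The sample means cluster around the population mean, but vary:
print(round(statistics.mean(means), 2))   # close to 10
print(round(statistics.stdev(means), 2))  # roughly 2 / sqrt(25) = 0.4
```

That spread of the sample means is exactly what the standard error measures, and it is what hypothesis tests and confidence intervals are built on.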
Research Methods & Analysis: This is an introduction to the best computer software that you can use to manage and analyse data.
We will cover the advantages and disadvantages of different interfaces, data visualisation options, and programming languages. Specifically, we will discuss Excel, R, SPSS, Stata, and GraphPad Prism.
Tips on accessing and installing these programs will be provided (for example, via open-source downloads, Monash’s MoVE system, and free “cloud” services).
A list of relevant training courses available to Monash students will also be provided. Real data sets from the Monash Statistical Consulting Service will be used as case studies.
Using R and SPSS as examples, we will explore essential elements of working with data, including data set-up, analysis functions and basic coding. In addition, we will discuss general topics about data entry, data organization, and transferring data between software environments.
There are no requirements or prerequisites.
At the end of this course, you will be able to:
Select the right software for your research and data;
Save time in your research by avoiding unnecessarily complex options, practices and habits;
Access further help and training on statistical software.
Excellence in Research and Teaching - Research Methods & Analysis
This is an introduction to the basic statistical concepts and computer skills that are necessary for analysing data. The focus is on how to make your empirical research useful and credible. Real data sets from the Monash Statistical Consulting Service will be used as case studies.
There are three computer-lab-style sessions, each of three hours' duration. During each session, everyone will be required to analyse data on their own computer.