I was recently asked to present on the topic of deep learning in the real world at Manchester Metropolitan University for their series 'Professional Development'. I was more than happy to present at the University where I did my undergraduate and postgraduate studies.
My approach for the lecture, which needed to last around 50 minutes including Q&A, was to present a top-level overview of deep learning, why students should care and how they can get involved (career or hobby).
I first needed to define the scope and context. I decided that instead of focusing on the work we do at the Centre for Military Health Research, which may be unfamiliar to new entrants to the field, I wanted to use examples from across the industry. While I was aggregating these examples it dawned on me how far we have actually come in the last five years - from smart cars (Tesla's Autopilot) to health (DeepMind) and even dating (Tinder with AI), we really have come a long way.
The intention of this post is to share the content of the lecture (some examples of deep learning), but also to provide links and sources - more than could ever be covered with a cohort of second-year undergraduate students in 50 minutes.
What is Deep Learning?
Before you try to understand deep learning, it is very important to understand the field of machine learning, of which deep learning is a subset. The ultimate goal of deep learning is to aid in the development of artificial intelligence (think Person of Interest but without the Hollywood drama).
Deep learning is about learning multiple levels of representation and abstraction that help to make sense of data (typically very large datasets). Some examples of data: images, audio and text. The crux of deep learning is understanding neural networks, which are loosely inspired by the structure and function of the human brain.
However, with this top-level discussion of what deep learning is, I think it is very important to stress that you do not need to understand its inner workings (the theory) to utilise it: we have frameworks which enable you to interact with deep learning algorithms with relative ease. You only need to master the basic concepts, not, for example, the intricacies of max pooling.
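To make "multiple levels of representation" a little more concrete, here is a toy forward pass through a two-layer network in plain Python. The weights below are made up purely for illustration - in practice a framework such as TensorFlow or PyTorch learns them from data and hides these mechanics from you:

```python
import math

# Each layer transforms its input into a new representation;
# stacking layers is what makes the learning "deep".
def layer(inputs, weights, biases):
    # One dense layer: weighted sum per neuron, then a tanh non-linearity.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0, 0.25]  # raw input features (made-up values)

# First layer: 3 inputs -> 2 hidden units (a new representation of x).
h = layer(x, [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]], [0.0, 0.1])

# Second layer: 2 hidden units -> 1 output.
y = layer(h, [[1.0, -1.0]], [0.0])

print(h, y)
```

Nothing here is specific to any framework; the point is simply that each layer's output becomes the next layer's input.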
Limitations/Criticisms of Deep Learning
Before we start looking at some of the great work deep learning does in the wild, it is important to list some of its limitations and criticisms. Interestingly, these types of issues are starting to become more common, and researchers are actively trying to address them. A great paper was presented at the Conference on Computer Vision and Pattern Recognition (CVPR) by Anh Nguyen of the University of Wyoming, entitled "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images".
I do not want to dwell on these for long, so I will list some of the other limitations and criticisms below:
- Requirement for expensive hardware, such as GPUs
- Does not capture the biological mechanisms of the brain
- Many hyperparameters to fine-tune - a major issue with current implementations
- Black box - it can be very difficult to understand how the model is 'thinking' and what mapping has been undertaken throughout the layers
- Easily fooled by adversarial examples
- Requires large amounts of data, and models can easily overfit
- Computational expense of training - a single model can take weeks or months to train
These are just some of the limitations and criticisms, and some of them can be overcome relatively easily.
Deep Learning in the Wild
Here, I will provide a couple of examples of how deep learning is being used in the wild.
Autoencoding Blade Runner
A really fun example is that by Terence Broad, who has blogged extensively on using deep learning to autoencode movies. A great example (see below) is the autoencoded Blade Runner trailer.
You can read about this process on his blog here. It is great to see how well the models performed and the outcome Terence was able to achieve. Further, the methods employed to autoencode Blade Runner can be applied to other domains.
Terence has published his source code here.
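As a rough illustration of the idea behind autoencoding (my own toy sketch, not Terence's method), the snippet below trains a tiny linear autoencoder that compresses 2-D points down to a single number and reconstructs them. Real work on video frames uses deep convolutional networks, but the compress-then-reconstruct principle is the same:

```python
import random

random.seed(0)
# Synthetic data: points near the line y = 2x, so one latent dimension suffices.
data = [(x, 2.0 * x) for x in [i / 10 for i in range(-10, 11)]]

# Parameters: encoder weights (we) and decoder weights (wd), randomly initialised.
we = [random.uniform(-0.1, 0.1) for _ in range(2)]
wd = [random.uniform(-0.1, 0.1) for _ in range(2)]
lr = 0.05

def reconstruction_error(points):
    err = 0.0
    for x1, x2 in points:
        z = we[0] * x1 + we[1] * x2      # encode: 2-D point -> single number
        r1, r2 = wd[0] * z, wd[1] * z    # decode: number -> 2-D reconstruction
        err += (r1 - x1) ** 2 + (r2 - x2) ** 2
    return err / len(points)

before = reconstruction_error(data)
for _ in range(200):                     # plain gradient descent on squared error
    for x1, x2 in data:
        z = we[0] * x1 + we[1] * x2
        r1, r2 = wd[0] * z, wd[1] * z
        d1, d2 = 2 * (r1 - x1), 2 * (r2 - x2)   # dLoss/dReconstruction
        gwd = [d1 * z, d2 * z]                   # gradient through decoder
        gz = d1 * wd[0] + d2 * wd[1]
        gwe = [gz * x1, gz * x2]                 # gradient through encoder
        wd = [w - lr * g for w, g in zip(wd, gwd)]
        we = [w - lr * g for w, g in zip(we, gwe)]
after = reconstruction_error(data)

print(before, after)  # reconstruction error shrinks as the autoencoder learns
```

The network is forced through a bottleneck (one number per point) and must learn whatever structure lets it rebuild the input - exactly the constraint that produces the dream-like reconstructions in the Blade Runner experiment.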
Self Driving Cars
Tesla, Google and Uber are currently battling for dominance in the self-driving car market. Deep learning is utilised extensively to enable cars to be autonomous, for tasks such as lane detection, pedestrian detection, sign detection, traffic light detection and blind-spot monitoring. Deep learning is also being used to predict the behaviour of other road users and act accordingly.
An example of deep learning in action is below:
NVIDIA has published a detailed article on how it utilises deep learning, and you can read it here.
MIT has published a source on deep learning and self driving cars. You can find it here.
Udacity is working on an open source self driving car. You can find information about it here.
DeepMind and Space Invaders
We all love the classic game Space Invaders, and so does the team at DeepMind (a subsidiary of Google). They developed a deep reinforcement learning method called the Deep Q-Network (DQN), which is able to master a diverse range of Atari 2600 games to a superhuman level with only the raw pixels and game score as input. The objective of the work was to get deep learning to perform at the same level as, or better than, a human performing the same task.
DQN performed at a level comparable to a professional human games tester across 49 games on the Atari platform. Truly amazing. You can see it in action below:
You can read the paper in Nature here.
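The core idea behind DQN is the classic Q-learning update; DeepMind's contribution was to replace the lookup table with a deep network over raw pixels. As a hedged illustration, here is tabular Q-learning on a made-up four-state corridor (a toy problem, nothing to do with Atari itself):

```python
import random

random.seed(1)

# Tiny corridor: states 0..3, reward 1 for reaching state 3 (the goal).
N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]                        # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)] # Q[state][action], all zeros to start
alpha, gamma, eps = 0.5, 0.9, 0.1         # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly act on current Q values, sometimes explore.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        target = r + (0.0 if s2 == GOAL else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print([[round(v, 2) for v in q] for q in Q])  # "right" beats "left" everywhere
```

DQN applies exactly this target (reward plus discounted best future value), but estimates Q with a convolutional network whose input is the game screen.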
DeepMind and Healthcare
One of the biggest areas in which deep learning can have an impact is the field of healthcare. DeepMind is actively focused on utilising healthcare records data to improve hospital efficiency, mental and physical health diagnoses and treatment. DeepMind utilises Electronic Healthcare Records (an area in which I have worked in the past) as the basis for its modelling.
DeepMind Streams has been developed to provide clinicians with specific information about the patient they are seeing in real time, without the need to consult notes and other systems. Streams aggregates a wide array of data sources and brings them all together into one easy-to-use mobile phone application. Streams already utilises deep learning to help identify specific conditions and flag them to the clinician for investigation.
Deep Learning at the Centre for Military Health Research (King’s College London)
At the Centre for Military Health Research we are very interested in using deep learning to help us improve the physical and mental health of the UK Armed Forces (serving and veterans).
We are actively deploying it in several research projects. For example, the InDEx app is a highly personalised alcohol intervention for veterans that uses cloud-based deep learning to personalise SMS text messages, and a local embedded model (a classic Support Vector Machine) to personalise the app itself, which enables us to handle offline states. These features are very experimental and not yet in the production version of the app. In another study, we have used deep learning to help identify those likely to develop Post-Traumatic Stress Disorder based on service exposures, automatically and without the need for human intervention.
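To illustrate the kind of embedded classifier mentioned above (a toy sketch with synthetic data, not our actual InDEx model), here is a minimal linear SVM trained by sub-gradient descent on the hinge loss:

```python
import random

random.seed(2)

# Two synthetic "feature" clusters, one per class, separated by a wide margin.
# These numbers are invented purely for illustration.
def make_point(label):
    cx = 2.0 if label == 1 else -2.0
    return ([cx + random.gauss(0, 0.5), random.gauss(0, 0.5)], label)

train = [make_point(1) for _ in range(50)] + [make_point(-1) for _ in range(50)]

w, b = [0.0, 0.0], 0.0
lam, lr = 0.01, 0.1          # L2 regularisation strength, learning rate

for epoch in range(50):
    random.shuffle(train)
    for x, y in train:
        margin = y * (w[0] * x[0] + w[1] * x[1] + b)
        if margin < 1:       # inside the margin: push the boundary away
            w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
            b += lr * y
        else:                # safely classified: only apply regularisation
            w = [wi - lr * lam * wi for wi in w]

correct = sum(1 for x, y in train
              if (w[0] * x[0] + w[1] * x[1] + b) * y > 0)
print(correct, "/", len(train))
```

A model this small runs happily on a phone with no network connection, which is the whole appeal of pairing a local SVM with a cloud-based deep learning component.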
I will be posting updates on these projects and releasing all source code (sadly, no data) in due course.
Road to using Deep Learning
Getting started in deep learning is actually easier than you might think, with the vast majority of code released as open source (even by DeepMind and Facebook). I would recommend the following steps:
- Learn the basics of how deep learning operates and functions
- Get some data - many big datasets are available for download
- Get the source code and start to play around.
A lot of deep learning is about experimentation, trial and error.
"You do NOT have to learn everything about a technology before participating. Learning by doing & building is important."
Deep learning is truly transforming the way we process, store and manage data; it is making us safer, healthier and happier.
I wonder if lecture is the correct term to use here. ↩︎