Postgraduate research project

Uncertainty quantification and reliable feature extraction in deep learning

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

The predictive power of deep learning models offers widespread promise, but they are hard to interpret and their predictions carry no associated measures of uncertainty. To address these shortcomings, this project will develop measures that characterise the reliability of deep learning models, building on prior work on probabilistic model compression.

The predictive power of deep learning models offers promising opportunities in a wide range of application domains. However, it is widely acknowledged that these models, with their millions of parameters, are hard to interpret, and that their predictions do not come with measures of uncertainty. These shortcomings need to be overcome before deep learning models can be safely and widely deployed.


Inference using machine learning is usually a stepping stone, feeding into further analysis and decision making. For instance, after cell boundaries are segmented in biomedical images created using fluorescent molecules, every delineated cell in a tissue is associated with a vector of molecular characteristics; these are then further processed and eventually interpreted by cancer biologists. Since errors in the deep-learning-based segmentation step can propagate and influence downstream interpretive decisions, quantifying the uncertainty of the segment boundaries would allow that interpretation to be made with appropriate caution, as illustrated in the sketch below.
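As an illustration only (not taken from the project itself), the following Python sketch shows one common way such boundary uncertainty could be summarised: given foreground-probability maps from an ensemble of hypothetical segmentation models, the per-pixel predictive entropy flags pixels where the models disagree. All names and shapes are assumptions for the example.

```python
import numpy as np

def predictive_entropy(prob_maps: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Per-pixel predictive entropy from an ensemble of foreground-probability maps.

    prob_maps: array of shape (n_models, H, W) with values in [0, 1].
    Returns an (H, W) map; high entropy flags unreliable boundary pixels.
    """
    p = prob_maps.mean(axis=0)  # ensemble-averaged foreground probability
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

# Example with three hypothetical models disagreeing near a cell boundary
maps = np.stack([np.random.rand(64, 64) for _ in range(3)])
uncertainty = predictive_entropy(maps)
print(uncertainty.shape)  # (64, 64)
```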

The project will build on prior work wherein a Bayesian deep learning approach was used to replace the millions of weight values of a deep network with a few hundred codewords, with no loss in performance. This enables an ensemble of deep networks to be sampled, providing a distribution of features and predictions for each data point. Designing calibrated measures of reliability from these distributions will be the objective of your PhD.
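To give a concrete flavour of the kind of quantity involved, here is a hedged sketch of expected calibration error (ECE), one standard way of checking whether ensemble confidences match observed accuracy. The sampling of networks from the compressed codeword representation is assumed to be given, and all names here are illustrative rather than part of the project's method.

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """probs: (n_samples, n_points, n_classes) class probabilities from an ensemble.
    labels: (n_points,) ground-truth class indices."""
    mean_probs = probs.mean(axis=0)       # average over ensemble members
    confidence = mean_probs.max(axis=1)   # predicted confidence per point
    predictions = mean_probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)

    ece, edges = 0.0, np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap    # weight each bin by its occupancy
    return ece

# Example with hypothetical ensemble output: 5 sampled networks, 100 points, 3 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
labels = rng.integers(0, 3, size=100)
print(f"ECE: {expected_calibration_error(probs, labels):.3f}")
```

A low ECE would indicate that the ensemble's stated confidence can be trusted; designing and calibrating such reliability measures for the compressed-network ensembles is the core of the PhD.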
You will have the opportunity to deepen your understanding of the mathematical underpinnings of the subject. You will also work in a vibrant group environment and be exposed to a broad range of ideas in this area of artificial intelligence.