Postgraduate research project

Irresponsible Artificial Intelligence: detection, regulation and mitigation

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

This project explores the development of robust machine learning algorithms to detect, regulate, and mitigate risks associated with the use of Artificial Intelligence systems, such as overconfident predictions, bias, time-varying distributions, and privacy violations.

It emphasizes the importance of quantifying uncertainty in model-based predictions to address these issues.

Making over-confident predictions, amplifying societal biases inherent in training data, and misusing data without respecting privacy and ownership are among the dangers of the widespread deployment of artificial intelligence systems.

Machine Learning, the core technology underpinning AI systems, extracts useful information from large and complex datasets. Its algorithms need to guard against such undesirable outcomes: they should be able to detect inherent bias in data and prevent its amplification during learning, and to detect and mitigate variations and drifts in the populations that form their data sources.

The core technical need is quantifying the uncertainty of predictions made by a learning system. This encompasses techniques to detect systematic variability, including anomalies, within a training dataset; to account for uncertainty arising from parameter estimation; and to quantify how these sources translate into uncertainty in any prediction made by the system.
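As a rough illustration of how parameter-estimation uncertainty can be propagated into predictive uncertainty, the minimal sketch below uses conjugate Bayesian linear regression on toy data; the model, data, and values are illustrative assumptions and not part of the project specification.

```python
# Minimal sketch: propagating parameter uncertainty into predictive uncertainty
# via conjugate Bayesian linear regression (toy data, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=50)

Phi = np.hstack([np.ones((X.shape[0], 1)), X])   # design matrix with bias term
alpha, sigma2 = 1.0, 0.3 ** 2                    # prior precision, noise variance

# Gaussian posterior over weights: N(m, S)
S_inv = alpha * np.eye(Phi.shape[1]) + Phi.T @ Phi / sigma2
S = np.linalg.inv(S_inv)
m = S @ Phi.T @ y / sigma2

# Posterior predictive for a new input x*:
#   mean     = phi(x*)^T m
#   variance = sigma2 (irreducible noise) + phi(x*)^T S phi(x*) (parameter uncertainty)
x_star = np.array([1.0, 0.5])                    # [bias, x*]
pred_mean = x_star @ m
pred_var = sigma2 + x_star @ S @ x_star
print(f"prediction: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```

The predictive variance separates into an irreducible noise term and a term driven by posterior uncertainty in the weights, which shrinks as more training data are observed.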

The project will explore the development of such algorithms, both from a Bayesian statistical inference perspective for making predictions on individual inputs, and at the population level, where the aim is to guarantee a level of performance on a new dataset at deployment.
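As an illustration of the population-level perspective, the sketch below uses split conformal prediction, which under exchangeability gives intervals covering new labels with probability at least 1 - alpha; the stand-in model and data are illustrative assumptions, not the project's intended method.

```python
# Minimal sketch: a distribution-free coverage guarantee on new data via
# split conformal prediction (illustrative assumptions throughout).
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Return a (1 - alpha) prediction interval for x_new."""
    residuals = np.abs(y_cal - model(X_cal))          # calibration scores
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))           # conformal quantile index
    q = np.sort(residuals)[min(k, n) - 1]
    pred = model(np.atleast_2d(x_new))[0]
    return pred - q, pred + q

# Example with a trivial stand-in for a fitted predictor (illustrative only)
rng = np.random.default_rng(1)
X_cal = rng.uniform(-1, 1, size=(100, 1))
y_cal = 2.0 * X_cal[:, 0] + rng.normal(scale=0.3, size=100)
model = lambda X: 2.0 * X[:, 0]
print(split_conformal_interval(model, X_cal, y_cal, np.array([0.5])))
```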

Training to achieve a high level of competence in Bayesian inference and modern neural network architectures will be given in the early stages of the project.