Postgraduate research project

Privacy attacks and defenses in Federated Learning systems

Funding
Fully funded (UK and international)
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

In the wake of growing data privacy concerns and the enactment of the GDPR, Federated Learning (FL) has emerged as a leading privacy-preserving technology in Machine Learning. 

Despite its advancements, FL systems are not immune to privacy breaches due to the inherent memorisation capabilities of deep learning models. Such vulnerabilities expose FL systems to various privacy attacks, making the study of privacy in distributed settings increasingly complex and vital. 
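The memorisation risk described above underpins the classic loss-threshold membership inference attack: examples the model has memorised tend to have unusually low loss, so an attacker can guess training-set membership from per-example losses alone. A minimal sketch (the loss values and threshold below are hypothetical; in practice the threshold would be calibrated, e.g. via shadow models):

```python
import numpy as np

def membership_inference_by_loss(per_example_losses, threshold):
    """Loss-threshold membership inference: examples with loss below
    the threshold are guessed to be training-set members, exploiting
    the model's tendency to memorise its training data.
    Illustrative sketch only; `threshold` is an assumed, pre-calibrated value."""
    return np.asarray(per_example_losses) < threshold

# Hypothetical losses: members tend to score lower than non-members.
member_losses = [0.05, 0.10, 0.08]
nonmember_losses = [0.90, 1.20, 0.75]
guesses = membership_inference_by_loss(member_losses + nonmember_losses,
                                       threshold=0.5)
```

Here the attack correctly flags the three low-loss examples as members, illustrating why even a model that never shares raw data can leak information about individual training records.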

This project will investigate the interplay between attack methodologies (e.g., Membership Inference, Property Inference) and defensive mechanisms (e.g., Differential Privacy, Machine Unlearning) within FL environments, highlighting potential cross-disciplinary synergies. The outcomes will enhance the security, dependability and trustworthiness of AI applications.
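To make the defensive side concrete, a common differential-privacy-style mitigation in FL is to clip each client's model update and add calibrated Gaussian noise to the aggregate before the server applies it. A minimal sketch (the function and parameter names below are illustrative assumptions, not a specific library's API, and the noise scale is not a formally accounted privacy budget):

```python
import numpy as np

def federated_average(client_updates):
    """Plain FedAvg-style aggregation: element-wise mean of client updates."""
    return np.mean(client_updates, axis=0)

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=0.5, rng=None):
    """DP-style aggregation sketch: clip each client's update to `clip_norm`,
    average, then add Gaussian noise scaled to the clipping bound.
    Illustrative only; real deployments use a privacy accountant."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / norm))
    aggregate = np.mean(clipped, axis=0)
    # Noise standard deviation is tied to the per-client sensitivity.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return aggregate + rng.normal(0.0, sigma, size=aggregate.shape)
```

Clipping bounds any single client's influence on the aggregate, and the added noise masks what remains, which is precisely what blunts inference attacks at some cost in model accuracy.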

The project will be conducted in collaboration with an interdisciplinary team, including academics from the University of Birmingham, Newcastle University, University of Cambridge, National University of Singapore, and industry experts.

Candidates may choose from, but are not limited to, the following research topics:

Prospective candidates are invited to apply promptly as selections will be made on a rolling basis.