Postgraduate research project

Privacy/Security Risks in Machine/Federated Learning systems

Funding
Fully funded (UK and international)
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

In the wake of growing data privacy concerns and the enactment of the GDPR, Federated Learning (FL) has emerged as a leading privacy-preserving technology in Machine Learning. Despite these advances, FL systems are not immune to privacy breaches, because deep learning models inherently memorise aspects of their training data. Such vulnerabilities expose FL systems to a range of privacy attacks, making the study of privacy in distributed settings increasingly complex and vital.

This project explores the dynamics of attack methods, such as Membership Inference and Property Inference, and defensive techniques, like Differential Privacy and Machine Unlearning, in Federated Learning environments. It also identifies potential synergies across disciplines. The project's outcomes will improve the security, dependability, and trustworthiness of AI applications.
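To make the interplay between attacks and defences concrete, the sketch below shows one of the defensive techniques mentioned above, Differential Privacy, applied at the aggregation step of federated learning: each client update is clipped to a bounded L2 norm and Gaussian noise is added to the average, limiting what an inference attack can learn about any single client. All function names and parameter values here are illustrative, not part of the project specification.

```python
import random

def clip_l2(update, clip_norm):
    # Scale an update so its L2 norm is at most clip_norm
    # (bounds each client's influence on the aggregate).
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_fedavg(client_updates, clip_norm=1.0, noise_std=0.1, seed=0):
    # Average clipped client updates, then add Gaussian noise to the mean
    # so that no individual contribution can be recovered exactly.
    rng = random.Random(seed)
    clipped = [clip_l2(u, clip_norm) for u in client_updates]
    n, dim = len(clipped), len(clipped[0])
    avg = [sum(u[i] for u in clipped) / n for i in range(dim)]
    return [a + rng.gauss(0.0, noise_std / n) for a in avg]
```

The clipping bound and noise scale trade privacy against model accuracy, which is exactly the kind of tension this project would investigate in realistic FL deployments.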

The project will be conducted in collaboration with an interdisciplinary team, including industry experts and academics from the following universities:

  • University of Birmingham
  • Newcastle University
  • University of Cambridge
  • National University of Singapore

Candidates may choose from, but are not limited to, the following research topics: