About the project
In the wake of growing data privacy concerns and the enactment of the GDPR, Federated Learning (FL) has emerged as a leading privacy-preserving technology in Machine Learning.
Despite these advances, FL systems are not immune to privacy breaches, owing to the inherent memorisation capabilities of deep learning models. Such vulnerabilities expose FL systems to a range of privacy attacks, making the study of privacy in distributed settings increasingly complex and vital.
This project will investigate the interplay between attack methodologies (e.g., Membership Inference, Property Inference) and defensive mechanisms (e.g., Differential Privacy, Machine Unlearning) within FL environments, highlighting potential cross-disciplinary synergies. The outcomes are expected to enhance the security, dependability and trustworthiness of AI applications.
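To give a flavour of the first attack family mentioned above, the sketch below shows a simple loss-threshold membership inference attack against an overfitted classifier, in the style of Yeom et al. (2018). The dataset, victim model and threshold choice are illustrative assumptions only, not part of the project itself.

```python
# A minimal sketch of loss-threshold membership inference. Everything here
# (synthetic data, victim model, threshold rule) is an assumption chosen
# for demonstration, not a prescribed method of this project.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data: the first half trains the "victim" model (members),
# the second half is held out (non-members).
X, y = make_classification(n_samples=400, n_features=200,
                           n_informative=20, random_state=0)
X_mem, y_mem, X_non, y_non = X[:200], y[:200], X[200:], y[200:]

# Deliberately over-parameterised victim, so it memorises its training set.
victim = LogisticRegression(max_iter=5000).fit(X_mem, y_mem)

def per_sample_loss(model, X, y):
    """Cross-entropy of the model's predicted probability for the true label."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

loss_mem = per_sample_loss(victim, X_mem, y_mem)
loss_non = per_sample_loss(victim, X_non, y_non)

# Attack rule: low loss => "member". A real attacker would calibrate the
# threshold with shadow models; the midpoint below is purely illustrative.
tau = (loss_mem.mean() + loss_non.mean()) / 2
guesses = np.concatenate([loss_mem, loss_non]) < tau
truth = np.concatenate([np.ones(200), np.zeros(200)])
print(f"membership inference accuracy: {(guesses == truth).mean():.2f}")
```

The attack succeeds to the extent that the victim's loss on training members is systematically lower than on unseen data, which is precisely the memorisation effect noted above; defences such as Differential Privacy aim to shrink that gap.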
The project will be conducted in collaboration with an interdisciplinary team, including academics from the University of Birmingham, Newcastle University, University of Cambridge, National University of Singapore, and industry experts.
Candidates may choose from, but are not limited to, the following research topics:
- Machine Unlearning for AI applications based on tabular data
- Machine Unlearning for Federated Learning systems
- Privacy attacks on Machine Learning and Federated Learning systems (for candidates interested in conducting attacks)
- Federated Learning for Smart Home applications
- Adversarial attacks on Large Language Models
Prospective candidates are invited to apply promptly, as selections will be made on a rolling basis.