Postgraduate research project

Formal verification of AI interfaces

Funding
Fully funded (UK and international)
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

The field of computing is facing a conundrum caused by a clash in two opposing trends: on the one hand, the growth and proliferation of machine learning (ML) in software, and on the other hand, ever-growing concerns that, with ML models being a black-box technology, the safety, security and explainability of software that uses ML diminish. 

To address these concerns, we need tools and languages that can serve as safe interfaces to ML components. Such safe ML interfaces will make it possible to specify the desired properties of ML models, train ML models to satisfy those properties, and verify that the properties do in fact hold in the final artifact. For example, one language that supports the safe-ML-interface approach is the Haskell DSL Vehicle [1], and one iconic application of safe ML interfaces is the verification of autonomous car controllers [1].

At the moment, we use the Coq proof assistant and the recent MathComp-Analysis library to study the formalization of Differentiable Logics, which make it possible to specify certain safety properties of ML models and then compile those properties into loss functions for training. We are looking for a PhD applicant with a keen interest in mathematics, logic and/or Coq programming to join this team and extend the initial study of [2] to a richer language such as [1].
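To give a flavour of the idea, here is a minimal toy sketch (in Python, not the Coq formalisation or Vehicle itself) of how a differentiable logic can compile a safety property into a loss function: an atomic constraint such as "a ≤ b" becomes a non-negative penalty that is zero exactly when the constraint holds, and conjunctions combine penalties, so the resulting loss can be minimised during training. The function names and the specific robustness property below are illustrative assumptions, not taken from [1] or [2].

```python
# Toy sketch of a differentiable-logic translation (illustrative only;
# names and the example property are assumptions, not from Vehicle or Coq).

def loss_leq(a: float, b: float) -> float:
    """Loss for the atomic constraint a <= b: zero iff the constraint holds."""
    return max(0.0, a - b)

def loss_and(l1: float, l2: float) -> float:
    """Conjunction: both constraints must hold, so penalties add up."""
    return l1 + l2

def robustness_loss(f, x: float, eps: float, delta: float) -> float:
    """Loss for a toy robustness property:
    for inputs perturbed by up to delta, |f(x') - f(x)| <= eps.
    Here checked only at the two extreme perturbations, as a crude
    under-approximation of the quantifier."""
    base = f(x)
    return loss_and(
        loss_leq(abs(f(x + delta) - base), eps),
        loss_leq(abs(f(x - delta) - base), eps),
    )

# A gently sloped model satisfies the property; a steep one violates it.
smooth = lambda x: 0.5 * x
steep = lambda x: 10.0 * x

print(robustness_loss(smooth, 1.0, eps=0.2, delta=0.1))  # 0.0: property holds
print(robustness_loss(steep, 1.0, eps=0.2, delta=0.1))   # positive: violated
```

A real differentiable logic replaces `max` with smooth approximations so the loss has useful gradients everywhere, and the Coq formalisation in [2] studies the logical properties (such as soundness) of exactly these kinds of translations.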

This PhD funding carries no restrictions on nationality, but a successful candidate is expected to have a competitive CV.

Supervisory team: 

 

[1] Matthew L. Daggitt, Wen Kokke, Robert Atkey, Natalia Slusarz, Luca Arnaboldi, Ekaterina Komendantskaya. Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs. CoRR abs/2401.06379, 2024.

[2] R. Affeldt, A. Bruni, E. Komendantskaya, N. Slusarz, and K. Stark. Taming Differentiable Logics with Coq Formalisation. In Interactive Theorem Proving (ITP), 2024.