Postgraduate research project

GATE: Generative AI Trust Evaluation - Assessing the influence of large language models on human behaviour and decision-making

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

The project aims to improve AI literacy, help people distinguish AI-generated from human-authored content, and propose guidelines for the responsible use of LLMs.

In this project, you will:

  • use mixed methods, including participatory workshops and controlled experiments
  • investigate how advice generated by Large Language Models (LLMs) affects human behaviour, trust, and reliance on generative technology, focusing on both low-risk (e.g., recipe or general consumer advice) and high-risk (e.g., legal, medical, financial) domains
  • work to better understand the implications of LLMs for society and propose guidelines for their responsible use
  • identify predictors of LLM-generated content and develop tools that improve the general public's ability to distinguish it from human-authored content, thereby improving public AI literacy (a minimal sketch of such a detector follows this list)
  • work alongside researchers in computer science, applying a mixed-methods approach
  • begin by exploring the problem domain and its context using qualitative methods such as participatory workshops and focus groups
  • finally, once your research questions are defined, shift towards a stronger reliance on quantitative methods, e.g., controlled experiments, to evaluate the impact of LLM-generated content.
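
To make the detection goal concrete, the sketch below shows one possible baseline for separating LLM-generated from human-authored text: a bag-of-n-grams classifier. The texts, labels, and feature choices are illustrative assumptions only, not part of the project description; the project itself would define its own predictors, data collection, and evaluation.

```python
# Minimal sketch of a baseline LLM-vs-human text classifier.
# The texts and labels below are placeholder examples; a real study
# would use a properly collected, labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical training data: 1 = LLM-generated, 0 = human-authored.
texts = [
    "Certainly! Here is a balanced overview of the main considerations...",
    "Honestly, I just threw whatever was in the fridge into the pan.",
    "As an overview, there are several key factors to take into account.",
    "My nan swears by soaking the beans overnight, so that's what I do.",
    "In summary, it is advisable to consult a qualified professional.",
    "Couldn't sleep, so I ended up reading the contract myself at 2am.",
]
labels = [1, 0, 1, 0, 1, 0]

# Word n-gram TF-IDF features feed a simple linear classifier; such
# surface features are one possible set of 'predictors' of LLM text.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)

# Cross-validated accuracy on the toy data (meaningless at this size,
# shown only to illustrate the evaluation loop).
scores = cross_val_score(model, texts, labels, cv=3)
print("mean accuracy:", scores.mean())
```

Surface-level baselines like this need carefully collected, domain-matched data to say anything meaningful, which is one reason the project starts with qualitative work to understand the problem context before moving to quantitative evaluation.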

This project is particularly timely and relevant, as LLMs such as ChatGPT and Gemini can generate text that is virtually indistinguishable from human-authored content. This presents rich opportunities for positive impact, such as levelling the playing field for non-native speakers and offering personalised learning materials.

However, LLMs also come with significant risks, including the intentional and accidental spread of misinformation, job loss, and overreliance on LLM-generated advice. While LLMs have received growing attention in industry and academia, human trust in and reliance on LLM-provided content remain underexplored.