Postgraduate research project

Quantization effects on training and inference in Large Language Models

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

This project aims to systematically investigate the effects of various quantization methods on the training and inference of Large Language Models (LLMs).

Large Language Models (LLMs) have revolutionized natural language processing by achieving unprecedented levels of understanding and generation capabilities. However, their substantial computational and memory demands pose significant challenges for training and deployment, especially in resource-constrained environments. Quantization techniques, which reduce the precision of model parameters and activations, offer a promising avenue to enhance computational efficiency and reduce memory footprint. Despite their potential, the impact of quantization on the training dynamics and inference performance of LLMs is not fully understood.
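
To make the core idea concrete, the minimal sketch below (illustrative only, not part of the project brief; the tensor shape, bit-width, and per-tensor affine scheme are our own assumptions) shows how reducing weights from 32-bit floats to 8-bit integers shrinks memory at the cost of a bounded rounding error per element.

```python
import numpy as np

def affine_quantize(x: np.ndarray, num_bits: int = 8):
    """Per-tensor affine (uniform) quantization: map float values to unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values; the residual is the quantization error."""
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative weight matrix: fp32 -> int8 cuts memory roughly 4x.
w = np.random.randn(4096, 4096).astype(np.float32)
q, s, z = affine_quantize(w, num_bits=8)
w_hat = dequantize(q, s, z)
print("max abs error:", np.abs(w - w_hat).max())
print("memory (MB): fp32 =", w.nbytes / 2**20, " int8 =", q.nbytes / 2**20)
```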

We will explore uniform and non-uniform quantization schemes, analyze their influence on model convergence during training through differential testing and verification, and assess the trade-offs between computational efficiency and performance metrics such as accuracy and generalization via hyper-parameter optimization. Through extensive experiments across different model architectures and datasets, we seek to identify quantization strategies that minimize performance degradation while maximizing efficiency gains.
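
As an illustrative sketch of the uniform versus non-uniform distinction (the 4-bit setting, synthetic Gaussian weights, and quantile-based codebook are assumptions chosen for the example, not the project's prescribed method), the snippet below compares an equally spaced grid with a codebook placed at quantiles of the weight distribution, so that more levels fall where values are dense.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(100_000).astype(np.float32)  # stand-in for LLM weights
num_levels = 16  # 4-bit quantization

# Uniform scheme: equally spaced levels across the value range.
uniform_levels = np.linspace(w.min(), w.max(), num_levels)

# Non-uniform scheme: levels at quantiles of the weight distribution,
# concentrating resolution near zero for Gaussian-like weights.
quantile_levels = np.quantile(w, np.linspace(0.0, 1.0, num_levels))

def quantize_to_levels(x, levels):
    """Snap each value to its nearest codebook level."""
    idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

for name, levels in [("uniform", uniform_levels), ("non-uniform", quantile_levels)]:
    mse = np.mean((w - quantize_to_levels(w, levels)) ** 2)
    print(f"{name:12s} 4-bit MSE: {mse:.6f}")
```

On bell-shaped weight distributions the quantile-based codebook typically achieves lower reconstruction error at the same bit-width, which is the kind of trade-off the project would quantify against accuracy and generalization.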

The outcomes of this research will provide valuable insights into the practical implementation of guaranteed quantization in LLMs, guiding both academia and industry in developing more efficient models. This will facilitate the broader deployment of advanced language technologies, making them accessible and sustainable across diverse computational platforms.