Research theme
Multi-agent Systems and Data Intelligence

Aim
To devise novel, explainable, and robust speech deepfake detection methods that go beyond current and next-generation techniques.

Objectives
- Understand the vulnerabilities of state-of-the-art deepfake detectors when faced with the latest methods for speech synthesis.
- Construct purposely-designed attacks and simulations of beyond-next-generation synthetic speech to further explore these vulnerabilities.
- Devise novel approaches to constructing a deepfake detector that address the identified vulnerabilities, and go beyond binary fake vs. bonafide decisions by providing measures along multiple dimensions.
- Collaborate with HMGCC engineers and scientists on the accuracy, utility, and usability of the detector and its explanations, when applied to their use-cases in the field.

Description
Digitally created or manipulated synthetic speech of remarkably high naturalness is a reality. Beneficial uses include text-to-speech for people who cannot speak, and privacy-preserving identity protection. But speech deepfakes enable deception, from scam calls to large-scale election interference on social media. Current automatic detection methods learn to rely on specific low-level artefacts in the synthetic speech used to train them, making them vulnerable to newly-emerging deepfakes, whilst incorrectly classifying some natural speech as fake.

In this project, you will rethink deepfake detection. For example, can we take inspiration from human judgments? Humans do not need to have previously heard a specific type of deepfake, presumably because we have a strong internal model of natural speech. Can we decompose “naturalness” into useful dimensions that enable explainable deepfake detection, whether fully automated or in human-AI collaboration (e.g., tools for speech forensic scientists)? Better methods of speech evaluation will also advance research in speech generation for beneficial use-cases.
Closing date
Sat, 31/01/2026 - 12:00

Principal Supervisor
Prof Simon King

Assistant Supervisor
Dr Cassia Valentini

Eligibility
Minimum entry qualification: an Honours degree at 2:1 or above (or international equivalent) in a relevant science or engineering discipline, possibly supported by an MSc degree. Further information on English language requirements is available for EU/Overseas applicants.

This project combines elements of audio engineering, linguistics, speech science, forensics, and AI, so we are looking for applicants who have a background in one or more of those areas and the aptitude to acquire the necessary skills in the other areas during the PhD.

Note: we do not require an undergraduate degree in a STEM subject but would, for example, consider an applicant with an undergraduate degree in linguistics plus a Masters in Speech & Language Processing.

Funding
Full funding is available for this position.