Bayesian Machine Learning on Strategically Manipulated Data

Research Theme

Sensor Signal Processing

Aim

To develop a framework for training adversarially robust machine learning models that addresses uncertainty about adversaries’ capabilities and the difficulty of data collection in defence contexts.

Objectives

  • Develop a learning algorithm for training adversarially robust Bayesian deep neural networks in the case where we have a pre-specified threat model.
  • Show how to infer a threat model by analysing the distribution shift caused by deploying a system based on Bayesian neural networks.
  • Use techniques from active learning and experimental design to minimise the amount of data required to accurately infer a threat model (a minimal sketch of this idea follows the list).
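
The sketch below illustrates the active-learning idea in the final objective. It is a minimal, illustrative example only: the entropy-based scoring rule and the `predictive_entropy` and `select_queries` helpers are assumptions made for exposition, not a method prescribed by the project.

```python
# Minimal sketch of uncertainty-based query selection: score candidate probe
# inputs by the entropy of the model's predictive distribution and query the
# most uncertain ones first, so fewer observations are needed to pin down the
# threat model. All names and numbers here are illustrative assumptions.
import numpy as np

def predictive_entropy(probs):
    """Entropy of each row of probs, shape (n_candidates, n_classes)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_queries(probs, k=5):
    """Indices of the k candidates with the most uncertain predictions."""
    return np.argsort(-predictive_entropy(probs))[:k]

# Toy example: the uniform (most uncertain) row is selected first.
probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
print(select_queries(probs, k=1))  # -> [1]
```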

Description

Conventional machine learning methods are not designed to deal with the fog of war. Concerns raised by security and defence researchers have therefore led to the development of adversarial machine learning methods. However, most prior work in this area focuses on very naïve threat models and assumes that the model trainer has perfect knowledge of the adversary’s capabilities. These naïve threat models often assume that the adversary can manipulate only a small number of pixels in an image, or can only add interference of small magnitude.
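
To make the "small-magnitude interference" threat model concrete, the sketch below implements the standard projected gradient descent (PGD) attack, which perturbs an input within an L-infinity ball of radius eps. The radius, step size, and step count are illustrative assumptions, not values taken from this project.

```python
# A minimal PGD attack sketch under an L-infinity threat model. The model is
# any differentiable classifier; eps, step, and n_steps are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, n_steps=10):
    """Return x_adv with ||x_adv - x||_inf <= eps that increases the loss."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient-ascent step, then project back into the
        # eps-ball around the clean input and the valid pixel range.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```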

This project will develop general-purpose Bayesian machine learning algorithms for training robust models. The focus will be on scenarios where the model trainer has minimal information about an adversary’s ability to manipulate sensor observations. We will show how analysing sequences or sets of distribution shifts induced by model deployments enables these unknown capabilities to be inferred. This will enable the deployment of adversarially robust models across a wide array of defence-relevant prediction problems (e.g., classification, detection, tracking) and data modalities (e.g., natural images, EO, SAR, acoustic, RF).
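
As a rough illustration of the first objective, the sketch below combines adversarial training with an MC-dropout network, whose dropout masks act as approximate posterior samples. The architecture, the single-step (FGSM) inner attack, and all hyperparameters are placeholder assumptions, not the method this project will develop.

```python
# Hedged sketch: adversarial training of an approximate-Bayesian (MC dropout)
# classifier under an assumed L-infinity threat model. Everything here is
# illustrative; the project itself will develop more general algorithms.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutNet(nn.Module):
    """Small classifier; keeping dropout active at attack/test time makes each
    forward pass an approximate posterior sample (MC dropout)."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.body(x.flatten(1))

def robust_step(model, opt, x, y, eps=0.1):
    """One adversarial-training step under an L-infinity threat model."""
    model.train()  # dropout stays on, so the attack sees posterior samples
    x_req = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()  # one FGSM step
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()

# Toy usage with random data, purely to show the shape of the training loop.
model = DropoutNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
robust_step(model, opt, x, y)
```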


Closing date: 

Principal Supervisor

Eligibility

Minimum entry qualification - an Honours degree at 2:1 or above (or international equivalent) in a relevant science or engineering discipline, possibly supported by an MSc degree.

Prior knowledge of algorithmic game theory and probability theory would be advantageous.

Further information on English language requirements for EU/Overseas applicants.

Funding

Full funding is available for this position.