Causal AI in understanding medical images

We are seeking two highly motivated PhD students to work at the intersection of academia and industry through the CHAI Hub (Causality in Healthcare AI), a newly founded EPSRC AI hub.

The CHAI Hub is a consortium of six UK universities, the NHS, industry partners, and government bodies, focused on revolutionising healthcare through causal AI.

These PhDs are in collaboration with Canon Medical Research Europe and focus on developing causal AI solutions that rethink how we develop and use medical imaging, contributing to an exciting area where AI meets imaging.

The candidates will join Prof. Sotirios Tsaftaris’ team [1] and CHAI [2], with regular visits to Canon Medical's Edinburgh-based R&D centre [6] (part of the Canon Inc. conglomerate) and potentially visits to other CHAI partners. You’ll gain valuable experience working across both academic and industry environments, with strong mentorship and training opportunities from leading experts.

Projects overview

The appearance of a medical image depends on acquisition factors such as scan modality (e.g. CT, MRI, X-Ray), scanner properties (e.g. detector size and characteristics), scan acquisition parameter choices (e.g. radiation dose), any tissue enhancement techniques (e.g. injected contrast), any phenomena giving rise to artefacts (e.g. metal implants causing streak artefact), and the position and pose of the patient. Thus, even for the same patient at the same timepoint, one image may have a different appearance to another; this variation makes it challenging for both human experts and automatic algorithms to interpret a scan.

Causal theory concerns the problem of modelling variables and their directional relationships, helping to answer questions such as: “If I change (intervene on) X, what will be the (size of the) effect on Y?” Causal models have been demonstrated in computer vision for scene understanding, allowing domain generalisation under changes in generative factors, e.g. camera viewpoint and spatial object configuration [3]. They have also been studied in the context of deep learning on medical images, focussing on data collection, annotation, preprocessing, and learning strategies [4], with some preliminary investigation of robust learning in the presence of causal and domain-related factors [5]. In these projects we aim to model the causal relationships between scanner acquisition parameters, the subsequently acquired images, and the predictions of deep learning models trained or deployed on these images. We will additionally consider patient-related factors where available, such as prior images and clinical information.
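
Purely as an illustrative sketch (the variables dose, lesion, image_feature and prediction are hypothetical stand-ins, not the project's actual model), the Python snippet below shows the flavour of such an intervention question: intervening on an acquisition parameter (here, radiation dose) changes the images and hence the accuracy of a fixed downstream predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural causal model (illustrative only):
#   dose (acquisition parameter) -> image noise level
#   lesion (patient factor)      -> image signal
#   image                        -> prediction of a fixed "trained model"
def sample(n, do_dose=None):
    """Sample n cases; if do_dose is given, intervene on dose (do-operator)."""
    lesion = rng.binomial(1, 0.3, size=n)                  # patient-related factor
    dose = (np.full(n, float(do_dose)) if do_dose is not None
            else rng.uniform(0.5, 1.5, size=n))            # acquisition parameter
    noise = rng.normal(0.0, 1.0 / dose)                    # lower dose -> noisier image
    image_feature = 2.0 * lesion + noise                   # 1-D stand-in for an image
    prediction = (image_feature > 1.0).astype(int)         # stand-in deployed model
    return lesion, prediction

# "If I intervene on dose, what is the effect on the model's accuracy?"
for do_dose in (None, 0.5, 1.5):
    lesion, pred = sample(10_000, do_dose=do_dose)
    label = "observational" if do_dose is None else f"do(dose={do_dose})"
    print(f"{label:>14}: accuracy = {(lesion == pred).mean():.3f}")
```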

Modelling causal relationships will enable simulations that test the robustness of deep learning solutions, and will guide the development of methods to mitigate the effect of these changes, either during training or at deployment. Methods should be designed to learn from retrospective data; there may be the opportunity to acquire new data under new conditions, e.g. new combinations of scan acquisition parameter values.
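
Again only as a hedged illustration (all functions, parameters and dose values below are hypothetical, reusing the toy simulator idea above), the sketch shows one simple mitigation strategy one might study: pooling data simulated under several acquisition settings during training, then comparing robustness against a model trained under a single setting.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(n, dose):
    """Hypothetical simulator: lower dose -> noisier 1-D 'images'."""
    lesion = (torch.rand(n) < 0.3).float()
    x = 2.0 * lesion + torch.randn(n) / dose
    return x.unsqueeze(1), lesion

def train(doses, steps=500):
    """Train a tiny classifier on data pooled over the given dose settings."""
    model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        dose = doses[torch.randint(len(doses), (1,)).item()]
        x, y = simulate(256, dose)
        opt.zero_grad()
        loss_fn(model(x).squeeze(1), y).backward()
        opt.step()
    return model

@torch.no_grad()
def accuracy(model, dose):
    x, y = simulate(5_000, dose)
    return ((model(x).squeeze(1) > 0) == y.bool()).float().mean().item()

narrow = train(doses=[1.5])            # trained under a single acquisition setting
pooled = train(doses=[0.5, 1.0, 1.5])  # trained across simulated dose shifts
for dose in (0.5, 1.0, 1.5):
    print(f"dose={dose}: single-setting acc={accuracy(narrow, dose):.3f}, "
          f"pooled acc={accuracy(pooled, dose):.3f}")
```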

Please note that this advert might close once the positions are filled. Please apply as soon as possible to avoid disappointment.

Further Information: 

[1] VIOS website https://vios.science

[2] CHAI website https://chai.ac.uk

[3] Anciukevicius, T., Fox-Roberts, P., Rosten, E. and Henderson, P., 2022. Unsupervised causal generative understanding of images. Advances in Neural Information Processing Systems, 35, pp. 37037-37054.

[4] Castro, D.C., Walker, I. and Glocker, B., 2020. Causality matters in medical imaging. Nature Communications, 11(1), p. 3673.

[5] Carloni, G., Tsaftaris, S.A. and Colantonio, S., 2024. CROCODILE: Causality Aids Robustness via Contrastive Disentangled LEarning. In Uncertainty for Safe Utilization of Machine Learning in Medical Imaging: 6th International Workshop, UNSURE 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, Proceedings (p. 105). Springer Nature.

[6] Canon Medical Research Europe website https://research.eu.medical.canon

Candidates must apply via the UoE system (see the “click here to apply” link). Ensure that you mention the project title in your statement and in your research proposal, and state your earliest start date. The proposal does not need to be longer than two pages. Candidates may email Prof. Tsaftaris with inquiries (include a CV and mention this position in the email subject), but note that this does not constitute a formal application.

The University of Edinburgh is committed to equality of opportunity for all its staff and students, and promotes a culture of inclusivity. Please see details here: https://www.ed.ac.uk/equality-diversity

Closing Date: 

Friday, January 31, 2025

Principal Supervisor: 

Prof. Sotirios Tsaftaris

Assistant Supervisor: 

Eligibility: 

A first-class (or strong 2:1) honours degree or a Masters degree with Distinction in Engineering, Computing, Mathematics, Physics, or a relevant discipline is required. Candidates with MSc-level (or equivalent) training will be preferred. Demonstrable knowledge of AI (e.g. via coursework, projects, publications, or work experience) and of computational frameworks such as PyTorch or TensorFlow (e.g. via coursework or public repositories) is required. Evidence of prior publications of high calibre (e.g. in computer vision, image analysis or processing, or machine learning) is desired but not essential. The candidate should have a high level of analytical and investigative skill and a strong mathematical background. The ability to work within a team, collaborate, and inspire others is essential; good communication skills and a desire to own the project are sought-after abilities.

Further information on English language requirements for EU/Overseas applicants.

Funding: 

Tuition fees + stipend are available for Home/EU and International students

This position is fully funded for 42 months (3.5 years) and is open to all students with a preference for UK/EU nationals. International tuition fees can be covered for exceptional candidates.

Funding source: Canon Medical.

Further information and other funding options.

Informal Enquiries: 

Professor Sotos Tsaftaris (S.Tsaftaris@ed.ac.uk)