The goal of the project is to recover 3D information from the visual data collected by a drone or similar autonomous agent.

Objectives
- Be able to leverage knowledge from other modalities (e.g., self-driving car datasets) to train models that can reconstruct 3D from a different viewpoint, such as a drone's.
- Be able to deploy these models efficiently on low-compute embedded devices.
- Be able to address scenarios with moving objects (e.g., 4D) and incomplete views.

Description
3D reconstruction models have progressed greatly over the last few years as a result of core advances such as Gaussian splatting. However, most existing work is trained on data from similar viewpoints, such as self-driving cars or indoor cameras. We propose to leverage the knowledge these models embed about what the 3D world looks like to recover 3D information from dramatically different viewpoints, such as drones. This brings interesting challenges, including computation fast enough for on-device deployment, as well as 4D reconstruction and occlusion, among others.

Research theme
Autonomous Sensing Platforms

Principal supervisor
Dr Laura Sevilla
University of Edinburgh, School of Informatics
lsevilla@ed.ac.uk

This article was published on 2025-10-31.
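
For readers unfamiliar with the Gaussian splatting mentioned in the Description, the following is a minimal sketch of the standard rendering model from 3D Gaussian Splatting (Kerbl et al., 2023); the notation is illustrative and not part of the project brief. Each scene element is an anisotropic 3D Gaussian with mean \mu_i, covariance \Sigma_i, opacity o_i, and colour c_i, and a pixel's colour is obtained by front-to-back alpha blending of the Gaussians projected into the image, sorted by depth:

```latex
% Density of Gaussian i at a point x:
G_i(\mathbf{x}) = \exp\!\Bigl(-\tfrac{1}{2}
  (\mathbf{x}-\boldsymbol{\mu}_i)^{\top} \Sigma_i^{-1}
  (\mathbf{x}-\boldsymbol{\mu}_i)\Bigr)

% Pixel colour: front-to-back alpha blending over the depth-sorted
% set N of Gaussians overlapping the pixel, where G'_i denotes the
% projected 2D Gaussian evaluated at the pixel centre:
C = \sum_{i \in N} c_i \, \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j),
\qquad \alpha_i = o_i \, G'_i(\mathbf{x})
```

Because rendering reduces to sorting and blending projected Gaussians rather than evaluating a network per ray, as in NeRF-style models, this representation is a natural fit for the project's objective of efficient deployment on low-compute embedded devices.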