To develop a framework for monitoring and understanding neural network behaviour in production environments through learned analysis of internal activation patterns, enabling reliable deployment of AI systems in critical manufacturing applications.

Objectives

- Develop neural network-based methods to learn and recognise patterns in CNN activation spaces that distinguish normal operation from drift or anomalous behaviour in production environments.
- Create efficient system architectures for real-time capture, storage, and analysis of high-dimensional activation data from production ML models without impacting inference.
- Design and validate synthesis mechanisms that transform multiple monitoring signals (statistical, neural, and manifold-based) into interpretable, actionable insights for operators with varying technical backgrounds.
- Demonstrate the framework's effectiveness through deployment in STMicroelectronics' semiconductor manufacturing environment, establishing best practices for ML observability.

Description

When neural networks make millions of critical decisions in semiconductor manufacturing, how do we know they're working correctly? This project tackles an unsolved problem: monitoring AI systems when we cannot check their answers. During my Master's at STMicroelectronics, I discovered that some production CNNs operate essentially blind: we deploy them and hope for the best.

The situation is further complicated by the fact that ML models are static once deployed, while the real world is anything but. We need a way to monitor the models' performance as their environment and stimuli change.

My research flips this approach by teaching systems to monitor themselves through their internal "thought patterns". I'm developing methods to track neural activation patterns that reveal when AI models drift from normal behaviour. The framework combines statistical detection with neural networks that learn what "healthy" model behaviour looks like.
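To give a flavour of the statistical side of this kind of monitoring, here is a minimal, hypothetical sketch (not the project's actual method): a reference profile of per-feature activation statistics is fitted on "healthy" data, and incoming batches are scored by how far their activations deviate from that profile. All names and thresholds below are illustrative assumptions.

```python
import numpy as np

def fit_reference(acts):
    """Estimate per-feature mean and std of 'healthy' activations."""
    return acts.mean(axis=0), acts.std(axis=0) + 1e-8

def drift_score(batch, mean, std):
    """Mean absolute z-score of a batch's activation means vs the reference."""
    z = np.abs((batch.mean(axis=0) - mean) / std)
    return float(z.mean())

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(1000, 64))      # captured healthy activations
mean, std = fit_reference(ref)

healthy = rng.normal(0.0, 1.0, size=(200, 64))   # in-distribution batch
drifted = rng.normal(0.5, 1.5, size=(200, 64))   # shifted distribution

print(drift_score(healthy, mean, std))  # stays low
print(drift_score(drifted, mean, std))  # rises, so the batch is flagged for review
```

In a real deployment the activations would be captured from intermediate CNN layers during inference, and a learned model (e.g. an autoencoder over activation space) would complement such simple statistics, as the description above suggests.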
An ideal solution combines novel technical mechanisms for this kind of anomaly detection with human factors research to determine how to present this information in a way that appeals to stakeholders and operators.

This fits SPADS' mission of trustworthy AI for critical applications. The work extends to anomaly detection for many types of complex signals produced by interconnected systems; for example, these detection techniques could be adapted to analyse mechanical, biological, and sociological systems, as high-dimensional signal analysis is highly versatile. The goal is fundamentally to transform abstract activation patterns, whether from a manufacturing line or any other source, into clear signals that operators can trust and act upon.

Research Theme

Multi-agent Systems and Data Intelligence
Sensor Signal Processing

Industrial Partner

STMicroelectronics

Principal Supervisor

Prof Rebecca Cheung
University of Edinburgh, School of Engineering
R.Cheung@ed.ac.uk

Assistant Supervisor

Dr Joao Mota
Heriot-Watt University, EPS
j.mota@hw.ac.uk

Assistant Supervisor

Dr Steven McDonagh
University of Edinburgh, School of Engineering
smcdonag@ed.ac.uk

This article was published on 2025-11-12