Our ability to communicate in everyday noisy situations is known to depend on both aural and visual senses. Research has confirmed the 'multi-modal' nature of speech perception, establishing that listeners unconsciously lip-read to improve the intelligibility of speech amid background noise.

The Cognitively-Inspired, 5G-IoT Enabled Multi-Modal Hearing Aids (COG-MHEAR) project will harness these insights to create "transformative, privacy-preserving multi-modal hearing aids", which it is hoped will be ready by 2050. The new hearing aids will seamlessly mimic the unique human cognitive ability to focus on a single talker, effectively filtering out background sounds regardless of their nature. This ambitious project will draw on innovative data science in machine learning and privacy algorithms, while integrating enabling technologies such as the Internet of Things (IoT) and 5G wireless technology.

University of Edinburgh researchers will work on the next generation of low-power, high-performance computing architectures for off-chip and on-chip machine learning, to enable IoT and 5G-connected hearing aids. They will also use revolutionary low-power radio frequency sensing technologies for 'cognitive load sensing', enabling researchers to create intelligent hearing devices that take into account emotional stress in the brain alongside a range of other factors that a user experiences.

Research will be carried out in consultation and collaboration with clinical partners and end-users, including:

- Sonova, the leading global hearing aid manufacturer
- Nokia Bell-Labs, an R&D company which drives wireless research and standardisation
- Alpha Data, a high-performance computing SME
- The Digital Health & Care Institute and The Data Lab, both national innovation centres
- Hearing loss charities Deaf Scotland and Action on Hearing Loss

This article was published on 2024-09-15