Since the 1940s, computing has used separate processing and memory chips, creating a "Von Neumann bottleneck" for AI. CEF is countering this with neuromorphic architectures that merge computation and memory, mimicking the brain to speed up processing and reduce power use.

Ever since the 1940s, computing has relied on separate processing and memory chips: the Von Neumann architecture. AI workloads, however, are extremely memory-intensive. The serial link between processor and memory is so overburdened by them that it has earned its own name: the "Von Neumann bottleneck".

CEF researchers are addressing this issue by developing a new class of "neuromorphic" architectures that enmesh computation and memory in a highly parallel fabric, much like the neurons and synapses in biological brains. The key feature of our neuromorphic designs is that AI workloads can be massively accelerated, and power consumption can drop enormously, because data travels shorter distances on average.
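To make the data-movement argument concrete, here is a minimal back-of-envelope sketch (not taken from the article) estimating how much of the energy in a single matrix-vector multiply, the core operation of neural-network inference, goes to moving operands rather than computing with them. The per-byte and per-operation energy figures are illustrative assumptions only, chosen to show the shape of the trade-off rather than to describe any CEF design.

```python
# Back-of-envelope model of the Von Neumann bottleneck for one
# matrix-vector multiply (the core operation in neural-network inference).
# All energy numbers below are ILLUSTRATIVE ASSUMPTIONS, not measurements.

N = 4096                      # rows and columns of the weight matrix
BYTES_PER_WEIGHT = 1          # assume 8-bit weights

E_COMPUTE_PJ = 0.5            # assumed energy per multiply-accumulate (picojoules)
E_OFFCHIP_PJ_PER_BYTE = 100.0 # assumed energy to fetch one byte over the
                              # processor-memory link (Von Neumann case)
E_LOCAL_PJ_PER_BYTE = 1.0     # assumed energy to read one byte from memory
                              # co-located with compute (neuromorphic case)

macs = N * N                             # multiply-accumulates performed
weight_bytes = N * N * BYTES_PER_WEIGHT  # weight traffic dominates data movement

compute_pj = macs * E_COMPUTE_PJ

von_neumann_pj  = compute_pj + weight_bytes * E_OFFCHIP_PJ_PER_BYTE
neuromorphic_pj = compute_pj + weight_bytes * E_LOCAL_PJ_PER_BYTE

for name, total in [("Von Neumann", von_neumann_pj), ("In-memory", neuromorphic_pj)]:
    moved = total - compute_pj
    print(f"{name:12s}: {total / 1e6:8.2f} uJ total, "
          f"{100 * moved / total:5.1f}% spent on data movement")
```

Under these assumed numbers, nearly all of the energy in the conventional layout goes to shuttling weights across the processor-memory link; shortening that path is exactly what a fabric that enmeshes computation and memory is meant to achieve.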