LUT-Based Neural Networks: The Ideal Architectures for FPGA-Accelerated Inference?

Location: 

Lecture Theatre G.26, Murchison House

Date: 

Tuesday, October 18, 2022 - 11:00 to 13:00

It is our pleasure to invite everyone to a talk by Dr James Davis, to be held at Murchison House, Lecture Theatre G.26, on Tuesday 18th October 2022 at 11:00 am. This is intended primarily as an in-person event; however, those unable to attend at Murchison House can join via Teams.

Abstract

In this talk, I will introduce LUTNet: a bottom-up approach to neural network design targeting FPGAs. A binary neural network (BNN) superset, LUTNet dispenses with the 1-bit multipliers used in BNNs, replacing them with K-input, 1-output Boolean functions that are learnt to classify a training dataset. When K is small (typically 4--6), these functions are directly mappable into the primitive lookup tables (LUTs) of the target device. In their standard configuration, LUTNet-based networks classify at their clock rate (several hundred MHz ≡ several hundred million classifications per second) and are highly energy efficient. In our initial work, my colleagues and I showed that the LUTNet architecture is more expressive than that of BNNs, allowing for heavier pruning, and packs more readily into FPGA resources. Since then, we have added support for layer folding, enabling ImageNet-scale classification in reasonable area, and have taken steps towards the automated determination of inter-LUT connectivity. I will touch on all of these aspects, and will also give a “sneak peek” into some of our currently unpublished work, including the first on-board prototypes of the LUTNet architecture as well as a chip-filling systolic array-based redesign that regularly closes timing at 1 GHz.
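To make the core idea concrete, here is a minimal sketch (not the authors' code; all names are illustrative) of the kind of K-input "LUT neuron" the abstract describes: each neuron stores a learnt truth table of 2^K binary outputs, and inference is a single table lookup, which is exactly what a K-input FPGA LUT primitive implements.

```python
# Illustrative sketch of a K-input LUT "neuron" (hypothetical code, not LUTNet's
# actual implementation). Each neuron stores a learnt truth table of 2^K entries;
# evaluating it is one table lookup, mapping directly onto a K-input FPGA LUT.

def lut_eval(truth_table, inputs):
    """Evaluate a K-input Boolean function given as a 2^K-entry truth table.

    truth_table: list of 2^K values in {0, 1}
    inputs: list of K values in {0, 1}
    """
    assert len(truth_table) == 1 << len(inputs)
    # Pack the K binary inputs into an index into the truth table.
    index = 0
    for bit in inputs:
        index = (index << 1) | bit
    return truth_table[index]

# Example with K = 4 (a typical value for direct mapping to device LUTs).
# A BNN's XNOR "multiplier" is one special case; a learnt 4-input LUT can
# realise any of the 2^16 possible 4-input Boolean functions, which is the
# source of LUTNet's extra expressiveness. Here: the even-parity function.
even_parity = [1 if bin(i).count("1") % 2 == 0 else 0 for i in range(16)]
print(lut_eval(even_parity, [1, 0, 1, 1]))  # three 1s -> odd parity -> prints 0
```

The lookup itself contains no arithmetic, which is why a LUTNet layer costs one device LUT per learnt function and why heavier pruning translates directly into resource savings.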

Biography

James Davis is a Research Fellow in the Department of Electrical and Electronic Engineering’s Circuits and Systems group at Imperial College London. He completed his MEng and PhD studies in the same department in 2011 and 2015, respectively. For over a decade, James has driven algorithmic, circuit design and tooling advancements focussed on the performance, energy efficiency, productivity and reliability gains realisable via field-programmable gate array (FPGA) acceleration. These cover design and automation spanning gate level to high-level synthesis, and touch on topics including radiation hardening and dynamic reconfiguration. His major contributions include KAPow, the first fine-grained power measurement approach for FPGAs; ARCHITECT, the first hardware-based arbitrary-precision iterative solver; and LUTNet, the first neural network architecture specifically designed for FPGA implementation. James’ work regularly appears in the proceedings of the top-tier reconfigurable computing conferences (FPGA, FCCM, FPL and FPT), and has also featured in journals including TC, TVLSI and TRETS. James serves on the technical program committees of the four aforementioned conferences and has received multiple best paper awards.
