Location:
Alrick Classroom 10
Date:
Abstract:
To meet the escalating demand for computing power, mobile edge computing has advanced significantly in recent decades. Today's processors, including multi-core CPUs, graphics processing units (GPUs), and tensor processing units (TPUs), are better equipped to support parallel computing. This capability is crucial for efficiently managing large-scale data and executing complex tasks. For instance, GPUs have played a prominent role in accelerating the training and inference of neural networks thanks to their strength in parallel computing. These computing resources are now widely deployed at multiple levels, from local devices to edge servers and cloud servers, providing computing services that enable ubiquitous access to intelligence anytime and anywhere. In this talk, we will introduce our recent progress on dynamic batching schemes for edge computing, which aim to strike a delicate balance between responsiveness and energy efficiency. We will also discuss how mobility can be exploited to enhance the performance of edge computing under energy constraints.
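To give a flavor of the idea, the following is a minimal, illustrative sketch of a dynamic batching loop for edge inference: requests are gathered until either a batch-size cap or a waiting deadline is reached, trading a small queuing delay for higher energy efficiency per request. The cap MAX_BATCH, the wait budget MAX_WAIT_S, and the placeholder run_inference() are assumptions for illustration only, not the scheme presented in the talk.

    # Illustrative dynamic batching sketch (assumed parameters, not the talk's actual scheme).
    import queue
    import time

    MAX_BATCH = 8        # assumed cap on batch size (larger batches amortize energy per request)
    MAX_WAIT_S = 0.010   # assumed 10 ms wait budget (shorter waits keep responses prompt)

    def run_inference(batch):
        """Placeholder for a batched forward pass on the edge accelerator."""
        return [f"processed:{req}" for req in batch]

    def serve_one_batch(requests: queue.Queue):
        """Collect up to MAX_BATCH requests within MAX_WAIT_S, then run them together."""
        batch = [requests.get()]                      # block until the first request arrives
        deadline = time.monotonic() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break                                 # wait budget exhausted: flush a partial batch
            try:
                batch.append(requests.get(timeout=remaining))
            except queue.Empty:
                break                                 # no further requests arrived in time
        return run_inference(batch)

    if __name__ == "__main__":
        q = queue.Queue()
        for i in range(5):
            q.put(f"request-{i}")
        print(serve_one_batch(q))                     # serves all five requests as one batch

In this sketch, responsiveness is controlled by the wait budget while energy efficiency improves with batch size; the talk addresses how to set this trade-off dynamically.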
Biography:
Sheng Zhou received the B.E. and Ph.D. degrees in electronic engineering from Tsinghua University, Beijing, China, in 2005 and 2011, respectively. In 2010, he was a Visiting Student with the Wireless System Lab, Department of Electrical Engineering, Stanford University, Stanford, CA, USA. From 2014 to 2015, he was a Visiting Researcher with the Central Research Lab, Hitachi Ltd., Japan. He is currently an Associate Professor with the Department of Electronic Engineering, Tsinghua University. His research interests include cross-layer design for multiple antenna systems, mobile edge computing, vehicular networks, and green wireless communications. He received the IEEE ComSoc Asia–Pacific Board Outstanding Young Researcher Award in 2017 and the IEEE ComSoc Wireless Communications Technical Committee Outstanding Young Researcher Award in 2020.