Title: DeepBurning2.0: An Automatic End-to-end Neural Network Acceleration System on FPGAs
Lecturer: Dr. Cheng Liu
Host: Dr. Junying Chen
Time: 14:00-16:00, November 5, 2019
Venue: Academic Report Hall in B8 Building, South Campus of SCUT
Abstract:
Deep learning neural networks have shown superior performance and have been adopted across a wide range of application domains, making them key workloads in modern computing systems. Compared to general-purpose processors such as CPUs and GPUs, customized deep learning accelerators (DLAs) have been demonstrated to be more efficient in terms of performance, latency, and energy. However, DLA design remains challenging. 1) New neural network models are continuously being proposed, and their new features pose great challenges to DLA architecture design, which evolves much more slowly. 2) Different applications may have diverse requirements on latency, throughput, and energy efficiency, so it is difficult to design a single DLA architecture that fits all of them. To address these challenges, we developed an automatic DLA design framework named DeepBurning that targets FPGAs. DeepBurning 1.0 mainly offers a set of independent neural network operators and generates the control sequences according to the given neural network architecture. DeepBurning 2.0 defines a set of instructions that covers the various fine-grained computations in neural networks and offers multiple configurable DLA templates to enable rapid FPGA-based DLA generation. In addition, we developed a full-stack compilation toolchain, including model quantization and pruning, model compilation, and instruction generation, to provide an end-to-end deep learning acceleration solution on FPGAs. Currently, the framework supports more than 30 mainstream neural network models and allows flexible customization. The generated accelerators can be seamlessly deployed on commodity Xilinx FPGAs.
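To give a concrete flavor of the end-to-end flow described above, the short Python sketch below illustrates what the quantization, pruning, and instruction-generation stages might look like for a single fully-connected layer. It is only a minimal illustration under loose assumptions; the function names and the toy instruction set are hypothetical placeholders, not DeepBurning's actual interface or instruction set.

# Hypothetical sketch of a quantize -> prune -> compile flow for one layer.
import numpy as np

def quantize_int8(w):
    # Uniform per-tensor quantization of float weights to int8.
    scale = max(float(np.abs(w).max()), 1e-8) / 127.0
    return np.round(w / scale).astype(np.int8), scale

def prune(w, sparsity=0.5):
    # Magnitude pruning: zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def compile_layer(name, w_q, scale):
    # Emit a toy instruction stream that an accelerator template could consume.
    rows, cols = w_q.shape
    return [
        ("LOAD_WEIGHTS", name, rows, cols),
        ("SET_SCALE", name, scale),
        ("MATMUL", name),
        ("STORE_RESULT", name),
    ]

# Example: one 4x8 fully-connected layer.
weights = np.random.randn(4, 8).astype(np.float32)
w_q, scale = quantize_int8(prune(weights))
for instr in compile_layer("fc1", w_q, scale):
    print(instr)

In a real toolchain these stages would operate on a full network graph and target the chosen DLA template's instruction set, but the staging order (compress the model first, then lower it to accelerator instructions) is the same.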
Short bio of Lecturer:
Cheng Liu is an associate professor at the Institute of Computing Technology, Chinese Academy of Sciences. He received his B.E. and M.E. degrees from Harbin Institute of Technology in 2007 and 2009, respectively, and graduated from The University of Hong Kong in 2016. He then worked as a research fellow at the National University of Singapore. Afterwards, he joined ThingsLab (https://lab.things.ac.cn/) at the Institute of Computing Technology in June 2018. His research mainly covers FPGA-based reconfigurable computing, in-storage computing, and customized hardware acceleration for large-scale graph processing and deep learning. He has published around 15 papers at major FPGA and CAD conferences such as FPL, FPT, and ICCAD.