
Scopus EID: 2-s2.0-85076372502


GEMM-Based Quantized Neural Network FPGA Accelerator Design

Sudrajat M.R.D., Adiono T., Syafalni I.

Microelectronics Center, Bandung Institute of Technology, Bandung, Indonesia

Abstract

© 2019 IEEE. In this study, we explore neural network FPGA acceleration based on accelerating General Matrix Multiplication (GEMM). GEMM acceleration allows a regularized, modular implementation of the accelerator design and provides the benefit of scalability. GEMM-based designs also offer a degree of functional flexibility, which is a key benefit given the highly dynamic architectural developments in deep learning algorithms. We quantify the theoretical performance model and trade-offs of a GEMM accelerator along with an exploration of the design space. Moreover, we propose an accelerator design that exploits 8-bit quantization to increase effective bandwidth while preserving model accuracy, using the FPGA for model parallelization and data re-use to achieve high-performance, low-latency neural network inference. Lastly, we test and evaluate our design on the MNIST dataset. The proposed method is useful for optimizing hardware area in deep learning systems without sacrificing performance.

Author keywords

Accelerator design, FPGA accelerators, Matrix multiplication, Modular implementation, Network inference, Parallelization, Quantization, Theoretical performance

Indexed keywords

General Matrix Multiplication, Neural Network, Quantization

DOI

https://doi.org/10.1109/ISESD.2019.8909538
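As an illustration of the kind of computation such an accelerator performs, the sketch below shows a quantized GEMM in plain C: int8 inputs, int32 accumulation, and a requantization step back to int8. It is a minimal software model, not the paper's hardware design; the function and parameter names (qgemm_i8, a_zero, b_zero, shift) are hypothetical, and a power-of-two shift stands in for the per-layer rescale factor a calibrated inference engine would use.

```c
/*
 * Minimal sketch of an 8-bit quantized GEMM: C = A x B with int8
 * operands, int32 accumulation, and requantization back to int8.
 * Names and the power-of-two rescale are illustrative assumptions,
 * not taken from the paper's accelerator.
 */
#include <stdint.h>
#include <stdio.h>

/* Clamp a 32-bit value into the signed 8-bit range. */
static int8_t saturate_i8(int32_t v) {
    if (v > 127)  return 127;
    if (v < -128) return -128;
    return (int8_t)v;
}

/* Quantized GEMM: A is M x K, B is K x N, C is M x N (row-major). */
void qgemm_i8(int M, int N, int K,
              const int8_t *A, int32_t a_zero,
              const int8_t *B, int32_t b_zero,
              int8_t *C, int shift) {
    for (int m = 0; m < M; m++) {
        for (int n = 0; n < N; n++) {
            int32_t acc = 0;  /* wide accumulator avoids overflow */
            for (int k = 0; k < K; k++) {
                int32_t a = (int32_t)A[m * K + k] - a_zero;
                int32_t b = (int32_t)B[k * N + n] - b_zero;
                acc += a * b;
            }
            /* Requantize: scale the int32 sum back into int8. */
            C[m * N + n] = saturate_i8(acc >> shift);
        }
    }
}

int main(void) {
    /* Toy example: a 2x3 activation tile times a 3x2 weight tile. */
    const int8_t A[6] = { 1, 2, 3, 4, 5, 6 };
    const int8_t B[6] = { 7, 8, 9, 10, 11, 12 };
    int8_t C[4];

    qgemm_i8(2, 2, 3, A, 0, B, 0, C, 0);

    for (int i = 0; i < 4; i++)
        printf("%d ", C[i]);  /* expected: 58 64 127 127 (last two saturate) */
    printf("\n");
    return 0;
}
```

In a hardware realization the two outer loops would typically be tiled and the inner multiply-accumulate loop unrolled across DSP blocks, which is one way the 8-bit format can raise effective memory bandwidth and reduce area relative to wider fixed-point or floating-point datapaths.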