Complexity-Aware Layer-Wise Mixed-Precision Schemes With SQNR-Based Fast Analysis
- Subject (keywords): Hardware, Quantization (signal), Artificial neural networks, Sensitivity, Computational modeling, Training, Q-factor, Deep neural network (DNN), mixed-precision, signal to quantization noise ratio (SQNR), complexity-awareness
- Subject (other): Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications
- Description (general): [Kim, Hana; Kim, Ji-Hoon] Ewha Womans Univ, Dept Elect & Elect Engn, Seoul 03760, South Korea; [Kim, Hana; Kim, Ji-Hoon] Ewha Womans Univ, Grad Program Smart Factory, Seoul 03760, South Korea; [Eun, Hyun; Choi, Jung Hwan] OPENEDGES Technol Inc, Seoul 03063, South Korea
- Indexed in: SCIE, SCOPUS
- OA type: Gold Open Access
- Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Publication year: 2023
- Publication type: Journal
- URI: http://www.dcollection.net/handler/ewha/000000213727
- Language: English
- Published As https://doi.org/10.1109/ACCESS.2023.3325402
Abstract
Recently, deep neural network (DNN) acceleration has become critical for hardware systems ranging from mobile/edge devices to high-performance data centers. In particular, for on-device AI, there have been many studies on reducing hardware numerical precision given the limited hardware resources of mobile/edge devices. Although layer-wise mixed-precision reduces computational complexity, finding a well-balanced layer-wise precision scheme is not straightforward: determining the optimal precision for each layer requires time-consuming repetitive experiments, and model accuracy, the fundamental measure of deep learning quality, must be considered as well. In this paper, we propose a layer-wise mixed-precision scheme that significantly reduces the time required to determine the optimal hardware numerical precision through Signal-to-Quantization-Noise Ratio (SQNR)-based analysis. In addition, the proposed scheme can take hardware complexity into consideration in terms of the number of operations (OPs) or the weight memory requirement of each layer. The proposed method can be applied directly at inference, meaning that users can utilize well-trained neural network models without additional training or hardware units. With the proposed SQNR-based analysis, for the SSDlite and YOLOv2 networks, the analysis time required for layer-wise precision determination is reduced by more than 95% compared to conventional mean Average Precision (mAP)-based analysis. Moreover, with the proposed complexity-aware schemes, the number of OPs and the weight memory requirement can be reduced by up to 86.14% and 78.03%, respectively, for SSDlite, and by up to 51.93% and 50.62%, respectively, for YOLOv2, with negligible model accuracy degradation.
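To make the core quantity concrete, the sketch below computes the SQNR of a layer's weights under symmetric uniform quantization at a given bit-width. This is a minimal illustration of the general SQNR definition (signal power over quantization-noise power, in dB), not the paper's exact analysis pipeline; the max-absolute-value scaling and the Gaussian stand-in weights are assumptions for the example.

```python
import numpy as np

def sqnr_db(weights, num_bits):
    """SQNR (dB) of symmetric uniform quantization at a given bit-width.

    Illustrative only: scale is chosen from the max absolute weight,
    which is one common (but not the only) calibration choice.
    """
    w = np.asarray(weights, dtype=np.float64)
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax
    w_q = np.clip(np.round(w / scale), -qmax - 1, qmax) * scale
    noise = w - w_q
    return 10.0 * np.log10(np.sum(w ** 2) / np.sum(noise ** 2))

# Stand-in for one layer's weights (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
layer = rng.normal(size=10_000)
for bits in (4, 6, 8):
    print(f"{bits}-bit SQNR: {sqnr_db(layer, bits):.1f} dB")
```

Evaluating such a per-layer metric is fast because it needs no forward passes over a validation set, which is the intuition behind the reported >95% reduction in analysis time versus mAP-based precision search: layers whose SQNR stays high at low bit-widths are candidates for aggressive precision reduction.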