Quantization error-based regularization for hardware-aware neural network training

Author(s)

    • Hirose Kazutoshi
    • Uematsu Ryota
    • Ando Kota
    • Ueyoshi Kodai
    • Ikebe Masayuki
    • Asai Tetsuya
    • Motomura Masato

    All authors: Graduate School of Information Science and Technology, Hokkaido University

Abstract

We propose "QER", a novel regularization strategy for hardware-aware neural network training. Although quantized neural networks reduce computation power and resource consumption, they also degrade accuracy because of quantization errors in the numerical representation, defined as the differences between the original values and their quantized counterparts. QER addresses this problem by appending to the loss function an additional regularization term based on the quantization errors of the weights. This term forces training to reduce the quantization errors of the weights along with the original loss. We evaluate our method on MNIST with a simple neural network model. The evaluation results show that the proposed approach achieves higher accuracy than the standard training approach with quantized forward propagation.
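The abstract describes the regularizer only in prose. A minimal PyTorch sketch of the idea follows; the uniform symmetric quantizer, the penalty weight `lam`, the 8-bit width, and the toy MLP are illustrative assumptions, not the paper's actual settings.

```python
import torch
import torch.nn.functional as F

def quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Uniform symmetric quantizer (illustrative; the paper does not
    # specify this exact scheme).
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax
    if scale == 0:
        return w.clone()
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def qer_loss(model: torch.nn.Module, task_loss: torch.Tensor,
             lam: float = 1e-3, num_bits: int = 8) -> torch.Tensor:
    # Penalize the squared quantization error of every parameter tensor.
    # Detaching the quantized target makes the penalty gradient
    # 2 * lam * (w - quantize(w)), which pulls each weight toward its
    # nearest quantization level while the task loss is minimized.
    reg = sum(((w - quantize(w.detach(), num_bits)) ** 2).sum()
              for w in model.parameters())
    return task_loss + lam * reg

# Usage: a tiny MLP for MNIST-style 28x28 inputs (hypothetical model).
model = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(784, 128),
    torch.nn.ReLU(), torch.nn.Linear(128, 10))
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
loss = qer_loss(model, F.cross_entropy(model(x), y))
loss.backward()
```

After training with this combined objective, the weights sit close to representable quantization levels, so quantizing them for inference loses less accuracy than quantizing a conventionally trained network.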

Journal

  • Nonlinear Theory and Its Applications, IEICE, 9(4), 453-465, 2018

    The Institute of Electronics, Information and Communication Engineers
