Optimized Quantization for Convolutional Deep Neural Networks in Federated Learning

Abstract

Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that deep network compression using data is not possible. Deep network compression is very important because it enables inference even on devices with low capacity. In this paper, we propose a new quantization method that significantly reduces FPROPS (floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained by the general learning loss and updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL is a method of learning deep networks and quantization while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL is feasible for most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
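The abstract describes quantization parameters that are trained by the ordinary task loss and updated together with the weights during federated learning. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration, under assumed details (a learnable step size with a straight-through estimator, 4-bit symmetric quantization, plain FedAvg aggregation, and the hypothetical helpers LearnedQuantizer and fed_avg), of how such jointly trained quantization could look. Note that the server only receives parameters, including the quantization step sizes, so no client data leaves the devices.

    # Minimal sketch (assumed, not the authors' code): learnable quantization
    # step trained by the task loss and averaged with the weights in FedAvg.
    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnedQuantizer(nn.Module):
        """Fake-quantize a tensor with a trainable step size (straight-through estimator)."""
        def __init__(self, bits=4, init_step=0.1):
            super().__init__()
            self.levels = 2 ** (bits - 1) - 1          # symmetric signed range, e.g. [-7, 7] for 4 bits
            self.step = nn.Parameter(torch.tensor(init_step))

        def forward(self, x):
            q = torch.clamp(x / self.step, -self.levels, self.levels)
            q_rounded = torch.round(q)
            # Straight-through estimator: rounding acts as identity in the backward pass,
            # so the step size receives gradients from the ordinary task loss.
            q = q + (q_rounded - q).detach()
            return q * self.step

    class QuantConvNet(nn.Module):
        def __init__(self, bits=4):
            super().__init__()
            self.conv = nn.Conv2d(1, 8, 3, padding=1)
            self.fc = nn.Linear(8 * 28 * 28, 10)
            self.wq = LearnedQuantizer(bits)           # weight quantizer with a trainable step

        def forward(self, x):
            w = self.wq(self.conv.weight)              # use quantized weights in the forward pass
            x = F.relu(F.conv2d(x, w, self.conv.bias, padding=1))
            return self.fc(x.flatten(1))

    def fed_avg(models):
        """Average all parameters, including quantization step sizes, across clients."""
        avg = copy.deepcopy(models[0].state_dict())
        for key in avg:
            avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
        return avg

    # One illustrative federated round with two clients on random data.
    clients = [QuantConvNet() for _ in range(2)]
    for model in clients:
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
        loss = F.cross_entropy(model(x), y)            # quantizer step trained by the task loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    global_state = fed_avg(clients)                    # server aggregates without seeing any data
    for model in clients:
        model.load_state_dict(global_state)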

Published in

  • IEICE Proceeding Series

    IEICE Proceeding Series 62, 150-154, 2020-09-22

    The Institute of Electronics, Information and Communication Engineers

Details

  • CRID
    1390286981361377664
  • NII Article ID
    230000011980
  • DOI
    10.34385/proc.62.ts7-3
  • ISSN
    21885079
  • Text language code
    en
  • Data source type
    • JaLC
    • CiNii Articles
  • Abstract license flag
    Not available
