Block-Based Neural Network Optimization with Manageable Problem Space

Abstract

In this paper, a simple method based on a Genetic Algorithm (GA) is proposed to evolve the Block-Based Neural Network (BbNN) model. A BbNN consists of a 2-D array of memory-based modular component NNs with flexible structures and internal configurations that can be implemented in reconfigurable hardware such as a field-programmable gate array (FPGA). The network structure and weights are encoded in bit strings and globally optimized using genetic operators. The Asynchronous BbNN (ABbNN), a new BbNN model, achieves high performance by exploiting parallel computation and a pipelined architecture. The ABbNN's operating frequency remains stable at all network scales, whereas the conventional BbNN's decreases with network size. However, optimization by the genetic algorithm requires more iterations to find a solution as the problem space grows, and memory access during GA operation is one of the causes of degraded performance. An ABbNN optimized with the proposed evolutionary algorithm is applied to general classifiers to verify its effectiveness as the problem space increases. The proposed method is confirmed by experimental investigations and compared with the conventional genetic algorithm.
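
The sketch below is only a minimal illustration of the bit-string encoding and genetic operators mentioned in the abstract, not the paper's actual algorithm or BbNN model: the genome layout (STRUCT_BITS, WEIGHT_BITS, N_WEIGHTS), the decoding scheme, and the toy fitness function are all hypothetical assumptions introduced for demonstration.

```python
# Minimal GA sketch: a bit-string genome encodes structure flags and weights,
# evolved with crossover and mutation. The decoding and fitness are toy
# stand-ins, not the BbNN/ABbNN model described in the paper.
import random

STRUCT_BITS = 8    # hypothetical: one connection-enable flag per link
WEIGHT_BITS = 4    # hypothetical: 4-bit offset-binary weight per link
N_WEIGHTS = 8
GENOME_LEN = STRUCT_BITS + N_WEIGHTS * WEIGHT_BITS

def decode(genome):
    """Split the bit string into structure flags and small integer weights."""
    flags = genome[:STRUCT_BITS]
    weights = []
    for i in range(N_WEIGHTS):
        start = STRUCT_BITS + i * WEIGHT_BITS
        chunk = genome[start:start + WEIGHT_BITS]
        # offset-binary decoding: 0..15 mapped to -8..7
        weights.append(int("".join(map(str, chunk)), 2) - 2 ** (WEIGHT_BITS - 1))
    return flags, weights

def fitness(genome):
    """Toy fitness: count enabled connections with non-zero weights."""
    flags, weights = decode(genome)
    return sum(1 for f, w in zip(flags, weights) if f == 1 and w != 0)

def crossover(a, b):
    """Single-point crossover of two parent bit strings."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    """Flip each bit independently with the given probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                     # keep the better half
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```

In this simplified setting the structure bits and weight bits are optimized jointly in a single chromosome, mirroring the abstract's description of encoding both the network structure and the weights in one bit string.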

Journal

  • IEEJ Transactions on Electronics, Information and Systems

    IEEJ Transactions on Electronics, Information and Systems 140(1), 68-74, 2020

    The Institute of Electrical Engineers of Japan

Codes

  • NII Article ID (NAID)
    130007779188
  • NII NACSIS-CAT ID (NCID)
    AN10065950
  • Text Lang
    ENG
  • ISSN
    0385-4221
  • NDL Article ID
    030204148
  • NDL Call No.
    Z16-795
  • Data Source
NDL, J-STAGE