Rectifying Transformation Networks for Transformation-Invariant Representations with Power Law

  • FAN Chunxiao
    School of Electronic Engineering, Beijing University of Posts and Telecommunications
  • LI Yang
    School of Electronic Engineering, Beijing University of Posts and Telecommunications; Department of Electronic Engineering, Tsinghua University
  • TIAN Lei
    School of Electronic Engineering, Beijing University of Posts and Telecommunications
  • LI Yong
    School of Electronic Engineering, Beijing University of Posts and Telecommunications

Abstract

<p>This letter proposes a representation learning framework for convolutional neural networks (ConvNets) that aims to rectify and improve the feature representations learned by existing transformation-invariant methods. Existing methods typically encode feature representations invariant to a wide range of spatial transformations by augmenting the input images or by transforming intermediate layers. Unfortunately, simply transforming the intermediate feature maps can produce unpredictable representations that fail to describe the transformed features of the inputs. The reason is that convolution and geometric transformation do not commute in general, so exchanging the order of the two operations introduces a transformation error. This error can harm the performance of classification networks. Motivated by the fractal statistics of natural images, this letter proposes a rectifying transformation operator that minimizes this error. The proposed operator is differentiable and can be inserted into a convolutional architecture without any modification to the optimization algorithm. We show that the rectified feature representations yield better classification performance on two benchmarks.</p>
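The non-commutativity claimed in the abstract can be checked numerically. The sketch below (illustrative only, not the authors' rectifying operator; the `conv2d` helper and the choice of a 90° rotation via `np.rot90` are assumptions made for the demonstration) shows that rotating an input and then convolving differs from convolving and then rotating the feature map, unless the kernel is rotated as well:

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation, as used in ConvNet layers."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))   # toy input "image"
k = rng.standard_normal((3, 3))   # generic (non-symmetric) kernel

a = conv2d(np.rot90(x), k)             # transform the input, then convolve
b = np.rot90(conv2d(x, k))             # convolve, then transform the feature map
c = conv2d(np.rot90(x), np.rot90(k))   # rotate the kernel along with the input

print(np.allclose(a, b))  # False: the two orders disagree (transformation error)
print(np.allclose(b, c))  # True: equality is restored when the kernel co-rotates
```

The residual difference between `a` and `b` is the kind of transformation error the letter's rectifying operator is designed to minimize; here it vanishes only in the special case where the kernel is transformed consistently with the input.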
