Bag-of-Bounding-Boxes: An Unsupervised Approach for Object-Level View Image Retrieval


Abstract

We propose a novel bag-of-words (BoW) framework for building and retrieving a compact database of view images for use in robotic localization, mapping, and SLAM applications. Unlike most previous methods, ours does not describe an image by its many small local features (e.g., bag-of-SIFT-features). Instead, the proposed bag-of-bounding-boxes (BoBB) approach describes an image by a smaller number of larger object patterns, which yields a semantic and compact image descriptor. To make the view retrieval system more practical and autonomous, object patterns are discovered in an unsupervised manner, via common pattern discovery (CPD) between the input image and known reference images, without requiring a pre-trained object detector. Moreover, our CPD subtask does not rely on accurate image segmentation and handles scale variations by exploiting a recently developed CPD technique, spatial random partition. Following traditional bounding-box-based object annotation and knowledge transfer, we compactly describe an image in BoBB form. Using a slightly modified inverted file system, we efficiently index and search the BoBB descriptors. Experiments on the publicly available "RobotCar" dataset show that the proposed method achieves accurate object-level view image retrieval with a highly compact image descriptor, e.g., 20 words per image.
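For concreteness, the sketch below illustrates how an inverted file can index and search such object-level descriptors. It is a minimal illustration under assumptions, not the authors' implementation: the `InvertedFileIndex` class, the integer object-word IDs, the toy image names, and the tf-idf-style scoring are all hypothetical choices made for the example, and the paper's "slightly modified" inverted file is not reproduced here.

```python
from collections import defaultdict
import math

# Minimal sketch (assumed design, not the paper's code): each image is
# represented by a small multiset of object-word IDs, one word per
# discovered bounding box, and an inverted file maps each word to the
# images containing it.

class InvertedFileIndex:
    def __init__(self):
        self.postings = defaultdict(list)  # word id -> [(image id, term freq)]
        self.doc_len = {}                  # image id -> descriptor length

    def add_image(self, image_id, bobb_words):
        """Index one image given its BoBB descriptor (a list of word IDs)."""
        tf = defaultdict(int)
        for w in bobb_words:
            tf[w] += 1
        for w, f in tf.items():
            self.postings[w].append((image_id, f))
        self.doc_len[image_id] = len(bobb_words)

    def query(self, bobb_words, top_k=5):
        """Rank database images against a query descriptor (tf-idf-style score)."""
        n_docs = len(self.doc_len)
        scores = defaultdict(float)
        for w in set(bobb_words):
            plist = self.postings.get(w)
            if not plist:
                continue
            idf = math.log(n_docs / len(plist))  # rarer object words weigh more
            for image_id, f in plist:
                scores[image_id] += f * idf / self.doc_len[image_id]
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

# Toy usage: descriptors of roughly the reported size (~20 words per image).
index = InvertedFileIndex()
index.add_image("view_001", [3, 3, 17, 42, 42, 42, 99])
index.add_image("view_002", [5, 17, 17, 88])
print(index.query([42, 17, 3]))  # ranked (image id, score) pairs
```

Because each image contributes only a handful of object-level words rather than thousands of local features, the posting lists stay short, which is what makes a ~20-words-per-image descriptor cheap to index and search.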
