Verification of photo-model-based pose estimation and handling of unique clothes under illumination varieties

  • PHYU Khaing Win
    Graduate School of Natural Science and Technology, Division of Mechanical and Systems Engineering, Okayama University
  • FUNAKUBO Ryuki
    Graduate School of Natural Science and Technology, Division of Mechanical and Systems Engineering, Okayama University
  • HAGIWARA Ryota
    Graduate School of Natural Science and Technology, Division of Mechanical and Systems Engineering, Okayama University
  • TIAN Hongzhi
    Graduate School of Natural Science and Technology, Division of Mechanical and Systems Engineering, Okayama University
  • MINAMI Mamoru
    Graduate School of Natural Science and Technology, Division of Mechanical and Systems Engineering, Okayama University

Abstract

Humans can easily recognize and handle (pick and place) objects with a variety of shapes, colors, and sizes, and human eyes adapt to various light environments with a certain tolerance. However, it is difficult for robots to recognize deformable objects such as cloth and string, especially when the object is unique. In addition, robots with vision sensors (cameras) have had difficulty accurately detecting and handling objects under varying light environments. This paper proposes a cloth handling system that recognizes a unique cloth appearing in front of a robot by a photo-model-based approach. The photo-model-based approach is adopted because the photo-model can be made at once simply by taking a photo of the unique cloth. In the proposed cloth pose estimation method, a photo-model projected from 3D to 2D is used, so the system does not require the object's size, shape, design, color, or weight to be defined. The cloth is detected through a model-based matching method and a Genetic Algorithm (GA). The handling performance of the proposed method with dual-eye cameras has been verified, showing that the proposed system can recognize and handle the unique cloth under illumination ranging from 100 lx to 1300 lx. In addition, the 3D recognition and handling accuracy have been confirmed to be practically effective through recognition/handling experiments under different light conditions.
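The abstract describes the core idea (project a photo-model at candidate poses and let a GA search for the pose that best matches the camera image) without implementation detail. The sketch below is a minimal, hypothetical 2D illustration of that matching-plus-GA loop and is not the authors' system: it uses a single camera image, an OpenCV affine warp in place of the paper's 3D-to-2D projection onto dual-eye cameras, a normalized-correlation fitness, and a simple elitist mutation-only GA. All function names and parameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): each GA chromosome
# encodes a candidate pose (x, y, theta); fitness is the correlation between
# the camera image and the photo-model warped to that pose.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def project_model(model, pose, scene_shape):
    """Warp the photo-model into the scene frame at pose = (x, y, theta_deg)."""
    x, y, theta = pose
    h, w = model.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)
    M[0, 2] += x - w / 2   # shift model center to candidate (x, y)
    M[1, 2] += y - h / 2
    return cv2.warpAffine(model, M, scene_shape[::-1])

def fitness(scene, model, pose):
    """Higher when the projected model overlaps the cloth region in the scene."""
    proj = project_model(model, pose, scene.shape)
    mask = proj > 0                      # only compare where the model lands
    if mask.sum() == 0:
        return -1.0
    a = scene[mask].astype(float)
    b = proj[mask].astype(float)
    if a.std() == 0 or b.std() == 0:
        return -1.0
    return float(np.corrcoef(a, b)[0, 1])

def ga_search(scene, model, pop=60, gens=40):
    """Elitist mutation-only GA over (x, y, theta); returns best pose and score."""
    h, w = scene.shape
    P = np.column_stack([rng.uniform(0, w, pop),
                         rng.uniform(0, h, pop),
                         rng.uniform(-180, 180, pop)])
    for _ in range(gens):
        f = np.array([fitness(scene, model, p) for p in P])
        elite = P[np.argsort(f)[-pop // 2:]]                       # keep better half
        children = elite + rng.normal(0, [5.0, 5.0, 3.0], elite.shape)  # mutate
        P = np.vstack([elite, children])
    f = np.array([fitness(scene, model, p) for p in P])
    return P[np.argmax(f)], float(f.max())
```

With a grayscale camera frame `scene` and a photo-model `model` whose background pixels are zero, `ga_search(scene, model)` returns the best (x, y, θ) found and its correlation score; the paper's method instead evaluates a 3D pose against both images of the dual-eye camera system.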
