End-to-End Learning based on RGB-D Images for Mobile Robot Motion Planning

Bibliographic Information

Alternative Title
  • RGB-D画像を用いたEnd-to-End学習による移動ロボットの動作計画法

Abstract

Collision avoidance is an essential capability for mobile robots, and model-based motion planners have been proposed to address it. A robot driven by such planners exhibits continuous collision avoidance behavior. Through real-robot experiments with human operators, however, we have observed that a mobile robot can avoid collisions by switching among discrete behaviors, such as going straight, turning right, and turning left. Moreover, the operators change the robot's behavior depending on whether the obstacle ahead is static or dynamic. In this paper, we therefore propose an end-to-end motion planner based on human operation. Given a stereo camera sensor, a convolutional neural network (CNN) is trained to solve a classification problem, using RGB-D input images paired with annotated discrete behaviors. In the experiments, we show that the robot can determine a discrete control output from the sensor input depending on the obstacle.
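The abstract frames motion planning as image classification: a 4-channel RGB-D frame in, one of a few discrete behaviors out. The following is a minimal sketch of such a classifier, not the authors' implementation; the use of PyTorch, the layer sizes, the 96x96 input resolution, and the names RGBDBehaviorNet and BEHAVIORS are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's network) of a CNN
# that maps a stacked RGB-D image to a discrete behavior class.
import torch
import torch.nn as nn

BEHAVIORS = ["go_straight", "turn_right", "turn_left"]  # discrete action labels

class RGBDBehaviorNet(nn.Module):
    def __init__(self, num_classes: int = len(BEHAVIORS)):
        super().__init__()
        # Convolutional feature extractor over the 4 stacked RGB-D channels.
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Linear head producing one score per discrete behavior.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # rgbd: (batch, 4, H, W) -- RGB channels stacked with the depth map.
        x = self.features(rgbd).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    net = RGBDBehaviorNet()
    dummy = torch.randn(1, 4, 96, 96)  # one synthetic RGB-D frame
    behavior = BEHAVIORS[net(dummy).argmax(dim=1).item()]  # highest-scoring action
    print(behavior)
```

Training such a model would pair each RGB-D frame with the operator-annotated behavior label and minimize a standard cross-entropy loss, which matches the supervised classification setup described in the abstract.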

Journal

Cited by (1)
