Presentation of Human Action Information via Avatar: From the Viewpoint of Avatar-Based Communication

HANDLE Open Access
  • Arita, Daisaku
    Department of Intelligent Systems, Faculty of Information Science and Electrical Engineering, Kyushu University
  • Taniguchi, Rin-ichiro
    Department of Intelligent Systems, Faculty of Information Science and Electrical Engineering, Kyushu University

Abstract

This paper describes techniques for presenting human action information in an avatar-based interaction system, using real-time motion sensing and human action symbolization. Avatar-based interaction systems with computer-generated virtual environments have difficulty acquiring enough information about the user to represent him/her as if he/she were actually in the environment. This difficulty stems mainly from the high degrees of freedom of the human body and results in a lack of reality. Since it is almost impossible to acquire all the detailed information about human actions or activities, we instead recognize, or estimate, what kind of actions have occurred from sensed human motion information and other available information, and then re-generate detailed and natural actions from the estimated results. In this paper, we describe our approach, Real-time Human Proxy, focusing on the representation of human actions, and present experimental results.
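The pipeline the abstract describes, sensing coarse motion, symbolizing it into an estimated action, and re-generating a detailed motion for the avatar, can be illustrated with a minimal sketch. The class and function names below (MotionSample, symbolize_action, regenerate_action) and the toy velocity rule are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: sensed motion -> symbolized action -> regenerated avatar motion.
from dataclasses import dataclass
from typing import List


@dataclass
class MotionSample:
    """Coarse, possibly noisy joint positions from real-time motion sensing."""
    timestamp: float
    joint_positions: List[float]  # flattened (x, y, z) per tracked joint


def symbolize_action(samples: List[MotionSample]) -> str:
    """Estimate which action symbol best explains the sensed motion.

    A real system would use a trained recognizer; here a toy rule on the
    average vertical velocity of one joint stands in for that step.
    """
    if len(samples) < 2:
        return "idle"
    dt = samples[-1].timestamp - samples[0].timestamp
    dy = samples[-1].joint_positions[1] - samples[0].joint_positions[1]
    vertical_velocity = dy / dt if dt > 0 else 0.0
    if vertical_velocity > 0.5:
        return "stand_up"
    if vertical_velocity < -0.5:
        return "sit_down"
    return "idle"


def regenerate_action(symbol: str) -> List[str]:
    """Replace the symbol with a detailed, natural-looking motion sequence.

    The avatar plays back a pre-authored or procedurally generated motion,
    so the rendered action looks complete even though the sensed data was sparse.
    """
    motion_library = {
        "idle": ["idle_breathe"],
        "stand_up": ["shift_weight", "extend_knees", "straighten_back"],
        "sit_down": ["bend_knees", "lower_hips", "settle_posture"],
    }
    return motion_library[symbol]


if __name__ == "__main__":
    sensed = [
        MotionSample(0.0, [0.0, 0.9, 0.0]),
        MotionSample(0.5, [0.0, 1.3, 0.0]),
    ]
    action = symbolize_action(sensed)
    print(action, "->", regenerate_action(action))
```

The key design point is that only a compact action symbol crosses from the sensing side to the rendering side; the detailed, natural-looking motion is reconstructed on the avatar side from that symbol.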

Knowledge-Based Intelligent Information and Engineering Systems 9th International Conference, KES 2005, Melbourne, Australia, September 14-16, 2005, Proceedings, Part III

Details

  • CRID
    1050580007680142720
  • NII Article ID
    120006655179
  • HANDLE
    2324/5875
  • Text language code
    en
  • Material type
    conference paper
  • Data source type
    • IRDB
    • CiNii Articles
