Context aware human-robot and human-agent interaction
Authors
Bibliographic Details
Context aware human-robot and human-agent interaction
(Human-computer interaction series / editors-in-chief, John Karat, Jean Vanderdonckt)
Springer, c2016
University library holdings: 1 library
Notes
Includes bibliographical references
Description and Table of Contents
Description
This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and consider the potential for these virtual humans and robots to replace or stand in for their human counterparts. They tackle areas such as awareness of and reactions to real-world stimuli, using the same modalities as humans do: verbal and body gestures, facial expressions and gaze, to support seamless human-computer interaction (HCI).
The research presented in this volume is split into three sections:
*User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimuli, addressing issues of facial recognition, body gestures and sound localization.
*Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion.
*Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other.
Context Aware Human-Robot and Human-Agent Interaction would be of great use to students, academics and industry specialists in areas like Robotics, HCI, and Computer Graphics.
Table of Contents
Preface
Introduction
Part I: User Understanding through Multisensory Perception
*Face and Facial Expressions Recognition and Analysis
*Body Movement Analysis and Recognition
*Sound Source Localization and Tracking
*Modelling Conversation
Part II: Facial and Body Modelling Animation
*Personalized Body Modelling
*Parameterized Facial Modelling and Animation
*Motion Based Learning
*Responsive Motion Generation
*Shared Object Manipulation
Part III: Modelling Human Behaviours
*Modelling Personality, Mood and Emotions
*Motion Control for Social Behaviours
*Multiple Virtual Humans Interactions
*Multi-Modal and Multi-Party Social Interactions
From "Nielsen BookData"