Mutual Regulation of Audio-Visual Information in Facial Expression Recognition (表情認知における視聴覚情報の相互規定性)

Bibliographic Information

Title aliases
  • Mutual regulation of audio-visual emotional information in the recognition of facial expression and voice tone
  • ヒョウジョウ ニンチ ニ オケル シチョウカク ジョウホウ ノ ソウゴ キテイセイ (katakana reading of the Japanese title)

Abstract

In everyday life, we communicate with each other not only through verbal cues but also through multimodal nonverbal information such as facial and vocal expressions. However, how we combine such nonverbal information has not been studied sufficiently.

We therefore investigated the rule of mutual regulation between facial and vocal emotional expressions. One of seven emotional expressions (happiness, neutral, surprise, sadness, fear, disgust, and anger) was presented to subjects visually and vocally at the same time, as a still picture of a facial expression paired with the same person's voice tone speaking a short message. Subjects judged the stimulus person's emotion using both sources of information.

In the congruent condition, where the visual and auditory emotions were the same, the rate of correct judgements was high (87.63%), response time was short (4.20 s), and confidence was high (4.35/5.0) compared with the incongruent condition, in which the visual and auditory emotions differed. For the incongruent condition there were two main results: (1) visual information was generally more dominant than auditory information, except for disgust; and (2) many fused responses were also observed, in which the stimulus person's emotion was interpreted as a third emotion different from both the visual and the auditory one. Fused responses of "disgust" appeared most frequently. We interpret this as follows: when two conflicting unpleasant emotions are expressed simultaneously, the perceived emotion may be biased toward "disgust" in order to reconcile them, because disgust is a comparatively ambiguous displeasure emotion.

Cited by (3)
