Multimodal interface for human-machine communication
Authors
Bibliographic details
Multimodal interface for human-machine communication
(Series in machine perception and artificial intelligence / editors, H. Bunke, P.S.P. Wang, v. 48)
World Scientific, c2002
Held by 10 university libraries
Notes
Includes bibliographical references
Description and table of contents
Description
With advances in speech, image and video technology, human-computer interaction (HCI) is reaching a new phase. In recent years, HCI has been extended to human-machine communication (HMC) and the perceptual user interface (PUI). The ultimate goal of HMC is communication between humans and machines that resembles human-to-human communication. Moreover, the machine can support human-to-human communication (e.g. as an interface for the disabled). For these reasons, many aspects of human communication must be considered in HMC. The HMC interface, called a multimodal interface, accepts different types of input, such as natural language, gestures, faces and handwritten characters. The nine papers in this book were selected from the 92 high-quality papers constituting the proceedings of the 2nd International Conference on Multimodal Interface (ICMI '99), which was held in Hong Kong in 1999. The papers cover a wide spectrum of the multimodal interface.
Table of contents
- Introduction to multimodal interface for human-machine communication, P.C. Yuen et al.
Algorithms:
- A face location and recognition system based on tangent distance, R. Mariani
- Recognizing action units for facial expression analysis, Y.-L. Tian et al.
- View synthesis under perspective projection, G.C. Feng et al.
Single modality systems:
- Sign language recognition, W. Gao and C. Wang
- Helping designers create recognition-enabled interfaces, A.C. Long et al.
Information retrieval:
- Cross-language text retrieval by query translation using term re-weighting, I. Kang et al.
- Direct feature extraction in DCT domain and its applications in online web image retrieval for JPEG compressed images, G. Feng et al.
Multimodality systems:
- Advances in the robust processing of multimodal speech and pen systems, S. Oviatt
- Information-theoretic fusion for multimodal interfaces, J.W. Fisher III and T. Darrell
- Using virtual humans for multimodal communication in virtual reality and augmented reality, D. Thalmann
From "Nielsen BookData"