Machine Learning for Multimodal Interaction : 4th International Workshop, MLMI 2007, Brno, Czech Republic, June 28-30, 2007 : revised selected papers
Bibliographic Information
(Lecture notes in computer science, 4892)
Springer, c2008
Available at 3 libraries
Note
Includes bibliographical references and index
Description and Table of Contents
Description
This book contains a selection of revised papers from the 4th Workshop on Machine Learning for Multimodal Interaction (MLMI 2007), which took place in Brno, Czech Republic, during June 28-30, 2007. As in the previous editions of the MLMI series, the 26 chapters of this book cover a large area of topics, from multimodal processing and human-computer interaction to video, audio, speech and language processing. The application of machine learning techniques to problems arising in these fields and the design and analysis of software supporting multimodal human-human and human-computer interaction are the two overarching themes of this post-workshop book. The MLMI 2007 workshop featured 18 oral presentations (two invited talks, 14 regular talks and two special session talks) and 42 poster presentations. The participants were not only related to the sponsoring projects, AMI/AMIDA (http://www.amiproject.org) and IM2 (http://www.im2.ch), but also to other large research projects on multimodal processing and multimedia browsing, such as CALO and CHIL.
Local universities were well represented, as well as other European, US and Japanese universities, research institutions and private companies, from a dozen countries overall.
Table of Contents
Invited Paper
  Robust Real Time Face Tracking for the Analysis of Human Behaviour
Multimodal Processing
  Conditional Sequence Model for Context-Based Recognition of Gaze Aversion
  Meeting State Recognition from Visual and Aural Labels
  Object Category Recognition Using Probabilistic Fusion of Speech and Image Classifiers
HCI, User Studies and Applications
  Automatic Annotation of Dialogue Structure from Simple User Interaction
  Interactive Pattern Recognition
  User Specific Training of a Music Search Engine
  An Ego-Centric and Tangible Approach to Meeting Indexing and Browsing
  Integrating Semantics into Multimodal Interaction Patterns
  Towards an Objective Test for Meeting Browsers: The BET4TQB Pilot Experiment
Image and Video Processing
  Face Recognition in Smart Rooms
  Gaussian Process Latent Variable Models for Human Pose Estimation
Discourse and Dialogue Processing
  Automatic Labeling Inconsistencies Detection and Correction for Sentence Unit Segmentation in Conversational Speech
  Term-Weighting for Summarization of Multi-party Spoken Dialogues
  Automatic Decision Detection in Meeting Speech
  Czech Text-to-Sign Speech Synthesizer
Speech and Audio Processing
  Using Prosodic Features in Language Models for Meetings
  Posterior-Based Features and Distances in Template Matching for Speech Recognition
  A Study of Phoneme and Grapheme Based Context-Dependent ASR Systems
  Transfer Learning for Tandem ASR Feature Extraction
  Spoken Term Detection System Based on Combination of LVCSR and Phonetic Search
  Frequency Domain Linear Prediction for QMF Sub-bands and Applications to Audio Coding
  Modeling Vocal Interaction for Segmentation in Meeting Recognition
  Binaural Speech Separation Using Recurrent Timing Neural Networks for Joint F0-Localisation Estimation
PASCAL Speech Separation Challenge II
  To Separate Speech
  Microphone Array Beamforming Approach to Blind Speech Separation
by "Nielsen BookData"