Real-time vision for human-computer interaction

Bibliographic Information

edited by Branislav Kisačanin, Vladimir Pavlović, Thomas S. Huang

Springer, c2005

  • Format: HB

Available at 8 libraries

Note

Includes bibliographical references and index

Description and Table of Contents

Description

2001's Vision of Vision

One of my formative childhood experiences was in 1968 stepping into the Uptown Theater on Connecticut Avenue in Washington, DC, one of the few movie theaters nationwide that projected in large-screen Cinerama. I was there at the urging of a friend, who said I simply must see the remarkable film whose run had started the previous week. "You won't understand it," he said, "but that doesn't matter." All I knew was that the film was science fiction and had great special effects. So I sat in the front row of the balcony, munched my popcorn, sat back, and experienced what was widely touted as "the ultimate trip": 2001: A Space Odyssey. My friend was right: I didn't understand it... but in some senses that didn't matter. (Even today, after seeing the film 40 times, I continue to discover its many subtle secrets.) I just had the sense that I had experienced a creation of the highest aesthetic order: unique, fresh, awe inspiring. Here was a film so distinctive that the first half hour had no words whatsoever; the last half hour had no words either; and nearly all the words in between were banal and irrelevant to the plot - quips about security through voiceprint identification, how to make a phone call from a space station, government pension plans, and so on.

Table of Contents

  • RTV4HCI: A Historical Overview
  • Real-Time Algorithms: From Signal Processing to Computer Vision
  • Advances in RTV4HCI
  • Recognition of Isolated Fingerspelling Gestures Using Depth Edges
  • Appearance-Based Real-Time Understanding of Gestures Using Projected Euler Angles
  • Flocks of Features for Tracking Articulated Objects
  • Static Hand Posture Recognition Based on Okapi-Chamfer Matching
  • Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features
  • Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters
  • A Real-Time Vision Interface Based on Gaze Detection - EyeKeys
  • Map Building from Human-Computer Interactions
  • Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures
  • Epipolar Constrained User Pushbutton Selection in Projected Interfaces
  • Looking Ahead
  • Vision-Based HCI Applications
  • The Office of the Past
  • MPEG-4 Face and Body Animation Coding Applied to HCI
  • Multimodal Human-Computer Interaction
  • Smart Camera Systems Technology Roadmap

by "Nielsen BookData"
