Video content analysis using multimodal information : for movie content extraction, indexing, and representation

Bibliographic Details


Ying Li, C.-C. Jay Kuo

Kluwer Academic, c2003

Held by 3 university libraries


Notes

Includes index

Bibliography: p. [179]-192

Description and Table of Contents

Description

With the fast growth of multimedia information, content-based video analysis, indexing and representation have attracted increasing attention in recent years. Many applications have emerged in these areas, such as video-on-demand, distributed multimedia systems, digital video libraries, distance learning/education, entertainment, surveillance and geographical information systems. The need for content-based video indexing and retrieval was also recognized by ISO/MPEG, and a new international standard called "Multimedia Content Description Interface" (or, in short, MPEG-7) was initiated in 1998 and finalized in September 2001. In this context, a systematic and thorough review of existing approaches as well as state-of-the-art techniques in the video content analysis, indexing and representation areas is presented in this book. In addition, we will specifically elaborate on a system which analyzes, indexes and abstracts movie content based on the integration of multiple media modalities. The content of each part of this book is briefly previewed below.

In the first part, we segment a video sequence into a set of cascaded shots, where a shot consists of one or more continuously recorded image frames. Both raw and compressed video data will be investigated. Moreover, considering that there are always non-story units in real TV programs, such as commercials, a novel commercial break detection/extraction scheme is developed which exploits both audio and visual cues to achieve robust results. Specifically, we first employ visual cues such as the video data statistics, the camera cut frequency, and the existence of delimiting black frames between commercials and programs, to obtain coarse-level detection results.
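One of the visual cues mentioned above, the delimiting black frames between commercials and program content, can be sketched as follows. This is a minimal illustrative sketch, not the book's implementation: the luminance thresholds and the minimum run length are assumed values chosen for demonstration only.

```python
import numpy as np

# Illustrative thresholds (assumed, not from the book): a frame is treated
# as "black" when its mean luminance is near zero and it is nearly uniform.
BLACK_MEAN_MAX = 20.0   # max mean luminance (0-255 scale) for a black frame
BLACK_STD_MAX = 10.0    # max luminance standard deviation for a black frame

def is_black_frame(luma):
    """luma: 2-D array of Y (luminance) values for one frame."""
    return luma.mean() <= BLACK_MEAN_MAX and luma.std() <= BLACK_STD_MAX

def black_frame_runs(frames, min_len=3):
    """Return (start, end) index pairs for runs of >= min_len consecutive
    black frames; such runs are candidate commercial-break delimiters."""
    runs, start = [], None
    for i, frame in enumerate(frames):
        if is_black_frame(frame):
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(frames) - start >= min_len:
        runs.append((start, len(frames)))
    return runs

# Toy sequence: 10 bright frames, 5 black frames, 10 bright frames.
bright = np.full((48, 64), 128.0)
black = np.zeros((48, 64))
sequence = [bright] * 10 + [black] * 5 + [bright] * 10
print(black_frame_runs(sequence))  # [(10, 15)]
```

In the system described, such coarse candidates would then be refined with the other cues (cut frequency, data statistics, and audio analysis) rather than used alone.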

Table of Contents

Dedication. List of Figures. List of Tables. Preface. Acknowledgments.

  • 1: Introduction. 1. Audiovisual Content Analysis. 1.1. Audio Content Analysis. 1.2. Visual Content Analysis. 1.3. Audiovisual Content Analysis. 2. Video Indexing, Browsing and Abstraction. 3. MPEG-7 Standard. 4. Roadmap of The Book. 4.1. Video Segmentation. 4.2. Movie Content Analysis. 4.3. Movie Content Abstraction.
  • 2: Background And Previous Work. 1. Visual Content Analysis. 1.1. Video Shot Detection. 1.2. Video Scene and Event Detection. 2. Audio Content Analysis. 2.1. Audio Segmentation and Classification. 2.2. Audio Analysis for Video Indexing. 3. Speaker Identification. 4. Video Abstraction. 4.1. Video Skimming. 4.2. Video Summarization. 5. Video Indexing and Retrieval.
  • 3: Video Content Pre-Processing. 1. Shot Detection in Raw Data Domain. 1.1. YUV Color Space. 1.2. Metrics for Frame Differencing. 1.3. Camera Break Detection. 1.4. Gradual Transition Detection. 1.5. Camera Motion Detection. 1.6. Illumination Change Detection. 1.7. A Review of the Proposed System. 2. Shot Detection in Compressed Domain. 2.1. DC-image and DC-sequence. 3. Audio Feature Analysis. 4. Commercial Break Detection. 4.1. Features of A Commercial Break. 4.2. Feature Extraction. 4.3. The Proposed Detection Scheme. 5. Experimental Results. 5.1. Shot Detection Results. 5.2. Commercial Break Detection Results.
  • 4: Content-Based Movie Scene And Event Extraction. 1. Movie Scene Extraction. 1.1. Sink-based Scene Construction. 1.2. Audiovisual-based Scene Refinement. 1.3. User Interaction. 2. Movie Event Extraction. 2.1. Sink Clustering and Categorization. 2.2. Event Extraction and Classification. 2.3. Integrating Speech and Face Information. 3. Experimental Results. 3.1. Scene Extraction Results. 3.2. Event Extraction Results.
  • 5: Speaker Identification For Movies. 1. Supervised Speaker Identification for Movie Dialogs. 1.1. Feature Selection and Extraction. 1.2. Gaussian Mixture Model. 1.3. Likelihood Calculation and Score Normalization. 1.4. Speech Segment Isolation. 2. Adaptive Speaker Identification. 2.1. Face Detection, Recognition and Mouth Tracking. 2.2. Speech Segmentation and Clustering. 2.3. Initial Speaker Modeling. 2.4. Likelihood-based Speaker Identification. 2.5. Audiovisual Integration for Speaker Identification. 2.6. Unsupervised Speaker Model Adaptation. 3. Experimental Results. 3.1. Supervised Speaker Identification Results. 3.2. Adaptive Speaker Identification Results. 3.3. An Example of Movie Content Annotation.
  • 6: Scene-Based Movie Summarization. 1. An Overview of the Proposed System. 2. Hierarchical Keyframe Extraction. 2.1. Scene Importance Computation. 2.2. Sink Importance Computation. 2.3. Sh

From "Nielsen BookData"

Details

  • NII Bibliographic ID (NCID)
    BA62870209
  • ISBN
    • 1402074905
  • LCCN
    2000352695
  • Country of Publication Code
    us
  • Title Language Code
    eng
  • Text Language Code
    eng
  • Place of Publication
    Boston
  • Number of Pages/Volumes
    xxi, 194 p.
  • Size
    25 cm