3次元CGコンテンツとその属性情報の自律的呈示方式 (Autonomous Presentation of 3-Dimensional CG Contents and Their Attribute Information) [in Japanese]


Author(s)

    • 灘本 明代 NADAMOTO AKIYO
    • Division of Information and Media Sciences, Graduate School of Science and Technology, Kobe University
    • 矢部 武志 YABE TAKESHI
    • Division of Computer and System Engineering, Graduate School of Science and Technology, Kobe University
    • 四方 正輝 SHIKATA MASAKI
    • Division of Computer and System Engineering, Graduate School of Science and Technology, Kobe University
    • 田中 克己 TANAKA KATSUMI
    • Department of Social Informatics, Graduate School of Informatics, Kyoto University

Abstract

Recent advances in Web3D technology, which realizes 3D computer graphics on the Web, have made it possible for ordinary users to browse and manipulate 3D CG content online. Because such content requires users to operate the 3D CG actively, users often have difficulty discovering the attribute information actually attached to the models or predicting how the 3D CG will behave, and thus do not necessarily obtain all the information the creator intends to present. 3D CG is also expected to be used on mobile phones, where limited interaction makes such active operation difficult. To cope with this problem, we have previously proposed a passive-viewing mechanism for Web content, in which content on the Web is acquired through the passive operations of "watching" and "listening" using speech and images. Considering this passive viewing style well suited to presenting 3D CG on the Web as well, this paper proposes an autonomous presentation method that automatically generates animation from 3D CG on the Web based on its attribute information and presents that information by synthesized speech. Specifically, we propose (1) an autonomous presentation method based on the attribute information of a 3D CG model, and (2) an autonomous presentation method for multiple 3D CG models based on the differences in their attribute information, which enables users to view those differences in a passive manner.

Journal

  • 情報処理学会論文誌データベース(TOD) (IPSJ Transactions on Databases (TOD))

    43(SIG02(TOD13)), 203-215, 2002-03-15

    Information Processing Society of Japan (IPSJ)

References:  22

Codes

  • NII Article ID (NAID)
    110002726312
  • NII NACSIS-CAT ID (NCID)
    AA11464847
  • Text Lang
    JPN
  • Article Type
    Article
  • ISSN
    1882-7799
  • NDL Article ID
    6127788
  • NDL Call No.
    Z74-C192
  • Data Source
    CJP  NDL  NII-ELS  IPSJ 