Parallel architectures and parallel algorithms for integrated vision systems
Authors
Alok N. Choudhary, Janak H. Patel
Bibliographic Information
Parallel architectures and parallel algorithms for integrated vision systems
(The Kluwer international series in engineering and computer science, SECS 108. Robotics)
Kluwer Academic Publishers, c1990
Held by 15 university libraries
Notes
Includes bibliographical references and index
Description and Table of Contents
Description
Computer vision is one of the most complex and computationally intensive problems. As with other computationally intensive problems, parallel processing has been suggested as an approach to solving problems in computer vision. Computer vision employs algorithms from a wide range of areas, such as image and signal processing, advanced mathematics, graph theory, databases, and artificial intelligence. Hence, not only are the computing requirements for solving vision problems tremendous, but vision also demands computers that are efficient at solving problems exhibiting vastly different characteristics.

With recent advances in VLSI design technology, Single Instruction Multiple Data (SIMD) massively parallel computers have been proposed and built. However, such architectures have been shown to be useful for solving only a very limited subset of the problems in vision. Specifically, low-level vision algorithms whose computations closely mimic the architecture and require only simple control are suitable for massively parallel SIMD computers.

An Integrated Vision System (IVS) involves computations from low- to high-level vision, executed systematically and repeatedly. The interaction between computations and the information-dependent nature of the computations suggest that the architectural requirements of computer vision systems cannot be satisfied by massively parallel SIMD computers.
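The contrast the description draws is easy to see in a low-level operation such as 2-D convolution (treated in chapter 4 of the book): one small stencil is applied uniformly at every pixel, with no data-dependent control flow, so all output elements can be computed in lockstep on an SIMD array. A minimal sketch of that pattern in Python with NumPy (the `convolve2d` helper is illustrative only, not code from the book):

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution (hypothetical helper, not from the book).

    Every output pixel applies the same small stencil with identical
    control flow -- the regular, data-parallel structure that maps
    naturally onto a massively parallel SIMD array.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Replicate border pixels so the output keeps the input's shape.
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    flipped = kernel[::-1, ::-1]  # flip the kernel for true convolution
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):        # on an SIMD machine these two
        for j in range(image.shape[1]):    # loops run in lockstep, one
            window = padded[i:i + kh, j:j + kw]  # pixel per processor
            out[i, j] = np.sum(window * flipped)
    return out

# Example: smooth a small test image with a 3x3 averaging filter.
image = np.arange(25, dtype=float).reshape(5, 5)
blur = np.full((3, 3), 1.0 / 9.0)
print(convolve2d(image, blur))
```

Higher-level steps such as feature matching (chapter 6) lack this regularity: their work per pixel depends on the image content, which is the load-balancing and inter-cluster communication problem the book addresses with the NETRA architecture.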
Table of Contents
1. Introduction
  1.1. Computational Complexities in Vision
  1.2. Review of Multiprocessor Architectures
    1.2.1. Mesh connected computers
    1.2.2. Pyramid computers
    1.2.3. Hypercube multiprocessors
    1.2.4. Shared memory machines
    1.2.5. Systolic arrays
    1.2.6. Partitionable and hierarchical architectures
  1.3. Organization
2. Model of Computation
  2.1. Parallelism in IVSs
  2.2. Data Dependencies
  2.3. Features and Capabilities of Parallel Architectures for IVSs
  2.4. Examples of Integrated Vision Systems
    2.4.1. Image understanding benchmark system
    2.4.2. Motion estimation and object recognition
3. Architecture of NETRA
  3.1. Processor Clusters
    3.1.1. Crossbar design
    3.1.2. Scalability of crossbar
  3.2. The DSP Hierarchy
  3.3. Global Memory
  3.4. Global Interconnection
    3.4.1. Interconnection network
    3.4.2. Global bus
  3.5. IVS Computation Requirements and NETRA
  3.6. Comparison of NETRA with Other Architectures
4. Parallel Algorithms on a Cluster
  4.1. Classification of Common Vision Algorithms
  4.2. Issues in Mapping an Algorithm
  4.3. Performance Evaluation of Parallel Algorithms
    4.3.1. 2-D convolution
    4.3.2. Separable convolution
    4.3.3. Two-dimensional FFT
    4.3.4. Hough transform
  4.4. Parallel Implementation Results
    4.4.1. 2-D FFT
    4.4.2. Separable convolution
    4.4.3. Benchmark algorithms
  4.5. Summary
5. Inter-Cluster Communication in NETRA
  5.1. Alternatives for Inter-cluster Communication
    5.1.1. Multistage interconnection network and global memory
    5.1.2. DSP tree links
    5.1.3. Global bus
  5.2. Analysis of Inter-cluster Communication
  5.3. Approach to Performance Evaluation
  5.4. Performance of Parallel Algorithms on Multiple Clusters
    5.4.1. Two-dimensional Fast Fourier Transform (2-D FFT)
    5.4.2. 2-D separable convolution
    5.4.3. Hough transform
  5.5. Summary
6. Load Balancing and Scheduling Techniques
  6.1. Need for Efficient Load Balancing Techniques
  6.2. Load Balancing and Scheduling Techniques for Parallel Implementation
    6.2.1. Uniform partitioning
    6.2.2. Static scheduling (first-order scheduling)
    6.2.3. Weighted static scheduling (second-order scheduling)
    6.2.4. Dynamic scheduling
  6.3. Parallel Implementation and Performance Evaluation
    6.3.1. Feature extraction
    6.3.2. Matching features
    6.3.3. Time match
    6.3.4. Second stereo match
    6.3.5. Summary
7. Concluding Remarks
  7.1. Summary and Discussion
  7.2. Extensions
References
From "Nielsen BookData"