Shared-memory parallelism can be simple, fast, and scalable
Author
Shun, Julian
Bibliographic Information
Shared-memory parallelism can be simple, fast, and scalable
(ACM books, 15)
Association for Computing Machinery, c2017
Hardcover
Held by 1 university library
Notes
Includes bibliographical references (p. [379]-412) and index
Description and Table of Contents
Description
Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools that enable them to develop solutions easily, and at the same time to emphasize the theoretical and practical aspects of algorithm design so that the solutions developed run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.
The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel; together these lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time; Ligra+ is also the first graph processing system to support in-memory graph compression.
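The flavor of these commutative building blocks can be illustrated with a small sketch. The C++ fragment below is a minimal, illustrative example only: the names write_min and bfs_parents are hypothetical, not Ligra's API, and the loops run sequentially where a framework like Ligra would run them in parallel. It shows a frontier-based BFS in the spirit of an edge-map traversal, where an atomic write-min serves as the commutative primitive, so the resulting parent array is the same under every execution schedule.

```cpp
// Illustrative sketch only: write_min and bfs_parents are hypothetical
// names, not Ligra's API. Inner loops are sequential for brevity; a
// Ligra-style edgeMap would run them in parallel.
#include <atomic>
#include <climits>
#include <iostream>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // undirected adjacency lists

// A priority update: atomically keep the smallest value written so far.
// Concurrent write_min calls commute, so the final cell contents do not
// depend on the schedule -- the key to deterministic parallel output.
bool write_min(std::atomic<int>& cell, int val) {
    int cur = cell.load();
    while (val < cur)
        if (cell.compare_exchange_weak(cur, val)) return true;
    return false;
}

// Frontier-based BFS: each round maps over the out-edges of the current
// frontier; unvisited neighbors are claimed with write_min, so every
// vertex deterministically gets its minimum-id frontier parent.
std::vector<int> bfs_parents(const Graph& g, int src) {
    int n = g.size();
    std::vector<std::atomic<int>> parent(n);
    for (auto& p : parent) p.store(INT_MAX);
    std::vector<char> visited(n, 0);
    parent[src].store(src);
    visited[src] = 1;

    std::vector<int> frontier{src};
    while (!frontier.empty()) {
        std::vector<int> candidates;
        for (int s : frontier)
            for (int d : g[s])
                if (!visited[d] && write_min(parent[d], s))
                    candidates.push_back(d);
        std::vector<int> next;  // after the round barrier, winners are final
        for (int d : candidates)
            if (!visited[d]) { visited[d] = 1; next.push_back(d); }
        frontier = std::move(next);
    }
    std::vector<int> out(n);
    for (int i = 0; i < n; ++i)
        out[i] = (parent[i].load() == INT_MAX) ? -1 : parent[i].load();
    return out;
}

int main() {
    Graph g = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};  // a 4-cycle
    for (int p : bfs_parents(g, 0)) std::cout << p << ' ';
    std::cout << '\n';  // prints: 0 0 0 1
}
```

Because write-min is commutative and idempotent, racing threads can claim a vertex in any order yet always leave the same final parent, which is precisely the kind of internal determinism the first part of the thesis advocates.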
The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.
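As one concrete point of reference for the triangle work, the standard scheme this line of results builds on can be sketched briefly. The fragment below is a minimal sequential sketch, not the thesis's cache-oblivious parallel algorithm: it counts each triangle exactly once by orienting edges from lower- to higher-ranked endpoints and intersecting sorted out-neighbor lists, the merge step that cache-efficient variants optimize.

```cpp
// Minimal sketch of degree-ordered triangle counting (the standard
// intersection scheme, not the thesis's parallel cache-oblivious version).
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // undirected adjacency lists

std::int64_t count_triangles(const Graph& g) {
    int n = g.size();
    // Rank vertices by (degree, id) and orient each edge toward the
    // higher-ranked endpoint, so every triangle has a unique "apex".
    auto rank = [&](int v) { return std::make_pair(g[v].size(), v); };
    Graph out(n);
    for (int u = 0; u < n; ++u)
        for (int v : g[u])
            if (rank(u) < rank(v)) out[u].push_back(v);
    for (auto& nbrs : out) std::sort(nbrs.begin(), nbrs.end());

    std::int64_t total = 0;  // each triangle is counted exactly once
    for (int u = 0; u < n; ++u)
        for (int v : out[u]) {
            // Merge-style intersection of the sorted lists out[u], out[v].
            auto a = out[u].begin(), b = out[v].begin();
            while (a != out[u].end() && b != out[v].end()) {
                if (*a < *b) ++a;
                else if (*b < *a) ++b;
                else { ++total; ++a; ++b; }
            }
        }
    return total;
}

int main() {
    Graph g = {{1, 2, 3}, {0, 2}, {0, 1, 3}, {0, 2}};  // two triangles
    std::cout << count_triangles(g) << '\n';            // prints 2
}
```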
This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
Table of Contents
Introduction
Preliminaries and Notation
Programming Techniques for Deterministic Parallelism
Internally Deterministic Parallelism: Techniques and Algorithms
Deterministic Parallelism in Sequential Iterative Algorithms
A Deterministic Phase-Concurrent Parallel Hash Table
Priority Updates: A Contention-Reducing Primitive for Deterministic Programming
Large-Scale Shared-Memory Graph Analytics
Ligra: A Lightweight Graph Processing Framework for Shared Memory
Ligra+: Adding Compression to Ligra
Parallel Graph Algorithms
Linear-Work Parallel Graph Connectivity
Parallel and Cache-Oblivious Triangle Computations
Parallel String Algorithms
Parallel Cartesian Tree and Suffix Tree Construction
Parallel Computation of Longest Common Prefixes
Parallel Lempel-Ziv Factorization
Parallel Wavelet Tree Construction
Conclusion and Future Work
Bibliography
From "Nielsen BookData"