Parallelism in matrix computations
Authors
Bibliographic Details
Parallelism in matrix computations
(Scientific computation)
Springer, c2016
Held by 3 university libraries
Notes
Includes bibliographical references and index
Description and Table of Contents
Description
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations.
It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms.
The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics.

Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also covers parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs, or those contained in a given interval of the spectrum, of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant.

The book also reviews the theoretical and practical background necessary for designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike.
The book brings together many existing algorithms for fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms presented for the first time, with a focus on opportunities for parallelism and algorithm robustness.
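The "fundamental kernels" covered in Part I can be illustrated with a minimal sketch (not taken from the book; all names here are hypothetical): a dense matrix-vector product parallelized over row blocks, the simplest form of the data-partitioning idea the monograph develops for far more sophisticated kernels.

```python
# Minimal sketch: a dense matrix-vector product y = A x, parallelized
# by splitting the rows of A into contiguous blocks and computing each
# block's slice of y concurrently. Illustrative only; not code from
# the book.
from concurrent.futures import ThreadPoolExecutor

def matvec_rows(A, x, lo, hi):
    """Compute y[lo:hi] = A[lo:hi, :] @ x for one row block."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A[lo:hi]]

def parallel_matvec(A, x, nworkers=4):
    n = len(A)
    step = (n + nworkers - 1) // nworkers          # ceil(n / nworkers)
    blocks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        parts = pool.map(lambda b: matvec_rows(A, x, *b), blocks)
    # Concatenate the block results in row order.
    return [v for part in parts for v in part]

A = [[1, 2], [3, 4], [5, 6]]
x = [1, 1]
print(parallel_matvec(A, x))  # [3, 7, 11]
```

Row-block partitioning is attractive because each worker touches a contiguous slice of A and writes a disjoint slice of y, so no synchronization is needed beyond the final gather; the data-locality and data-transfer trade-offs of such partitionings are exactly what Part I analyzes.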
Table of Contents
List of Figures.- List of Tables.- List of Algorithms.- Notations used in the book.- Part I Basics.- Parallel Programming Paradigms.- Computational Models.- Principles of parallel programming.- Fundamental kernels.- Vector operations.- Higher level BLAS.- General organization for dense matrix factorizations.- Sparse matrix computations.- Part II Dense and special matrix computations.- Recurrences and triangular systems.- Definitions and examples.- Linear recurrences.- Implementations for a given number of processors.- Nonlinear recurrences.- General linear systems.- Gaussian elimination.- Pairwise pivoting.- Block LU factorization.- Remarks.- Banded linear systems.- LU-based schemes with partial pivoting.- The Spike family of algorithms.- The Spike balance scheme.- A tearing-based banded solver.- Tridiagonal systems.- Special linear systems.- Vandermonde solvers.- Banded Toeplitz linear systems solvers.- Symmetric and Antisymmetric Decomposition (SAS).- Rapid elliptic solvers.- Orthogonal factorization and linear least squares problems.- Definitions.- QR factorization via Givens rotations.- QR factorization via Householder reductions.- Gram-Schmidt orthogonalization.- Normal equations vs. orthogonal reductions.- Hybrid algorithms when m>n.- Orthogonal factorization of block angular matrices.- Rank deficient linear least squares problems.- The symmetric eigenvalue and singular value problems.- The Jacobi algorithms.- Tridiagonalization-based schemes.- Bidiagonalization via Householder reduction.- Part III Sparse matrix computations.- Iterative schemes for large linear systems.- An example.- Classical splitting methods.- Polynomial methods.- Preconditioners.- A tearing-based solver for generalized banded preconditioners.- Row projection methods for large nonsymmetric linear systems.- Multiplicative Schwarz preconditioner with GMRES.- Large symmetric eigenvalue problems.- Computing dominant eigenpairs and spectral transformations.- The Lanczos method.- A block Lanczos approach for solving symmetric perturbed standard eigenvalue problems.- The Davidson methods.- The trace minimization method for the symmetric generalized eigenvalue problem.- The sparse singular value problem.- Part IV Matrix functions and characteristics.- Matrix functions and the determinant.- Matrix functions.- Determinants.- Computing the matrix pseudospectrum.- Grid-based methods.- Dimensionality reduction on the domain: Methods based on path following.- Dimensionality reduction on the matrix: Methods based on projection.- Notes.- References.
From "Nielsen BookData"