Using MPI : portable parallel programming with the message-passing interface

Bibliographic Information

William Gropp, Ewing Lusk, Anthony Skjellum

(Scientific and engineering computation)

MIT Press, c1994

University library holdings: 31 libraries

Notes

Includes bibliographical references (p. [295]-299) and indexes

Description

The parallel programming community recently organized an effort to standardize the communication subroutine libraries used for programming on massively parallel computers such as the Connection Machine and Cray's new T3D, as well as on networks of workstations. The standard they developed, the Message-Passing Interface (MPI), not only unifies within a common framework programs written in a variety of existing (and currently incompatible) parallel languages but also allows for future portability of programs between machines. Three of the authors of MPI have teamed up here to present a tutorial on how to use MPI to write parallel programs, particularly for large-scale applications. MPI, the long-sought standard for expressing algorithms and running them on a variety of computers, allows software development costs to be leveraged across parallel machines and networks and will spur the development of a new level of parallel software. This book covers all the details of the MPI functions used in the motivating examples and applications, with many MPI functions introduced in context. The topics covered include: issues in portability of programs among MPP systems; examples and counterexamples illustrating subtle aspects of the MPI definition; how to write libraries that take advantage of MPI's special features; application paradigms for large-scale examples; complete program examples; visualizing program behaviour with graphical tools; an implementation strategy and a portable implementation; using MPI on workstation networks and on MPPs (Intel, Thinking Machines, IBM); scalability and performance tuning; and how to convert existing codes to MPI.
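
To give a flavour of the material, a minimal MPI program in C of the kind introduced in Part 3 ("a first MPI program in C") might look like the sketch below. This example was written for this record as an illustration and is not code taken from the book.

    /* Minimal sketch: each process reports its rank and the total
       process count. Compile with an MPI wrapper compiler such as
       mpicc and launch with mpirun/mpiexec. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* enter the MPI environment   */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's identifier   */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes started */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();                       /* leave the MPI environment   */
        return 0;
    }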

目次

  • Part 1 Background: why parallel computing?
  • obstacles to progress
  • why message passing?
  • current message-passing systems
  • the MPI forum
  • Part 2 What's new about MPI?: a new point of view
  • what's not new?
  • basic MPI concepts
  • other interesting features of MPI
  • is MPI large or small?
  • decisions left to the implementor
  • Part 3 Using MPI in simple programs: a first MPI program
  • running your first MPI program
  • a first MPI program in C
  • timing MPI programs
  • a self-scheduling example - matrix-vector multiplication
  • studying parallel performance
  • using communicators
  • a handy graphics library for parallel programs
  • application - determination of nuclear structures
  • summary of a simple subset of MPI
  • Part 4 Intermediate MPI: the Poisson problem
  • topologies
  • a code for the Poisson problem
  • using nonblocking communications (see the sketch after this list)
  • synchronous sends and "safe" programs
  • more on scalability
  • Jacobi with a 2-D decomposition
  • an MPI-derived datatype
  • overlapping communication and computation
  • more on timing programs
  • three dimensions
  • application - simulating vortex evolution in superconducting materials
  • Part 5 Advanced message passing in MPI: MPI datatypes
  • the N-body problem
  • visualizing the Mandelbrot set
  • gaps in datatypes
  • Part 6 Parallel libraries: motivation
  • a first MPI library
  • linear algebra on grids
  • the LINPACK benchmark in MPI
  • strategies for library building
  • Part 7 Other features of MPI: simulating shared-memory operations
  • application - full-configuration interaction
  • advanced collective operations
  • intercommunicators
  • heterogeneous computing
  • the MPI profiling interface
  • error handling
  • environmental inquiry
  • other functions in MPI
  • application - computational fluid dynamics
  • Part 8 Implementing MPI: sending and receiving
  • data transfer
  • message queuing
  • unexpected messages
  • device capabilities and the MPI library definition
  • Part 9 Dusty decks - porting existing message-passing programs to MPI: Intel NX
  • IBM EUI
  • TMC CMMD
  • Express
  • PVM 2.4.x
  • PVM 3.2.x
  • p4
  • PARMACS
  • TCGMSG
  • Chameleon
  • Zipcode
  • where to learn more
  • Part 10 Beyond message passing: dynamic processes
  • threads
  • action at a distance
  • parallel I/O
  • will there be an MPI-2?
  • final words
  • Appendices: summary of MPI routines and their arguments
  • the model MPI implementation
  • the MPE multiprocessing environment functions
  • MPI resources on the information superhighway
  • language details.
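
As a companion to the Part 4 topics above, the sketch below illustrates the nonblocking send/receive pattern that lets a program overlap communication with computation. The buffer and neighbour-rank names (sendbuf, recvbuf, left, right) are hypothetical; this is an illustration written for this record, not code from the book.

    /* Illustrative nonblocking exchange with two neighbouring ranks.
       All parameter names here are hypothetical. */
    #include <mpi.h>

    void exchange(double *sendbuf, double *recvbuf, int n,
                  int left, int right, MPI_Comm comm)
    {
        MPI_Request reqs[2];
        MPI_Status  stats[2];

        /* post the receive first so arriving data has a destination */
        MPI_Irecv(recvbuf, n, MPI_DOUBLE, left,  0, comm, &reqs[0]);
        MPI_Isend(sendbuf, n, MPI_DOUBLE, right, 0, comm, &reqs[1]);

        /* independent computation can proceed here, overlapping the
           transfers posted above */

        MPI_Waitall(2, reqs, stats);   /* complete both operations */
    }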

Source: Nielsen BookData
