Languages and compilers for parallel computing : 14th International Workshop, LCPC 2001, Cumberland Falls, KY, USA, August 1-3, 2001 : revised papers
Bibliographic Information
Languages and compilers for parallel computing : 14th International Workshop, LCPC 2001, Cumberland Falls, KY, USA, August 1-3, 2001 : revised papers
(Lecture notes in computer science, 2624)
Springer, c2003
Available at 30 libraries
  Aomori
  Iwate
  Miyagi
  Akita
  Yamagata
  Fukushima
  Ibaraki
  Tochigi
  Gunma
  Saitama
  Chiba
  Tokyo
  Kanagawa
  Niigata
  Toyama
  Ishikawa
  Fukui
  Yamanashi
  Nagano
  Gifu
  Shizuoka
  Aichi
  Mie
  Shiga
  Kyoto
  Osaka
  Hyogo
  Nara
  Wakayama
  Tottori
  Shimane
  Okayama
  Hiroshima
  Yamaguchi
  Tokushima
  Kagawa
  Ehime
  Kochi
  Fukuoka
  Saga
  Nagasaki
  Kumamoto
  Oita
  Miyazaki
  Kagoshima
  Okinawa
  Korea
  China
  Thailand
  United Kingdom
  Germany
  Switzerland
  France
  Belgium
  Netherlands
  Sweden
  Norway
  United States of America
-
Library, Research Institute for Mathematical Sciences, Kyoto University (Math. Sci.)
L/N||LNCS||2624 03018474
-
International Christian University Library
v.2624 007.6/L507/v.2624 05956511
Note
Includes bibliographical references and index
Description and Table of Contents
Description
This volume contains (revised versions of) papers presented at the 14th Workshop on Languages and Compilers for Parallel Computing. Parallel computing used to be nearly synonymous with supercomputing research, but as parallel processing technologies have become common features of commodity processors and systems, the focus of this workshop also has shifted. For example, this workshop marks the first time that compiler technology for power management has been recognized as a key aspect of parallel computing. Another pattern visible in the research presented is the continuing shift in emphasis from simply finding potential parallelism to being able to use parallelism efficiently enough to achieve good speedup. The scope of languages and compilers for parallel computing has thus grown to encompass all relevant aspects of systems, ranging from abstract models to runtime support environments. As in previous years, key researchers were invited to participate. Every paper submitted was reviewed in depth and quantitatively graded on originality, significance, correctness, presentation, relevance, need to revise the write-up, and overall how appropriate it would be to accept the paper.
Any concerns raised were discussed by the program committee. In summary, the papers included here represent leading-edge work from North America, Europe, and Asia.
Table of Contents
  Optimizing Compiler Design for Modularity and Extensibility
  Translation Schemes for the HPJava Parallel Programming Language
  Compiler and Middleware Support for Scalable Data Mining
  Bridging the Gap between Compilation and Synthesis in the DEFACTO System
  Instruction Balance and Its Relation to Program Energy Consumption
  Dynamic Voltage and Frequency Scaling for Scientific Applications
  Improving Off-Chip Memory Energy Behavior in a Multi-processor, Multi-bank Environment
  A Compilation Framework for Power and Energy Management on Mobile Computers
  Locality Enhancement by Array Contraction
  Automatic Data Distribution Method Using First Touch Control for Distributed Shared Memory Multiprocessors
  Balanced, Locality-Based Parallel Irregular Reductions
  A Comparative Evaluation of Parallel Garbage Collector Implementations
  STAPL: An Adaptive, Generic Parallel C++ Library
  An Interface Model for Parallel Components
  Tree Traversal Scheduling: A Global Instruction Scheduling Technique for VLIW/EPIC Processors
  MIRS: Modulo Scheduling with Integrated Register Spilling
  Strength Reduction of Integer Division and Modulo Operations
  An Adaptive Scheme for Dynamic Parallelization
  Probabilistic Points-to Analysis
  A Compiler Framework to Detect Parallelism in Irregular Codes
  Compiling for a Hybrid Programming Model Using the LMAD Representation
  The Structure of a Compiler for Explicit and Implicit Parallelism
  Coarse Grain Task Parallel Processing with Cache Optimization on Shared Memory Multiprocessor
  A Language for Role Specifications
  The Specification of Source-to-Source Transformations for the Compile-Time Optimization of Parallel Object-Oriented Scientific Applications
  Computing Array Shapes in MATLAB
  Polynomial Time Array Dataflow Analysis
  Induction Variable Analysis without Idiom Recognition: Beyond Monotonicity
by "Nielsen BookData"