Exploring how language-level techniques can assist concurrent programming, Introduction to Concurrency in Programming Languages presents high-level approaches for dealing with concurrency in a general context. It provides an understanding of programming languages that offer concurrency features as part of the language definition.
The book supplies a conceptual framework for different aspects of parallel algorithm design and implementation. It first addresses the limitations of traditional programming techniques and models when dealing with concurrency. The book then explores the current state of the art in concurrent programming and describes high-level language constructs for concurrency. It also discusses the historical evolution of hardware, the corresponding high-level techniques that were developed, and their connection to modern systems such as multicore and manycore processors. The remainder of the text focuses on common high-level programming techniques and their application to a range of algorithms. The authors offer case studies on genetic algorithms, fractal generation, cellular automata, game logic for solving Sudoku puzzles, pipelined algorithms, and more.
Illustrating the effect of concurrency on programs written in familiar languages, this text focuses on novel language abstractions that truly bring concurrency into the language and aid analysis and compilation tools in generating efficient, correct programs. It also explains the complexity involved in taking advantage of concurrency with regard to program correctness and performance.
Table of Contents

- Introduction
  - Motivation
  - Where does concurrency appear?
  - Why is concurrency considered hard?
  - Timeliness
  - Approach
- Concepts in Concurrency
  - Terminology
  - Concepts
- Concurrency Control
  - Correctness
  - Techniques
- The State of the Art
  - Limitations of libraries
  - Explicit techniques
  - Higher-level techniques
  - The limits of explicit control
  - Concluding remarks
- High-Level Language Constructs
  - Common high-level constructs
  - Using and evaluating language constructs
  - Implications of concurrency
  - Interpreted languages
- Historical Context and Evolution of Languages
  - Evolution of machines
  - Evolution of programming languages
  - Limits to automatic parallelization
- Modern Languages and Concurrency Constructs
  - Array abstractions
  - Message passing
  - Control flow
  - Functional languages
  - Functional operators
- Performance Considerations and Modern Systems
  - Memory
  - Amdahl’s law, speedup, and efficiency
  - Locking
  - Thread overhead
- Introduction to Parallel Algorithms
  - Designing parallel algorithms
  - Finding concurrency
  - Strategies for exploiting concurrency
  - Algorithm patterns
  - Patterns supporting parallel source code
  - Demonstrating parallel algorithm patterns
- Pattern: Task Parallelism
  - Supporting algorithm structures
  - Case study: Genetic algorithms
  - Case study: Mandelbrot set computation
- Pattern: Data Parallelism
  - Case study: Matrix multiplication
  - Case study: Cellular automaton
  - Limitations of SIMD data parallel programming
  - Beyond SIMD
  - Geometric decomposition
- Pattern: Recursive Algorithms
  - Recursion concepts
  - Case study: Sorting
  - Case study: Sudoku
- Pattern: Pipelined Algorithms
  - Pipelining as a software design pattern
  - Language support for pipelining
  - Case study: Pipelining in Erlang
  - Case study: Visual cortex
- Appendix A: OpenMP Quick Reference
- Appendix B: Erlang Quick Reference
- Appendix C: Cilk Quick Reference
- References
Matthew J. Sottile is a research associate and adjunct assistant professor in the Department of Computer and Information Sciences at the University of Oregon. He has a significant publication record in both high performance computing and scientific programming. Dr. Sottile is currently working on research in concurrent programming languages and parallel algorithms for signal and image processing in neuroscience and medical applications.
Timothy G. Mattson is a principal engineer at Intel Corporation. Dr. Mattson’s noteworthy projects include the world’s first TFLOP computer, OpenMP, the first generally programmable TFLOP chip (Intel’s 80-core research chip), OpenCL, and pioneering work on design patterns for parallel programming.
Craig E. Rasmussen is a staff member in the Advanced Computing Laboratory at Los Alamos National Laboratory (LANL). Along with extensive publications in computer science, space plasma physics, and medical physics, Dr. Rasmussen is the principal developer of PetaVision, a massively parallel, spiking neuron model of the visual cortex that ran at 1.14 petaflops on LANL’s Roadrunner computer in 2008.
… a clear focus in this book is on keeping the material accessible. The authors succeed at this brilliantly. … if you are just jumping into the world of concurrent programming, or taking a more theoretical look at the approaches we’ve all been taking for granted for the past 20 years in an attempt to make things better, then this book is a great start. The authors present a clear motivation for the relevance of continuing this work, and provide both the historical context and knowledge of present day practice that you’ll need to get off on the right foot. That they manage to do this while keeping the language clear and the text accessible is a tribute to the effort Sottile, Mattson, and Rasmussen put into the creation of the text.
—insideHPC.com, October 2010
Sottile, Mattson, and Rasmussen have successfully managed to provide a nice survey of the current state of the art of parallel algorithm design and implementation in this well-written 300-page textbook, suitable for undergraduate computer science students … this concise yet thorough book provides an outstanding introduction to the important field of concurrent programming and the techniques currently employed to design parallel algorithms. It is clearly written, well organized, and cuts to the point … It is an informative read that I highly recommend to those interested in the design and implementation of parallel algorithms.
—Fernando Berzal, Computing Reviews, May 2010
| Resource | OS Platform | Updated | Description | Instructions |
|---|---|---|---|---|
| | Cross Platform | November 09, 2009 | | click on http://www.parlang.com/ |