Parallel Supercomputing in MIMD Architectures is devoted to supercomputing on a wide variety of Multiple-Instruction-Multiple-Data (MIMD)-class parallel machines. This book describes architectural concepts, commercial and research hardware implementations, major programming concepts, algorithmic methods, representative applications, and benefits and drawbacks. Commercial machines described include the Connection Machine 5, NCUBE, Butterfly, Meiko, Intel iPSC, iPSC/2 and iWarp, DSP3, Multimax, Sequent, and Teradata. Research machines covered include the J-Machine, PAX, Concert, and ASP. Operating systems, languages, the translation of sequential programs to parallel form, and semiautomatic parallelization are among the aspects of MIMD software addressed. MIMD issues such as scalability, partitioning, processor utilization, and heterogeneous networks are discussed as well. Packed with important information and richly illustrated with diagrams and tables, Parallel Supercomputing in MIMD Architectures is an essential reference for computer professionals, program managers, applications system designers, scientists, engineers, and students in the computer sciences.
Table of Contents
Part 1: MIMD Computers
Commercial Machines
1. Thinking Machines Corporation CM-5
2. NCUBE
3. iWarp
4. iPSC and iPSC/2
5. The Paragon XP/S System
6. Encore Multimax
7. AT&T DSP-3
8. The Meiko Computing Surface
9. BBN Butterfly
10. Sequent
11. Teradata
Research Machines
12. J-Machine: A Fine-Grain Concurrent Computer
13. PAX
14. Concert
15. Computer Vision Applications with the Associative String Processor
Part 2: MIMD Software
16. Operating Systems: Trollius
17. Apply: A Programming Language for Low-Level Vision on Diverse Parallel Architectures
18. Translating Sequential Programs to Parallel: Linda
19. PTOOL: A Semiautomatic Parallel Programming Assistant
Part 3: MIMD Issues
20. A Scalability Analysis of the Butterfly Multiprocessor
21. Mathematical Model Partitioning and Packing for Parallel Computer Calculation
22. Increasing Processor Utilization During Parallel Computation Rundown
23. Solving Computational Grand Challenges Using a Network of Heterogeneous Supercomputers