Algorithmic Techniques for Networks of Processors
Russ Miller
Quentin F. Stout
Department of Computer Science and Engineering, State University of New York at Buffalo
EECS Department, University of Michigan
The focus of this chapter is on algorithmic techniques. We begin by defining basic terminology used to discuss parallel algorithms and parallel architectures. Following this introductory material, we define a variety of interconnection networks, including the mesh (a chessboard-like grid), which allow processors to communicate directly with one another. We also define an abstract parallel model of computation, the PRAM, in which processors communicate through a shared memory rather than directly with each other. We then discuss several parallel programming paradigms, including the use of high-level data movement operations, divide-and-conquer, pipelining, and master-slave. Finally, we discuss the problem of mapping the structure of an inherently parallel problem onto a target parallel architecture. This mapping problem arises in a variety of ways and for a wide range of problem structures. In some cases, finding a good mapping is quite straightforward; in other cases it is an NP-complete problem and therefore computationally intractable in general.
Keywords:
parallel algorithms, parallel computing, parallel architecture,
mesh computer, hypercube, pyramid, pipelining, distributed memory, shared
memory, NUMA, global operations, stepwise simulation, mapping problem,
weak embedding, supercomputing, computer science
Russ Miller (miller@cse.buffalo.edu)