D.N. Jayasimha, The Ohio State University
PVM (Parallel Virtual Machine) is a portable message-passing library that runs on a variety of computing platforms, including networks of workstations (NOWs), massively parallel processors, and heterogeneous collections of machines. With NOWs gaining popularity as a viable and cost-effective platform for parallel computing, the use of PVM has become widespread. This proposal offers a half-day tutorial on the message-passing paradigm and on programming with PVM.

The first part of the lecture will briefly introduce parallel architectures; the shared-memory and message-passing programming models; and the data-parallel, control-parallel, and SPMD styles of programming. The main part of the lecture will describe the message-passing programming paradigm and discuss the use of PVM to write programs in this paradigm. The last part will detail how the virtual machine is specified, how the PVM environment is managed through specific commands, and what PVM does not provide (this part may also include a discussion of MPI, depending on the level of maturity of the audience). In addition, there will be a discussion of guidelines for writing efficient parallel programs and of simple mapping issues.

The tutorial is intended for scientists and engineers (including graduate students) who wish to learn message-passing programming to parallelize their applications. It assumes knowledge of FORTRAN or C, some programming experience, and a rudimentary knowledge of Unix.
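To give a flavor of the material covered, the following is a minimal sketch of the SPMD master/worker pattern using PVM 3's C interface. The program name "spmd_sum", the worker count, and the message tags are illustrative choices, not part of the proposal; the PVM calls themselves (pvm_mytid, pvm_parent, pvm_spawn, pvm_initsend, pvm_pkint, pvm_send, pvm_recv, pvm_upkint, pvm_exit) are from the standard PVM 3 library. Running it requires a PVM daemon and linking against libpvm3.

```c
/* spmd_sum.c -- an illustrative SPMD sketch using PVM 3's C interface.
   The same executable plays both roles: the task with no PVM parent
   acts as master, spawns NWORKERS copies of itself, sends each an
   integer, and sums the squared replies.
   Compile (assuming PVM is installed):  cc spmd_sum.c -lpvm3        */
#include <stdio.h>
#include "pvm3.h"

#define NWORKERS 4
#define TAG_WORK 1
#define TAG_DONE 2

int main(void)
{
    pvm_mytid();                          /* enroll this task in PVM */
    int parent = pvm_parent();

    if (parent == PvmNoParent) {          /* --- master branch --- */
        int tids[NWORKERS], i, sum = 0;
        pvm_spawn("spmd_sum", NULL, PvmTaskDefault, "", NWORKERS, tids);
        for (i = 0; i < NWORKERS; i++) {
            int x = i + 1;
            pvm_initsend(PvmDataDefault); /* fresh send buffer, XDR encoding */
            pvm_pkint(&x, 1, 1);          /* pack one int, stride 1 */
            pvm_send(tids[i], TAG_WORK);
        }
        for (i = 0; i < NWORKERS; i++) {
            int y;
            pvm_recv(-1, TAG_DONE);       /* from any worker, tag TAG_DONE */
            pvm_upkint(&y, 1, 1);
            sum += y;
        }
        printf("sum of squares = %d\n", sum);
    } else {                              /* --- worker branch --- */
        int x, y;
        pvm_recv(parent, TAG_WORK);
        pvm_upkint(&x, 1, 1);
        y = x * x;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&y, 1, 1);
        pvm_send(parent, TAG_DONE);
    }
    pvm_exit();                           /* leave the virtual machine */
    return 0;
}
```

The pack/send and receive/unpack pairing, and the use of message tags to distinguish message types, are the core mechanics the tutorial walks through before turning to efficiency and mapping issues.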
D. N. Jayasimha obtained his Ph.D. from the Center for Supercomputing Research and Development, University of Illinois, in 1988. He has since been on the faculty of The Ohio State University in Columbus, Ohio. During 1993-94 he was a Visiting Senior Research Associate at the NASA Lewis Research Center in Cleveland, OH, where he worked on parallelizing applications using PVM and other message-passing libraries. He has offered tutorials on message passing and PVM at the Ohio Aerospace Institute, at NASA, and at the 1994 International Workshop on Parallel Processing. His research interests are in communication and synchronization in parallel computation, parallel architectures, and parallel applications.