Assuming that the multicore revolution plays out the way the microprocessor industry expects, within a decade most programming will involve parallelism at some level. One needs to ask how this affects the way we teach computer science, or even how we have people think about computation. With regard to teaching, there seem to be three basic choices: (1) we train only a small number of experts in parallel computation who develop a collection of libraries, and everyone else just uses them; (2) we leave our core curriculum pretty much as is, but add some advanced courses on parallelism or perhaps tack a few lectures onto the end of existing courses; or (3) we teach parallelism from the start and embed it throughout the curriculum, with the idea of getting students to think of parallelism as the most natural form of computation and of sequential computation as a special case.
This talk will examine some of the implications of the third option. It will argue that thinking about parallelism, when treated in an appropriate way, might be as easy as or easier than thinking sequentially. A key prerequisite, however, is to identify what the core ideas in parallelism are and how they might be layered and integrated with existing concepts. Another, more difficult issue is how to cleanly integrate these ideas across courses. After all, much of the success of sequential computation follows from the concept of a random access machine and its ability to serve as a simple, albeit imperfect, interface between programming languages, algorithm analysis, and hardware design. The talk will go through an initial list of some core ideas in parallelism, and an approach to integrating these ideas across parallel algorithms, programming languages, and, to some extent, hardware. This requires, however, moving away from the concept of a machine model as an interface for thinking about computation.
Guy Blelloch is a Professor of Computer Science and Associate Dean of Planning at Carnegie Mellon. He received a BA from Swarthmore College in 1983 and a PhD from MIT in 1988. His research interests are in programming languages and algorithms and how they interact, with an emphasis on parallel computation. He worked on one of the early parallel machines, the Thinking Machines Connection Machine, where he developed several of the parallel primitives for the machine. At Carnegie Mellon, Blelloch designed and implemented the parallel programming language NESL, a language designed for easily expressing parallel algorithms. The main ideas of the language are to support nested data parallelism and to supply a cost model in the semantics of the language. His other work on parallelism has addressed issues in scheduling, algorithm design, cache efficiency, garbage collection, and synchronization primitives. Blelloch is currently co-director of the ALADDIN Center, a center for studying applied algorithms. The center was funded as one of the large NSF ITR Centers from 2001 to 2007.
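To make the notion of nested data parallelism concrete, the following is a minimal illustrative sketch in Python (not NESL itself; the `par_map` helper is a hypothetical stand-in for a parallel apply-to-each). It shows the characteristic pattern: a parallel operation applied over every element of a nested sequence, where each inner operation may itself be parallel.

```python
def par_map(f, xs):
    # Stand-in for a parallel apply-to-each. Here it runs sequentially,
    # but each f(x) is independent and could execute concurrently;
    # in NESL this pattern is written as {f(x) : x in xs}.
    return [f(x) for x in xs]

def nested_sums(rows):
    # Outer level: one task per (possibly irregular-length) row.
    # Inner level: each sum is itself a reduction that a nested
    # data-parallel implementation could also perform in parallel.
    return par_map(sum, rows)

rows = [[1, 2, 3], [4], [5, 6]]
print(nested_sums(rows))  # → [6, 4, 11]
```

NESL's cost semantics assigns such a program a work and a depth, so the programmer can reason about parallel running time without reference to a particular machine.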