PPoPP 2009 Workshops and Tutorials

PPoPP 2009 (in conjunction with HPCA-15) will host a number of workshops and tutorials on Saturday and Sunday, February 14-15, the two days before the main conference.

Please note that attendance at these workshops requires an extra fee. The Tutorial and Workshop Passport allows the participant to attend any workshop or tutorial that day, including those that are part of HPCA-15. Both single-day passports and a two-day passport are available through the registration page.

The general schedule for all workshops will be the same:

8:00am - 8:30am     Breakfast
8:30am - 10:00am    First Morning Session
10:00am - 10:30am   Break
10:30am - Noon      Second Morning Session
Noon - 1:00pm       Lunch
1:00pm - 2:30pm     First Afternoon Session
2:30pm - 3:00pm     Break
3:00pm - 5:00pm     Second Afternoon Session

Saturday (February 14) Workshops and Tutorials

HPCA workshops and tutorials only. See the HPCA Workshops and tutorials page for more information.

Sunday (February 15) Workshops and Tutorials

4th ACM SIGPLAN Workshop on Transactional Computing (TRANSACT 2009)

http://transact09.cs.washington.edu/

State D Ballroom

The past few years have seen an explosion of interest in programming languages, systems, and hardware to support transactions, speculation, and related alternatives to classical lock-based concurrency. This workshop, the fourth in its series, will provide a forum for the presentation of research on all aspects of transactional computing.

Cetus: A Source-to-Source Compiler Infrastructure for Multi-cores

Tutorial by Rudi Eigenmann, Sam Midkiff and Chirag Dave (Purdue University)
(Morning only)

State E Ballroom

This tutorial will introduce Cetus, a source-to-source restructuring compiler infrastructure for C programs. Cetus is a community resource developed with support from the National Science Foundation, and the infrastructure is available at cetus.ecn.purdue.edu. Cetus is already used by a number of research projects in the U.S. and in other countries. Its main distinction from related infrastructure efforts is its focus on high-level source-to-source translation for C programs and its abstract internal representation. These features have proven to enable efficient design and implementation of new compilation techniques. The tutorial aims to reach a wider audience and to provide guidance on using the resource and its advanced optimization techniques. These techniques include new symbolic analysis methods, such as range analysis, automatic parallelization for multicores, and optimizations for heterogeneous multicores.

We will present a half-day tutorial divided into four sections: (i) overview of Cetus as a compiler infrastructure, (ii) internal abstract program representation, (iii) optimization and analysis passes currently available in Cetus, and (iv) ongoing developments. We will close with an open discussion, soliciting community feedback.

Programming Models and Compiler Optimizations for GPUs and Multi-Core Processors

Tutorial by J. (Ram) Ramanujam (Louisiana State University) and P. (Saday) Sadayappan (The Ohio State University)
(Afternoon only)

State E Ballroom

On-chip parallelism with multiple cores is now ubiquitous. Because of power and cooling constraints, recent performance improvements in both general-purpose and special-purpose processors have come primarily from increased on-chip parallelism rather than increased clock rates. Parallelism is therefore of considerable interest to a much broader group than developers of parallel applications for high-end supercomputers. Several programming environments have recently emerged in response to the need to develop applications for GPUs, the Cell processor, and multi-core processors from AMD, IBM, Intel, and others. As commodity computing platforms all go parallel, programming these platforms for high performance has become an extremely important issue. There has been considerable recent interest in two complementary approaches: programming models for these architectures and compiler optimizations targeting them.

This tutorial will provide an introductory survey covering both of these aspects. In contrast to conventional multicore architectures, GPUs and the Cell processor must exploit parallelism while explicitly managing the physical memory on the processor (since there are no hardware-managed caches), orchestrating the movement of data between large off-chip memories and the limited on-chip memory. The tutorial will address this explicit memory management in detail.

HPCA Activities

There are also several HPCA workshops and tutorials. See the HPCA Workshops and tutorials page for more information.