Lectures: Friday 10:15-12:00 and 13:15-15:00, room INM010
Lab: Friday 10:15-12:00, room BC 07/08, and 13:15-15:00, room INF1
Newsgroup: news://epfl.ic.cours.act
Students will learn about techniques used to implement high
level languages and about compilation techniques used to obtain high
performance on modern computer architectures. They will also get the
opportunity to study one of these techniques in depth and gain
experience with implementation issues through a project in the context
of an actual compiler.
- Optimization techniques
  - Data-flow analysis, program optimization, and code generation across basic blocks, procedures, and complete programs.
  - Interprocedural and intraprocedural analysis, intermediate representations, register allocation, and instruction scheduling.
  - Dependence analysis and loop transformations.
- Implementation of high level languages
  - Implementation of higher order functions, coroutines, and processes.
  - Uniprocessor garbage collector techniques.
  - Virtual machines and the efficient implementation of their interpreters.
The course will include two practical projects.
Lecturer: Dr. Erik Stenman, office: none, phone: +46-702705615
Assistant: Iulian Dragos, office: INR321, phone: 36864
Course schedule
Date | 10:15-12:00 and 13:15-15:00, room INM010 (Lecture) | 10:15-12:00 and 13:15-15:00, room INF1, BC 07/08 (Project)
11.03.05 | Lecture 1 & 2 | --
18.03.05 | Lecture 3 & 4 -- Project 1 spec | --
25.03.05 | Easter break | --
01.04.05 | Easter break | --
08.04.05 | Lecture 5 & 6 | --
15.04.05 | -- | Project 1
22.04.05 | Lecture 7 & 8 | --
29.04.05 | -- | Project 2
06.05.05 | Lecture 9 & 10 | --
13.05.05 | -- | Project 2
20.05.05 | -- | Project 2
27.05.05 | -- | Project 2
03.06.05 | -- | Project 2
10.06.05 | Lecture 11 & 12 | --
17.06.05 | Wrap-up @ 10:15 | --
Exams: 13.06.05 -- 16.06.05 in Room BC 344
Exam schedule (the chosen paper is given in brackets)
Time | (Mon) 13.06.2005 | (Tu) 14.06.2005 | (Wed) 15.06.2005 | (Thu) 16.06.2005
09:00-09:20 | --- | Jerome Caffaro [8] | Pascal Jermini [8] | Marc Torruella [8]
09:20-09:40 | --- |  | Philippe Moret [8] | Martin Rubli [8]
09:40-10:00 | --- |  | Lukas Rytz [4] | Claudia Rappo [2]
10:00-10:20 | Ali Al-Shabibi [2] | Laurent Haan [8] | Florian Hof [3] | Laurent Grangier [8]
10:20-10:40 | Cedric Jeanneret [8] |  | Jean-Philippe Pellet [4] | Jérôme Rousselot [8]
14:00-14:20 | Eyal Kaspi [3] |  | Thierry Monney [7] | Marc Moser [4]
14:20-14:40 | David Wenger [2] |  | Sébastien Cuendet [4] | Ouafi Khaled [8]
14:40-15:00 | Ariane Pasquier [4] |  | Nils Raning [3] | Pascal Perez [8]
15:00-15:20 | Jérôme Pasquier [8] |  | Matthieu Dietrich [8] |
- Introduction to Optimizing Compilers.
Slides as PDFs 6/page, 1/page.
Read: M: Ch 1, A: Ch 1, E: Ch 8.1, 8.2, 8.4.
- Terminology, foundations of dataflow analysis
(partial orders and lattices) &
introduction to abstract interpretation.
Slides as PDFs 6/page, 1/page.
Read: M: Ch 8.2, 17.1, A: Ch 7.1-7.2, 8.1-8.2, E: Ch 9.1-9.2.
- Local (one basic block) optimizations: CSE, constant propagation,
copy propagation, dead code elimination,
(algebraic simplification, strength reduction).
Using dataflow analysis (abstract interpretation) for
global optimizations (one CFG): reaching definitions,
available expressions, and liveness analysis.
(A small liveness-analysis sketch is given right after this lecture list.)
Slides as PDFs 6/page, 1/page.
Read: M: Ch 17.2-17.5, A: Ch 7.3-7.5, 8.3-8.5, E: Ch 8.3, 8.5, 8.6.
- Static Single Assignment Form (SSA), Dominators, and SSA-based Dead Code Elimination.
Slides as PDFs 6/page, 1/page.
Read: M: Ch 18.0-18.1, 19.0-19.3, A: Ch 8.11, E: Ch 9.3, Ch 10.3.3, Papers 1 & 2.
- Sparse Conditional Constant Propagation.
Slides as PDF 6/p or 1/p.
Read: M: Ch 19.3, 19.5, A: Ch 12.6, E: Ch 10.3, 10.4.1, Paper 3.
- (Global) Register Allocation.
Slides as PDF 6/p or 1/p.
Read: M: Ch 11, A: Ch 16, E: Ch 13.
- PART 1: Partial Redundancy Elimination.
Read: A: Ch 13.0-13.2, E: Ch 10.3.5.
PART 2: Loop Optimizations.
Read: M: Ch 18, A: Ch 13.2.
PART 3: Lazy Code Motion.
Read: A: Ch 13.3, E: Ch 10.3.2, Papers 4, 5 & 6.
Slides as PDF 6/p or 1/p.
- Code Scheduling.
Slides as PDF 6/p or 1/p.
Read: M: Ch 20, A: Ch 17, E: Ch 12.
- Summary of part 1 of the course: "Optimization techniques".
Introduction to part 2: "Implementation of high level languages".
Implementation of objects and functional programming languages (higher order functions, laziness).
Slides as PDF 6/p or 1/p.
Read: M: Ch 14, Ch 15, E: Ch 7.10.
- Implementation of Concurrency.
Slides as PDF 6/p or 1/p.
- (Automatic) Memory Management.
Slides as PDF 6/p or 1/p.
Read: M: Ch 13, (Ch 21), A: (Ch 20), E: Ch 6.7.
- Virtual Machines, Interpretation Techniques, and Just-In-Time Compilers.
Slides as PDF 6/p or 1/p.
- Bits and pieces, such as implementation of exceptions, linkers and loaders.
Q&A.
Summary.
Slides as PDF 6/p or 1/p.
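As a companion to the dataflow-analysis lecture above (reaching definitions, available expressions, liveness), here is a minimal sketch of iterative liveness analysis in Scala. The Block encoding, block names, and example CFG are invented for illustration only; the project framework has its own representations.

  object LivenessSketch {
    // One basic block of a toy CFG. The field names are invented for this sketch.
    case class Block(name: String,
                     uses: Set[String],    // variables read before any write in the block
                     defs: Set[String],    // variables written in the block
                     succs: List[String])  // names of successor blocks

    // Iterative backward analysis, using the lecture's equations:
    //   in[B]  = use[B] union (out[B] minus def[B])
    //   out[B] = union of in[S] over all successors S of B
    def liveness(blocks: List[Block]): (Map[String, Set[String]], Map[String, Set[String]]) = {
      var in  = blocks.map(b => b.name -> Set.empty[String]).toMap
      var out = in
      var changed = true
      while (changed) {                      // repeat until a fixed point is reached
        changed = false
        for (b <- blocks) {
          val newOut = b.succs.flatMap(in).toSet
          val newIn  = b.uses ++ (newOut -- b.defs)
          if (newIn != in(b.name) || newOut != out(b.name)) {
            in = in.updated(b.name, newIn)
            out = out.updated(b.name, newOut)
            changed = true
          }
        }
      }
      (in, out)
    }

    def main(args: Array[String]): Unit = {
      // b1: x = 1; y = 2    b2: if (x < y) goto b3    b3: return y
      val cfg = List(
        Block("b1", uses = Set(),         defs = Set("x", "y"), succs = List("b2")),
        Block("b2", uses = Set("x", "y"), defs = Set(),         succs = List("b3")),
        Block("b3", uses = Set("y"),      defs = Set(),         succs = Nil))
      val (liveIn, liveOut) = liveness(cfg)
      println(liveIn)   // b1 -> {}, b2 -> {x, y}, b3 -> {y}
      println(liveOut)  // b1 -> {x, y}, b2 -> {y}, b3 -> {}
    }
  }

The sketch processes blocks in an arbitrary order; a worklist in reverse post-order, as discussed in the lectures, converges faster but would not change the result.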
You can download the solution to the exercise session of April 22 (pdf),
as well as Exercise3.pdf and Exercise4.pdf.
There will be two projects during the course. The performance
on these projects will directly influence the grade. Students are
allowed and encouraged to work in groups of two.
As in the first compilation course, we will use Sygeco
to manage the course. You are kindly asked to register yourself
and your group on this system before the deadline for the first
project.
The first project will be to implement a simple register
allocator using Sethi-Ullman register allocation for trees.
(See Chapter 11.5 in "Modern Compiler Implementation").
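To give an idea of the technique, here is a minimal sketch of Sethi-Ullman labeling and code generation for a toy expression type. It is not the project skeleton: the Expr encoding, register names, and pseudo-instructions are invented for illustration, and spilling (when a tree needs more registers than are available) as well as operand order for non-commutative operators are deliberately ignored.

  object SethiUllmanSketch {
    // A toy expression tree; the actual project works on the course framework's trees.
    sealed trait Expr
    case class Leaf(name: String) extends Expr                 // variable or constant
    case class Bin(op: String, l: Expr, r: Expr) extends Expr  // binary operation

    // Sethi-Ullman (Ershov) number: minimum registers needed to evaluate the
    // tree without spilling intermediate results to memory.
    def label(e: Expr): Int = e match {
      case Leaf(_)      => 1
      case Bin(_, l, r) =>
        val (nl, nr) = (label(l), label(r))
        if (nl == nr) nl + 1 else math.max(nl, nr)
    }

    // Emit pseudo-assembly, always evaluating the register-hungrier operand first.
    def gen(e: Expr, regs: List[String]): String = e match {
      case Leaf(n) =>
        println(s"  load $n -> ${regs.head}")
        regs.head
      case Bin(op, l, r) =>
        val (first, second) = if (label(l) >= label(r)) (l, r) else (r, l)
        val r1 = gen(first, regs)        // result of the bigger subtree stays in regs.head
        val r2 = gen(second, regs.tail)  // smaller subtree uses the remaining registers
        println(s"  $op $r1, $r2 -> $r1")
        r1
    }

    def main(args: Array[String]): Unit = {
      val e = Bin("add", Bin("mul", Leaf("a"), Leaf("b")), Leaf("c"))
      println(s"needs ${label(e)} registers")   // prints: needs 2 registers
      gen(e, List("r1", "r2", "r3"))
    }
  }

Evaluating the more demanding subtree first is what keeps the register count at the Sethi-Ullman number; evaluating the other subtree first can require one extra register.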
The second project will be to implement any number of
optimizations in an existing compiler framework. To pass,
the performance on the benchmark suite has to be improved
by a certain percentage.
Suggested optimizations: see the project detail page.
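To give a flavour of what "implementing an optimization" means in its simplest form, here is a sketch of a constant-folding and algebraic-simplification pass over a toy, side-effect-free expression IR. The IR and the pass are invented for illustration and are not the course compiler framework; project passes have to work on its actual intermediate representation.

  object ConstantFoldingSketch {
    // A toy, side-effect-free expression IR; NOT the course compiler framework.
    sealed trait Expr
    case class Const(value: Int) extends Expr
    case class Var(name: String) extends Expr
    case class Add(l: Expr, r: Expr) extends Expr
    case class Mul(l: Expr, r: Expr) extends Expr

    // Bottom-up constant folding plus a few algebraic simplifications.
    def fold(e: Expr): Expr = e match {
      case Add(l, r) => (fold(l), fold(r)) match {
        case (Const(a), Const(b)) => Const(a + b)  // fold constant operands
        case (x, Const(0))        => x             // x + 0 = x
        case (Const(0), x)        => x             // 0 + x = x
        case (x, y)               => Add(x, y)
      }
      case Mul(l, r) => (fold(l), fold(r)) match {
        case (Const(a), Const(b))          => Const(a * b)
        case (_, Const(0)) | (Const(0), _) => Const(0)  // safe only because expressions are pure
        case (x, Const(1))                 => x         // x * 1 = x
        case (Const(1), x)                 => x         // 1 * x = x
        case (x, y)                        => Mul(x, y)
      }
      case other => other  // constants and variables are already in normal form
    }

    def main(args: Array[String]): Unit = {
      // (a * 1) + (2 * 3)  simplifies to  a + 6
      println(fold(Add(Mul(Var("a"), Const(1)), Mul(Const(2), Const(3)))))
      // prints: Add(Var(a),Const(6))
    }
  }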
The exam will take place in room BC 344.
There will be an oral exam during the last week of the course.
This exam will cover the lecture notes (the slides) and the
corresponding information that can be found in all three books.
(I.e., the intersection of the information in the books. The
lazy student can try to find the minimal set to read, while the
highly motivated student can read all three books to get three
different views on the material.) It will also cover one of the
articles mentioned below (the handouts), which you will need to
understand.
You will have to register for a 20-minute slot for the
individual exam by sending an email to Iulian or Erik.
To encourage you all to come to the last lecture,
there will be one written question during that lecture that
will also count towards your final grade.
Course book
Reference books (can also be used instead of the official course book)
The slides
The slides from all the lectures are available as one big (ca 5 MB)
pdf file.
Handouts
- Ron Cytron, Jeanne Ferrante, Barry K. Rosen, Mark N. Wegman,
and F. Kenneth Zadeck.
Efficiently Computing Static Single Assignment Form
and the Control Dependence Graph.
ACM Transactions on Programming Languages and Systems (TOPLAS),
13(4):451-490, October 1991.
- Preston Briggs, Keith D. Cooper, Timothy J. Harvey, and L. Taylor Simpson.
Practical improvements to the construction and destruction of static single assignment form.
Software: Practice & Experience, 28(8):859-881, July 1998.
- Mark N. Wegman and F. Kenneth Zadeck.
Constant Propagation with Conditional Branches.
ACM Transactions on Programming Languages and Systems (TOPLAS),
13(2):181-210, April 1991.
- Jens Knoop, Oliver Rüthing, and Bernhard Steffen.
Lazy Code Motion.
Proceedings of the SIGPLAN Conference on Programming Language
Design and Implementation (PLDI), pages 212-223, July 1992.
- Karl-Heinz Drechsler and Manfred P. Stadel.
A Variation of Knoop, Rüthing, and Steffen's Lazy Code Motion.
SIGPLAN Notices 28(5), pages 29-38, 1993.
- Robert Kennedy, Sun Chan, Shin-Ming Liu, Raymond Lo, Peng Tu, and Fred Chow.
Partial redundancy elimination in SSA form.
ACM Transactions on Programming Languages and Systems (TOPLAS),
21(3):627-676, May 1999.
- Paul Wilson.
Uniprocessor Garbage Collection Techniques.
(Read only Sections 1, 2, 3.1-3.3, 4.1-4.3, 4.5-4.7, 8, and 9.)
- Anton Ertl and David Gregg.
Branch prediction and VM interpreters.
- Xavier Leroy.
The ZINC experiment: an economical implementation of the ML language.
The Deadly Sins from P. J. Brown's
Writing Interactive Compilers and Interpreters, Wiley 1979.
- The first deadly sin is to code before you think.
- The second deadly sin is to assume the user has all the knowledge the compiler writer has.
- The third deadly sin is not to write proper documentation.
- The fourth deadly sin is to ignore language standards.
- The fifth deadly sin is to treat error diagnosis as an afterthought.
- The sixth deadly sin is to equate the unlikely with the impossible.
- The seventh deadly sin is to make the encoding of the compiler dependent on its data formats.
- The eighth deadly sin is to use numbers for objects that are not numbers.
- The ninth deadly sin is to pretend you are catering for everyone at the same time.
- The tenth deadly sin is to have no strategy for processing break-ins.
- The eleventh deadly sin is to rate the beauty of mathematics above the usability of your compiler.
- The twelfth deadly sin is to let any error go undetected.
- The thirteenth deadly sin is to leave users to find the errors in your compiler.