Monday, December 14, 2020

TypeFoundry: new ERC Consolidator Grant

I am very pleased to have received an ERC Consolidator Grant for my TypeFoundry proposal.

This will be a five year project to develop the foundations of bidirectional type inference. If you are interested in pursuing a PhD in this area, or conversely, are finishing a PhD in this area, please get in touch!

Many modern programming languages, whether developed in industry, like Rust or Java, or in academia, like Haskell or Scala, are typed. All the data in a program is classified by its type (e.g., as strings or integers), and at compile-time programs are checked for consistent usage of types, in a process called type checking. Thus, the expression 3 + 4 will be accepted, since the + operator takes two numbers as arguments, but the expression 3 + "hello" will be rejected, as it makes no sense to add a number and a string. Though this is a simple idea, sophisticated type systems can track properties like algorithmic complexity, data-race freedom, differential privacy, and data abstraction.

In general, programmers must annotate programs to tell compilers the types to check. In theoretical calculi, it is easy to demand enough annotations to trivialize typechecking, but this can make the annotation burden unbearable: often larger than the program itself! So, to transfer results from formal calculi to new programming languages, we need type inference algorithms, which reconstruct missing data from partially-annotated programs.

However, the practice of type inference has outpaced its theory. Compiler authors have implemented many type inference systems, but the algorithms are often ad-hoc or folklore, and the specifications they are meant to meet are informal or nonexistent. This makes it hard to learn how to implement type inference, hard to build alternative implementations (whether for new compilers or analysis engines for IDEs), and hard for programmers to predict if refactorings will preserve typability.

In TypeFoundry, we will use recent developments in proof theory and semantics (like polarized type theory and call-by-push-value) to identify the theoretical structure underpinning type inference, and use this theory to build a collection of techniques for type inference capable of scaling up to the advanced type system features in both modern and future languages.

One of the things that makes me happy about this (beyond the obvious benefits of research funding and the recognition from my peers) is that it shows off the international character of science. I'm an Indian-American researcher, working in the UK, and being judged and funded by researchers in Europe. There has been a truly worrying amount of chauvinism and jingoism in public life recently, and the reminder that cosmopolitanism and universalism are real things too is very welcome.

Also, if you are planning on submitting a Starting Grant or Consolidator proposal to the ERC in the coming year about programming languages, verification or the like, please feel free to get in touch, and I'll be happy to share advice.

Wednesday, December 2, 2020

Church Encodings, Inductive Types, and Relational Parametricity

My blogging has been limited this past year due to RSI, but I do not want to leave things entirely fallow, and last year I wrote an email which can be edited into a decent enough blog post.

Quite often, people will hear that System F, the polymorphic lambda calculus, satisfies a property called relational parametricity. We also often hear people say that the parametricity property of System F lets you prove that a Church encoding of an inductive datatype actually satisfies all the equational properties we expect of inductive types.

But it's surprisingly hard to find an explicit account of how you go from the basic properties of parametricity (the relational interpretation of System F and the identity extension principle) to the proof that inductive types are definable. I learned how this works from Bob Atkey's paper Relational Parametricity for Higher Kinds, but the proofs are kind of terse, since his page count was mostly focused on the new stuff he had invented.

In the sequel, I'm going to assume that you know what the relational model of System F looks like, that the term calculus satisfies the abstraction theorem (i.e., the fundamental theorem of logical relations), and also that the model satisfies the identity extension property -- if you take the relational interpretation of a type B(α), and fill in the type variable α with the equality relation for the type A, then the relational interpretation of B[A/α] will be the equality relation for the type B[A/α]. In what follows I will often write Id[X] to mean the equality relation for the type X.

For completeness' sake, I also give a quick summary of the model at the end of the post.

Haskell-style functors in System F

Given all this preface, we now define a functor F as a type expression F(-) with a hole, along with an operation fmap : ∀a,b. (a → b) → F(a) → F(b), such that

fmap _ _ id = id
fmap _ _ f ∘ fmap _ _ g = fmap _ _ (f ∘ g)

I've written _ to indicate System F type arguments I'm suppressing. Basically think of these as Haskell-style functors.
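If it helps to see this in running code, here is a rough OCaml sketch of such an (F, fmap) pair. The FUNCTOR signature and the ListF instance are just illustrative names of mine, and of course OCaml cannot check the two laws for us:

module type FUNCTOR = sig
  type 'a t                                  (* the type expression F(-) *)
  val fmap : ('a -> 'b) -> 'a t -> 'b t      (* must satisfy the two laws above *)
end

(* The list functor as an instance. *)
module ListF : FUNCTOR with type 'a t = 'a list = struct
  type 'a t = 'a list
  let fmap = List.map
end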

Next, we can define the inductive type μF using the usual Church encoding as:

μF = ∀a. (F(a) → a) → a

foldF : ∀a. (F(a) → a) → μF → a
foldF = Λa. λf : F(a) → a. λx:μF. x [a] f

inF : F(μF) → μF
inF x = Λa. λf : F(a) → a. 
          let g : μF → a       = foldF [a] f in
          let h : F(μF) → F(a) = fmap [μF] [a] g in
          let v : F(a)          = h x in
          f v

I've written out inF with type-annotated local bindings to make it easier to read. If you inlined all the local bindings and suppressed type annotations, then it would read:

inF : F(μF) → μF
inF x = Λa. λf : F(a) → a. 
          f (fmap (foldF f) x)
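If you prefer running code to System F syntax, here is a sketch of the same construction in OCaml. Since OCaml lacks higher-kinded polymorphism, I parameterize over a FUNCTOR module (the signature from the sketch above, repeated here), and use a record with a polymorphic field to stand in for the rank-2 type μF; the names Mu, into, and NatF are mine, purely for illustration:

module type FUNCTOR = sig
  type 'a t
  val fmap : ('a -> 'b) -> 'a t -> 'b t
end

module Mu (F : FUNCTOR) = struct
  (* μF = ∀a. (F(a) → a) → a, as a record with a polymorphic field *)
  type t = { run : 'a. ('a F.t -> 'a) -> 'a }

  (* foldF just applies the Church encoding to the algebra *)
  let fold (k : 'a F.t -> 'a) (x : t) : 'a = x.run k

  (* inF, in the inlined style above: f (fmap (foldF f) x) *)
  let into (x : t F.t) : t =
    { run = fun f -> f (F.fmap (fun m -> fold f m) x) }
end

(* Example: the natural numbers as μ(1 + -). *)
module NatF = struct
  type 'a t = Z | S of 'a
  let fmap f = function Z -> Z | S x -> S (f x)
end

module Nat = Mu (NatF)

let zero = Nat.into NatF.Z
let succ n = Nat.into (NatF.S n)
let to_int n = Nat.fold (function NatF.Z -> 0 | NatF.S k -> k + 1) n

With this encoding, to_int (succ (succ zero)) computes 2 using nothing but fold.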

F-algebra Homomorphisms and the Graph Lemma

Now, we can prove a lemma about F-algebra homomorphisms, which is the tool we will use to prove initiality. But what are F-algebra homomorphisms?

An F-algebra is a pair (X, g : F(X) → X). An F-algebra homomorphism between two F-algebras (X, k : F(X) → X) and (Y, k' : F(Y) → Y) is a function f : X → Y such that the following ASCII-art diagram commutes:

 F(X) — k  → X
  |           |
 fmap X Y f   f
  ↓           ↓
 F(Y) — k' → Y

That is, for all u : F(X), we want f(k u) = k'(fmap [X] [Y] f u)
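As a purely illustrative example (in OCaml, with names of my own choosing), take F(X) = 1 + X: the "count" algebra on int and the "string of bars" algebra on string are related by the homomorphism that prints n as n bars. The assert only spot-checks the square on a few inputs, of course; the proofs below quantify over all of them.

(* The functor F(X) = 1 + X, and two F-algebras. *)
type 'a natf = Z | S of 'a

let fmap f = function Z -> Z | S x -> S (f x)

let k : int natf -> int = function Z -> 0 | S n -> n + 1
let k' : string natf -> string = function Z -> "" | S s -> s ^ "|"

(* h is an F-algebra homomorphism from (int, k) to (string, k'):
   for every u, h (k u) = k' (fmap h u). *)
let h n = String.make n '|'

let square_commutes u = h (k u) = k' (fmap h u)
let () = assert (List.for_all square_commutes [Z; S 0; S 3])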

Before we prove something about homomorphisms, we need to prove a technical lemma called the graph lemma, which will let us connect parametricity with our functorial definitions.

Graph Lemma: Let (F, fmap) be a functor, and h : A → B be a function. Define the graph relation <h> to be the relation {(a,b) | b = h(a) }. Then F<h> ⊆ <fmap [A] [B] h>.

Proof:

  1. First, note that fmap has the type ∀a b. (a → b) → F(a) → F(b).
  2. Since fmap is parametric, we can instantiate its quantified type variables a and b with the relations <h> and Id[B], giving us:

    (fmap [A] [B], fmap [B] [B]) ∈ (<h> → Id[B]) → (F<h> → F[Id[B]])

     By identity extension, F[Id[B]] = Id[F(B)], so this is the same as:

    (fmap [A] [B], fmap [B] [B]) ∈ (<h> → Id[B]) → (F<h> → Id[F(B)])
  3. Note that (h, id) ∈ <h> → Id[B].
  4. Hence (fmap [A] [B] h, fmap [B] [B] id) ∈ (F<h> → Id[F(B)])
  5. By the first functor law, fmap [B] [B] id = id, so (fmap [A] [B] h, id) ∈ (F<h> → Id[F(B)])
  6. Assume (x, y) ∈ F<h>.
    1. Hence (fmap [A] [B] h x, id y) ∈ Id[F(B)].
    2. So fmap [A] [B] h x = y.
    3. Therefore (x, y) ∈ <fmap [A] [B] h>.
  7. Therefore F<h> ⊆ <fmap [A] [B] h>.

Graph relations are just functions viewed as relations, and the graph lemma tells us that the relational semantics of a type constructor applied to a graph relation <h> will behave like the implementation of the fmap term. In other words, it connects the relational semantics to the code implementing functoriality. (As an aside, this feels like it should be an equality, but I only see how to prove the inclusion.)

We use this lemma in the proof of the homomorphism lemma, which we state below:

Homomorphism Lemma: Let (F, fmap) be a functor. Given two F-algebras (A, k : F(A) → A) and (B, k' : F(B) → B), and an F-algebra homomorphism h : (A,k) → (B,k'), then for all e : μF, we have

e [B] k' = h (e [A] k)

Proof:

  1. First, by the parametricity of e, we know that

     (e [A], e [B]) ∈ (F<h> → <h>) → <h>
  2. We want to apply the arguments (k, k') to (e [A], e [B]), so we have to show that (k, k') ∈ F<h> → <h>.
  3. To show this, assume that (x, y) ∈ F<h>.

    1. Now we have to show that (k x, k' y) ∈ <h>.
    2. Unfolding the definition of <h>, we need (h (k x), k' y) ∈ Id[B].
    3. Since h is an F-algebra homomorphism, we have that h (k x) = k' (fmap [A] [B] h x).
    4. So we need to show (k' (fmap [A] [B] h x), k' y) ∈ Id[B].
    5. Now, we know that (x, y) ∈ F<h>.
    6. By the graph lemma, we know F<h> ⊆ <fmap [A] [B] h>.
    7. So (x, y) ∈ <fmap [A] [B] h>.
    8. Unfolding the definition, we know y = fmap [A] [B] h x.
    9. So we want (k' (fmap [A] [B] h x), k' (fmap [A] [B] h x)) ∈ Id[B].
    10. Since Id[B] is an equality relation, this is immediate.
  4. Hence (k, k') ∈ F<h> → <h>.
  5. Therefore (e [A] k, e [B] k') ∈ <h>.
  6. Unfolding the definition of <h>, we know e [B] k' = h(e [A] k). This is what we wanted to show.

Discussion:

The whole machinery of F-algebra homomorphisms basically exists to phrase the commuting conversions in a nice way. We just proved that for e : μF, we have

 e [B] k' = h (e [A] k)

Recall that for Church encoded inductive types, the fold is basically the identity, so this result is equivalent (up to beta) to:

 foldF [B] k' e = h (foldF [A] k e)

So this lets us shift contexts out of iterators when they are F-algebra homomorphisms. Note that this is also the proof where the graph lemma actually gets used.

The Beta and Eta Rules for Inductive Types

Next, we'll prove the beta- and eta-rules for inductive types. I'll do it in three steps:

  • The beta rule (eg, let (x,y) = (e1, e2) in e' == [e1/x, e2/y]e'),
  • The basic eta rule (eg, let (x,y) = e in (x,y) == e)
  • The commuting conversions (eg, C[e] = let (x,y) = e in C[(x,y)])

Theorem (beta rule): If k : F(A) → A and e : F(μF), then

 foldF [A] k (inF e) = k (fmap [μF] [A] (foldF [A] k) e)

Proof: This follows by unfolding the definitions and beta-reducing. (You don't even need parametricity for this part!)

This shows that for any F-algebra (A, k), foldF [A] k is an F-algebra homomorphism from (μF, inF) to (A, k).

Theorem (basic eta): For all e : μF, we have e = e [μF] inF.

Proof:

  1. Assume an arbitrary B and k : F(B) → B.
    1. Note h = foldF [B] k is an F-algebra homomorphism from (μF, inF) to (B, k) by the beta rule.
    2. Then by our lemma, e [B] k = h (e [μF] inF).
    3. Unfolding, h (e [μF] inF) = foldF [B] k (e [μF] inF).
    4. Unfolding, foldF [B] k (e [μF] inF) = e [μF] inF [B] k.
    5. So e [B] k = e [μF] inF [B] k.
  2. So for all B and k : F(B) → B, we have e [B] k = e [μF] inF [B] k.
  3. By extensionality, e = e [μF] inF.

Theorem (commuting conversions): If k : F(A) → A and f : μF → A and f is an F-algebra homomorphism from (μF, inF) to (A, k), then f = foldF [A] k.

Proof:

  1. We want to show f = foldF [A] k.
  2. Unfolding foldF, we want to show f = λx:μF. x [A] k
  3. Assume e : μF.
    1. Now we will show f e = e [A] k.
    2. By the homomorphism lemma, and the fact that f : (μF, inF) → (A, k) we know that e [A] k = f (e [μF] inF).
    3. By the basic eta rule, e [A] k = f e
  4. So for all e, we have f e = e [A] k.
  5. Hence by extensionality f = λx:μF. x [A] k.

Note that what I (as a type theorist) called "commuting conversions" is exactly the same as "initiality" (to a category theorist), so now we have shown that the Church encoding of a functor actually lets us define the initial algebra.

Thus, we know that inductive types are definable!

Appendix: the Relational Model of System F

In this appendix, I'll quickly sketch one particular model of System F, basically inspired by the PER model of Longo and Moggi. What we will do is to define types as partial equivalence relations (PERs for short) on terms. Recall that a partial equivalence relation A on a set X is a relation on X which is (a) symmetric (i.e., if (x, x') ∈ A then (x', x) ∈ A), and (b) transitive (if (x, y) ∈ A and (y, z) ∈ A then (x, z) ∈ A).

We take the set of semantic types to be the set of PERs on terms which are closed under evaluation in either direction:

Type = { X ∈ PER(Term) | ∀x, x', y, y'. 
                           if x ↔∗ x' and y ↔∗ y' 
                           then (x,y) ∈ X ⇔ (x',y') ∈ X}

PERs form a complete lattice, with the meet given by intersection, which is the property which lets us interpret the universal quantifier of System F. We can then take a type environment θ to be a map from variables to semantic types, and interpret types as follows:

〚a〛θ ≜ θ(a)
〚A → B〛θ ≜ { (e, e') ∈ Term × Term | ∀ (t,t') ∈ 〚A〛θ. 
                                          (e t,e' t') ∈ 〚B〛θ }
〚∀a. A〛θ ≜ ⋂_{X ∈ Type} 〚A〛(θ, X/a)

This is the PER model of System F, which is quite easy to work with but not quite strong enough to model parametricity. To model parametricity, we need a second semantics for System F, the so-called relational semantics.

A relation R between two PERs A and B over a set X is a relation on X which respects A and B. Suppose that (a,a') ∈ A and (b,b') ∈ B. Then R respects A and B when (a, b) ∈ R if and only if (a', b') ∈ R.

Now, suppose we have two environments θ₁ and θ₂ sending type variables to types, and a relation environment ρ that sends each type variable a to a relation respecting θ₁(a) and θ₂(a). Then we can define the relational interpretation of System F types as follows:

R〚a〛θ₁ θ₂ ρ ≜ ρ(a)
R〚A → B〛θ₁ θ₂ ρ ≜ { (e, e') ∈ Term × Term | ∀ (t,t') ∈ R〚A〛θ₁ θ₂ ρ. 
                                          (e t,e' t') ∈ R〚B〛θ₁ θ₂ ρ }
R〚∀a. A〛θ₁ θ₂ ρ ≜ ⋂_{X, Y ∈ Type, S ∈ Rel(X,Y)} 
                  R〚A〛(θ₁, X/a) (θ₂, Y/a) (ρ, S/a)

Additionally, we need to redefine the PER model's interpretation of the ∀a.A case, to use the relational model:

〚∀a. A〛θ ≜ ⋂_{X, Y ∈ Type, S ∈ Rel(X,Y)} 
                 R〚A〛(θ, X/a) (θ, Y/a) (Id[θ], S/a)

Here, to give a PER interpretation for the forall type, we use the relational interpretation, duplicating the type environment and using the identity relation for each of its variables (written Id[θ]). By the identity relation for a PER A, we mean A itself, viewed as a relation between A and A which respects A.

This modified definition ensures the model satisfies the identity extension property -- if you take the relational interpretation of a type B(α), and fill in the type variable α with the equality relation for the type A, then the relational interpretation of B[A/α] will be the equality relation for the type B[A/α]. As before, I write Id[X] to mean the equality relation for the type X.

The term calculus for System F also satisfies the abstraction property (aka the fundamental property of logical relations), which says that given a well-typed term Θ; · ⊢ e : A, two type environments θ₁ and θ₂, and a relation environment ρ between them, e is related to itself in R〚A〛θ₁ θ₂ ρ.

Friday, June 19, 2020

PLDI 2020 Conference Report

I just finished "attending" PLDI 2020, a programming languages conference. Like many conferences in computer science, due to COVID-19, on short notice the physical conference had to be cancelled and replaced with an online virtual conference. Talks were streamed to Youtube, questions were posted to a custom Slack channel, there was a startling variety of online chat applications to hold discussions with, and so on.

It was obviously an enormous amount of work to put together (there was even a custom chat application, Clowdr, written specifically for SIGPLAN(?) conferences), which makes me feel very unkind to report that for me, the online conference experience was a complete waste of time.

So, what went wrong?

To understand this, it is worth thinking about what the purpose of a conference is. The fundamental purpose of a conference is to meet people with shared interests and talk to them. The act of talking to people is fundamental, since it is how (a) you get to see new perspectives about subjects you are interested in, and (b) you build the social networks that make it possible for you to become and remain an expert. (Recall the 19th century economist Alfred Marshall's description of this process: "The mysteries of the trade become no mysteries; but are as it were in the air, and children learn many of them unconsciously.")

Even talks -- the things we build conferences around -- exist to facilitate this process of conversation. Talks at conferences really have two important functions: first, the topic of a talk is a filtering device, which helps identify the subset of the audience which is interested in this topic. This means that in the break after the talk, it is now much easier to find people to talk to who share interests with you.

Second, talks supply Schelling-style focal points: you and your interlocutor have common knowledge that you are both interested in the topic of the session, and you both also know that you saw the talk, which gives you a subject of conversation. (Note: as a speaker, you do communicate with your audience, but indirectly, as your talk becomes the seed of a conversation between audience members, which they will use to develop their own understandings.)

The fundamental problem with PLDI 2020 was that it was frustratingly hard to actually talk to people. The proliferation of timezones and chat applications meant that I found it incredibly difficult to actually find someone to talk to -- at least one of the apps (gather.town) just never worked for me, sending me into an infinite sign-up loop, and the others were either totally empty of people who were actually there, or did not seem suited for conversation. (One key issue is that many people log in to an app, and then stop paying attention to it, which makes presence indications useless.)

So if social distancing and travel restrictions due to COVID-19 remain in force (as seems likely), I think it would be better to simply convert our PL conferences fully into journals, and look for other ways to make online networking happen. The sheer amount of labour going into PLDI 2020 supplies strong evidence that simply trying to replicate a physical conference online cannot be made to work with any amount of effort.

However, until I convince everyone that I am totally right about everything, there will still be online conferences -- for example, ICFP in August. So to make sure that I can look forward to the experience rather than dreading it, I need a plan for actually talking to people.

So, if you are reading this, and are (or know) a researcher in PL who would like to talk, then give me a ping and we can arrange something for ICFP.

  • I am explicitly happy to talk to graduate students and postdocs about their research.

  • I know the most about dependent types, linear and modal types, type inference, separation logic, and program verification.

  • I know less about (but still like talking about) compiler stuff, theorem proving, SMT, Datalog, macros, and parsing.

  • This list is not an exhaustive list of things I'm interested in. One of the nice things about conversations is that you can use shared touchstones to discover how to like new things.

Once you get in touch, I'll put you on a list, and then once the conference talks are listed, we can organize a time to have a chat after we have all seen a talk we are interested in. (Having seen a talk means we will all have shared common knowledge fresh in our minds.) Assuming enough people are interested, I will aim for meetings of 3-5 people, including myself.

Thursday, February 13, 2020

Thought Experiment: An Introductory Compilers Class

Recently, I read a blog post in which Ben Karel summarized the reaction to a request John Regehr made about how to teach compilers, and as one might predict, the Internet was in agreement that the answer was "absolutely everything". Basically, everyone has a different perspective on what the most important thing is, and so the union of everyone's most important thing is everything.

In fact, from a certain point of view, I don't disagree. From parsing to typechecking to intermediate representations to dataflow analysis to register allocation to instruction selection to data representations to stack layouts to garbage collectors to ABIs to debuggers, basically everything in compilers is packed to the gills with absolutely gorgeous algorithms and proofs. The sheer density of wonderful stuff in compilers is just out of this world.

However, John specified an introductory course. Alas, this makes "everything" the wrong answer -- he is asking for a pedagogy-first answer which fits coherently into a smallish number of lectures. This means that we have to start with the question of what we want the students to learn, and then build the curriculum around that.

The Goal of Teaching Compilers

So, what should students have learned after taking a compilers class?

The obvious (but wrong) answer is "how to write a compiler", because in all likelihood they will forget most of the techniques for implementing compilers soon after the course ends. This is because most of them – unless they catch the compiler bug – will not write very many more compilers. So if we want them to come away from a compilers class with some lasting knowledge, we have to think in terms of deeper programming techniques which (a) arise naturally when writing compilers, but (b) apply in more contexts than just compilers.

There are two obvious candidates for such techniques:

  1. Computing values by recursion over inductively-defined structures.
  2. Computing least fixed points of monotone functions by bottom-up iteration.

Both of these are fundamental algorithmic techniques. However, I think that recursion over inductive structure is much more likely to be taught outside of a compilers course, especially considering that modern functional programming (i.e., datatypes and pattern matching) is now much more often taught elsewhere in the curriculum than it used to be.

This leaves fixed-point computations.

So let's choose the goal of an intro compilers class to be teaching fixed point computations many times over, in many different contexts, with the idea that the general technique of fixed point computation can be learned via generalizing from many specific examples. The fact that they are building a compiler is significant primarily because it means that the different examples can all be seen in a motivating context.
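Concretely, the pattern the course keeps returning to is bottom-up (Kleene) iteration: start at a bottom element and apply a monotone step function until nothing changes. Here is a minimal OCaml sketch of that pattern; the names lfp and reachable_from are mine, and termination of course relies on the domain having no infinite ascending chains:

(* Bottom-up fixed point iteration: repeatedly apply a monotone step
   function until the result stops changing. `equal` decides convergence. *)
let rec lfp ~equal (step : 'a -> 'a) (x : 'a) : 'a =
  let x' = step x in
  if equal x x' then x else lfp ~equal step x'

(* Toy use: reachability in a small graph, as the least fixed point of
   "add everything one edge away from the current set". *)
module IntSet = Set.Make (Int)

let edges = [ (1, 2); (2, 3); (3, 1); (4, 5) ]

let reachable_from (start : int) : IntSet.t =
  let step s =
    List.fold_left
      (fun acc (u, v) -> if IntSet.mem u s then IntSet.add v acc else acc)
      s edges
  in
  lfp ~equal:IntSet.equal step (IntSet.singleton start)

Every analysis in the course -- NULLABLE/FIRST/FOLLOW, liveness, constancy, dominance -- is morally an instance of this loop with a different lattice and step function.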

So, this drives some choices for the design of the course.

  1. Our goal is to show how fixed point computations arise in many different contexts, and the stages of the compilation pipeline naturally supply a non-contrived collection of very different-looking contexts. Consequently, we should design the course in a full front-to-back style, covering all the phases of the compilation pipeline.

    However, this style of course has the real weakness that it can result in shallow coverage of each topic. Mitigating this weakness drives the design of the rest of the course.

  2. In particular, the compiler structure should be very opinionated.

    The thing we would normally want to do is to present a variety of options (LR versus LL parsing, graph colouring versus linear scan register allocation, CPS versus SSA for IRs, etc), and then present their strengths and weaknesses, and the engineering and design considerations which would lead you to favour one choice over another.

    But for this course, we won't do any of that. As an instructor, we will simply choose the algorithm, and in particular choose the one that most benefits from fixed point computations.

    The existence of alternatives should be gestured at in lecture, but the student-facing explanation for why we are not exploring them is that for their first compiler we are aiming for good choices, biased much more towards implementation simplicity than gcc or LLVM would be. (This will be an easier pitch if there is an advanced compilers course, or if students have a final year project where they can write a fancy compiler.)

  3. We will also need to bite the bullet and de-emphasize the aspects of compilation where fixed point computations are less relevant.

    Therefore, we will not cover runtime systems and data structure layouts in any great detail. This substantially affects the design of the language to compile -- in particular, we should not choose a language that has closures or objects. Furthermore, we will just tell students what stack layout to use, and memory management will be garbage collection via the Boehm gc.

Course/Compiler Structure

For the rest of this note, I'll call the language to compile Introl, because I always liked the Algol naming convention. Introl is a first-order functional language, basically what you get if you removed nested functions, lambda expressions, references, and exceptions from OCaml. I've attached a sketch of this language at the end of the post.

This language has two features which are "hard" -- polymorphism and datatypes. The reason for the inclusion of polymorphism is that it makes formulating type inference as a fixed point problem interesting, and the reason datatypes exist is because (a) match statements offer interesting control flow, and (b) they really show off what sparse conditional constant propagation can do.

So the course will then follow the top-down lexing to code generation approach that so many bemoan, but which (in the context of our goals) is actually totally appropriate.

Lexing and Parsing

For lexing, I would start with the usual regular expressions and NFAs thing, but then take a bit of a left turn. First, I would show them that state set sizes could explode, and then introduce them to Brüggemann-Klein and Derick Wood's deterministic regular expressions as a way of preventing this explosion.

The reason for this is that the conditions essentially check whether a regular expression is parseable by recursive descent without backtracking -- i.e., you calculate NULLABLE, FIRST and (a variant of) FOLLOW sets for the regular expression. This lets you explain what these sets mean in a context without recursion or fixed points, which makes it easy to transition to LL(1) grammars, which are fundamentally just deterministic regular languages plus recursion.

So then the calculation of these sets as a fixed point equation is very easy, and using deterministic regular languages means that the explanation of what these sets mean can be decoupled from how to compute them via a fixed point computation.

Naturally, this means that the grammar of Introl must be LL(1).
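To make the fixed point concrete, here is an OCaml sketch of NULLABLE over a toy grammar representation; the types and names are mine, not course material, and FIRST and FOLLOW work the same way with a richer lattice:

(* NULLABLE as a bottom-up fixed point over a toy grammar representation. *)
type symbol = T of string | N of string              (* terminal / nonterminal *)
type grammar = (string * symbol list list) list       (* nonterminal -> productions *)

module S = Set.Make (String)

(* A nonterminal is nullable if some production consists solely of
   symbols already known to be nullable (terminals never are). *)
let nullable (g : grammar) : S.t =
  let symbol_nullable known = function T _ -> false | N x -> S.mem x known in
  let step known =
    List.fold_left
      (fun acc (lhs, prods) ->
         if List.exists (List.for_all (symbol_nullable known)) prods
         then S.add lhs acc
         else acc)
      known g
  in
  let rec iterate known =
    let known' = step known in
    if S.equal known known' then known else iterate known'
  in
  iterate S.empty

(* Example: E -> ε | "x" E  makes E nullable. *)
let _ = nullable [ ("E", [ []; [ T "x"; N "E" ] ]) ]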

Typechecking

Typechecking for this kind of language is pretty routine, but in this case it should be phrased as an abstract interpretation, in the style of Cousot's Types as Abstract Interpretation.

The interesting thing here is that polymorphism can be presented via what Cousot called "the Herbrand abstraction". The idea is that the abstract elements are monotypes with unification variables in them, with the partial order that t₂ ≥ t₁ if there is a substitution σ such that t₂ = σ(t₁), and the unification algorithm itself as an attempt to calculate the substitution witnessing the join of two terms. I say attempt, since unification can fail. So this is a kind of partial join operation -- in the case you cannot join the two terms, there must have been a type error!
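Here is a compressed OCaml sketch of that partial join, as bare-bones first-order unification over monotypes. The representation is mine, and a real implementation would use union-find rather than an association-list substitution; the point is just that a None result is exactly the failed join, i.e., a type error.

(* Unification as a partial join on monotypes with unification variables. *)
type ty = Var of string | Int | Arrow of ty * ty

type subst = (string * ty) list

let rec apply (s : subst) = function
  | Var a -> (try apply s (List.assoc a s) with Not_found -> Var a)
  | Int -> Int
  | Arrow (t1, t2) -> Arrow (apply s t1, apply s t2)

let rec occurs a = function
  | Var b -> a = b
  | Int -> false
  | Arrow (t1, t2) -> occurs a t1 || occurs a t2

(* Returns Some of a unifying substitution, or None when the join fails. *)
let rec unify (s : subst) (t1 : ty) (t2 : ty) : subst option =
  match (apply s t1, apply s t2) with
  | Int, Int -> Some s
  | Var a, Var b when a = b -> Some s
  | Var a, t | t, Var a -> if occurs a t then None else Some ((a, t) :: s)
  | Arrow (a1, b1), Arrow (a2, b2) ->
      (match unify s a1 a2 with None -> None | Some s' -> unify s' b1 b2)
  | _ -> None

(* e.g. joining 'a -> int with int -> 'b yields ['a := int; 'b := int]. *)
let _ = unify [] (Arrow (Var "a", Int)) (Arrow (Int, Var "b"))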

In Introl as presented, top-level functions have a type annotation, and so it will work out that you end up not needing to do a serious fixed point computation to infer types. Indeed, even if you omitted annotations, the fact that unification is calculating a most general unifier means that the fixed point iteration for recursive functions terminates in one iteration. (This should not be too surprising since the Damas-Milner algorithm only needs one traversal of the syntax.)

This fact is worth working out, because a great deal of life in static analysis involves trying to find excuses to iterate less. Indeed, this is what motivates the move to SSA – which will be the very next phase of the compiler.

Conversion to SSA/CPS

The choice of IR is always a fraught one in a compiler class. In this course, I would use a static single assignment (SSA) representation. SSA is a good choice because (a) it simplifies implementing all kinds of dataflow optimizations, (b) generating it also needs dataflow analyses, and (c) it is the IR of choice for serious compilers. (a) and (b) mean we get to do lots of fixed point computations, and (c) means it won't feel artificial to today's yoof.

However, I wouldn't go directly to SSA, since I find φ-functions very difficult to explain directly. IMO, it is worth pulling a trick out of Richard Kelsey's hat, and exploiting the correspondence between continuation-passing style (CPS), and static single assignment form (SSA).

CPS often comes with all sorts of mysticism and higher-order functions, but in this case, it works out very simply. Basically, we can let-normalize the program, and then transform the surface language into basic blocks with arguments (or if you prefer, a bunch of functions tail-calling each other). This is the version of SSA that the MLton compiler used, and which (if memory serves) the Swift SIL intermediate representation uses as well.

Roughly speaking, each basic block in the program will end up having zero or more formal parameters, and jumps to that block carry arguments which fill in those parameters. This ends up making it super easy to explain, because the program just goes into let-normal form, all tail calls get compiled to jumps-with-arguments, and non-tail calls use a call IR instruction.
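For concreteness, here is roughly the shape such an IR could take as OCaml datatypes. The instruction set is invented purely for illustration; the key point is just that blocks carry params and jumps carry argument lists:

(* "Basic blocks with arguments": every block has formal parameters,
   and every jump supplies arguments for them. *)
type var = string
type label = string

type value = Var of var | Const of int

type instr =
  | Let of var * string * value list         (* x = prim(op, args) *)
  | Call of var * string * value list        (* non-tail call to a top-level function *)

type terminator =
  | Jump of label * value list                (* jump with arguments *)
  | Branch of value * label * value list * label * value list
  | Return of value

type block = {
  name : label;
  params : var list;                          (* the φ-functions in disguise *)
  body : instr list;
  exit : terminator;
}

type program = block list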

Now, if we do this maximally naively – which we absolutely want to do – we will get nearly pessimal SSA, since each basic block will take as arguments all the variables in its scope. The reason we do this is to make the need for SSA minimization obvious: we just want to shrink each block's parameter lists so it doesn't take parameters for variables which either don't vary or don't get used.

Doing this will take us to something close to "pruned SSA" form, which would normally be overkill for a first compiler, except that we want to use it to motivate the computation of the dominator tree. Because of the either-or nature of the shrinking, we can do this with two analyses, a liveness analysis and a constancy analysis.

Both of these are dataflow analyses, and we can show how to use the dominator tree to order the work to calculate the fixed point faster. This will justify doing the fixed point computation to calculate the dominance information we need.
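As an illustration of what one of these analyses looks like, here is a sketch of liveness as a backwards dataflow fixed point over per-block use/def summaries. The representation is simplified and mine; the dominator-ordered version just visits blocks in a smarter order than this naive round-robin loop:

(* Liveness: live_in(b) = uses(b) ∪ (live_out(b) \ defs(b)),
   live_out(b) = union of live_in over b's successors.
   Every successor name is assumed to appear as a key in `blocks`. *)
module VS = Set.Make (String)

type block_summary = {
  uses : VS.t;            (* variables read before any write in the block *)
  defs : VS.t;            (* variables written in the block *)
  succs : string list;    (* successor block names *)
}

let liveness (blocks : (string * block_summary) list) : (string, VS.t) Hashtbl.t =
  let live_in = Hashtbl.create 16 in
  List.iter (fun (name, _) -> Hashtbl.add live_in name VS.empty) blocks;
  let changed = ref true in
  while !changed do
    changed := false;
    List.iter
      (fun (name, b) ->
        let out =
          List.fold_left
            (fun acc s -> VS.union acc (Hashtbl.find live_in s))
            VS.empty b.succs
        in
        let inn = VS.union b.uses (VS.diff out b.defs) in
        if not (VS.equal inn (Hashtbl.find live_in name)) then begin
          Hashtbl.replace live_in name inn;
          changed := true
        end)
      blocks
  done;
  live_in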

There is a bit of redundancy here, which is deliberate – I think it's important to have done two analyses before calculating dominators, because doing one fixed point computation to speed up one fixed point computation may feel artificial. But doing one computation that speeds up two computations is easy to see the benefit of, especially if we phrase the fixed point computation as taking the transfer function plus the dominance information.

I would avoid using one of the fast dominance algorithms, because pedagogically it's easier to explain the calculation when it is close to the definition.

The other nice thing here is that typechecking got accelerated by a good choice of abstract domain, and flow analysis gets accelerated by a good choice of iteration order.

Once the students have some experience manipulating this representation, I would probably switch to the traditional SSA form. The main benefit of this is just making it easier for students to read the existing literature.

High-level Optimizations

One thing worth spelling out is that this language (like MLton's SSA) has a high-level SSA representation – switches, data constructors, and field selectors and things like that will all be part of the SSA IR. This makes it possible to do optimizations while thinking at the language level, rather than the machine level. And furthermore, since students have a decent SSA representation, we certainly should use it to do all the "easy" SSA optimizations, like copy propagation and loop-invariant code motion.

All of these are unconditional rewrites when in SSA form, so they are all easy to implement, and will get the students comfortable with manipulating the SSA representation. The next step is to implement dead code elimination, which is both a nice optimization, and also requires them to re-run liveness analysis. This will open the door to understanding that compilation passes may have to be done repeatedly.

Once the students have done that, they will be warmed up for the big "hero optimization" of the course, which will be sparse conditional constant propagation. Because we are working with a high-level IR, sparse conditional constant propagation ought to yield even more impressive results than usual, with lots of tuple creation/pattern matching and cases on constructors disappearing in a really satisfying way.

In particular, good examples ought to arise from erasing the None branches from code using the option monad for safe division (safediv : int → int → option int), when a test that the divisors are nonzero dominates the divisions.
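Roughly the kind of program I have in mind, written here in OCaml rather than Introl, and assuming the compiler can see into safediv's body (say, after inlining it into its caller); the wrapper average is a made-up example:

let safediv (n : int) (d : int) : int option =
  if d = 0 then None else Some (n / d)

let average (total : int) (d : int) : int =
  if d = 0 then 0
  else
    (* The d = 0 test dominates this branch, so the idea is that the
       propagation pass can conclude that safediv returns Some here,
       making the None arm dead and collapsing the match to a division. *)
    match safediv total d with
    | None -> 0
    | Some q -> q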

Lowering and Register Allocation

As I mentioned before, the big optimization was performed on a high-level SSA form: switch statements and data constructors and selectors are still present. Before we can generate machine code, they will have to be turned into lower-level operations. So we can define a "low-level" SSA, where the selections and so on are turned into memory operations, and then translate the high-level IR into the low-level IR.

This ought to be done pretty naively, with each high-level operation turning into its low-level instruction sequence in a pretty direct way. Since the course design was to do as many dataflow optimizations as we could, to bring out the generic structure, a good test of whether the development is smooth enough is whether the flow analyses are sufficiently parameterized that we can change the IR and lattices and still get decent code reuse. Then we can lean on these analyses to clean up the low-level instruction sequences.

If more abstract code is too hard to understand, I would skip most of the low-level optimizations. The only must-do analysis at the low-level representation is to re-run liveness analysis, so that we can do register allocation using the SSA-form register allocation algorithms of Hack. This (a) lets us avoid graph colouring, (b) while still getting good results, and (c) depends very obviously on the results of a dataflow analysis. It also makes register coalescing (normally an advanced topic) surprisingly easy.

Code generation is then pretty easy – once we've done register allocation, we can map the low-level IR operations to loads, stores, and jumps in the dirt-obvious way; calls and returns do the obvious stack manipulations; and allocation becomes foreign calls to the Boehm gc. This is not optimal, but most of the techniques I know of in this space don't use fixed points much, and so are off-piste relative to the course aim.

Perspectives

The surprising (to me, anyway) thing is that the choice of focusing on fixed point computations plus the choice to go SSA-only, lays out a path to a surprisingly credible compiler. Before writing this note I would have thought this was definitely too ambitious a compiler for a first compiler! But now, I think it only may be too ambitious.

Obviously, we benefit a lot from working with a first-order functional language: in many ways, it is just an alternate notation for SSA. However, it looks different from SSA, and it looks like a high-level language, which means the students ought to feel like they have accomplished something.

But before I would be willing to try teaching it this way, I would want to program up a compiler or two in this style. This would either confirm that the compiler is tractable, or would show that the compiler is just too big to be teachable. If it is too big, I'd probably change the surface language to only support booleans and integers as types. Since these values all fit inside a word, we could drop the lowering pass altogether.

Another thing I like is that we are able to do a plain old dataflow analysis to parse, and then a dataflow analysis with a clever representation of facts when typechecking, and then dataflow analyses with a clever iteration scheme when doing optimization/codegen.

However, if this ends up being too cute and confusion-prone, another alternative would be to drop type inference by moving to a simply typed language, and then drop SSA in favour of the traditional Kildall/Allen dataflow approach. You could still teach the use of acceleration structures by computing def-use chains and using them for (e.g.) liveness. This would bring out the parallels between parsing and flow analysis.

Relative to an imaginary advanced compilers course, we're missing high-level optimizations like inlining, partial redundancy elimination, and fancy loop optimizations, as well as low-level things like instruction scheduling. We're also missing an angle for tackling control-flow analysis for higher-order languages featuring objects or closures.

But since we were able to go SSA straight away, the students will be in a good position to learn this on their own.

Introl sketch

Type structure

The type language has

  • Integers int
  • Unit type unit
  • Tuples T_1 * ... * T_n
  • Sum types via datatype declarations:

    datatype list a = Nil | Cons of a * list a 

Program structure

Top level definitions

All function declarations are top level. Nested functions and anonymous lambda-expressions are not allowed. Function declarations have a type scheme, which may be polymorphic:

    val foo : forall a_1, ..., a_n. (T_1, ..., T_n) -> T 
    def foo(x_1, ..., x_n) = e 

Expressions

Expressions are basically the usual things:

  • variables x
  • constants 3
  • arithmetic e_1 + e_2
  • unit ()
  • tuples (e_1, ..., e_n)
  • data constructors Foo(e)
  • variable bindings let p = e_1; e_2
  • direct function calls f(e_1, ..., e_n), where f is a top-level function name
  • match statements match e { p_1 -> e_1 | ... p_n -> e_n }

Note the absence of control structures like while or for; we will use tail-recursion for that. Note also that there are no explicit type arguments in calls to polymorphic functions.

Patterns

Patterns exist, but nested patterns do not, in order to avoid having to deal with pattern compilation. They are:

  • variables x
  • constructor patterns Foo(x_1, ..., x_n)
  • tuple patterns (x_1, ..., x_n)

Example

Here's a function that reverses a list:

val reverse_acc : forall a. (list a, list a) -> list a 
def reverse_acc(xs, acc) = 
  match xs {
   | Nil -> acc
   | Cons(x, xs) -> reverse_acc(xs, Cons(x, acc))
  }

val reverse : forall a. list a -> list a 
def reverse(xs) = reverse_acc(xs, Nil)

Here's a function to zip two lists together:

val zip : forall a, b. (list a, list b) -> list (a * b)
def zip(as, bs) = 
  match as { 
   | Nil -> Nil
   | Cons(a, as') -> 
      match bs { 
       | Nil -> Nil
       | Cons(b, bs') -> Cons((a,b), zip(as',bs'))
      }
  }

Note the lack of nested patterns vis-a-vis ML.