Friday, June 19, 2020

PLDI 2020 Conference Report

I just finished "attending" PLDI 2020, a programming languages conference. Like many conferences in computer science, due to COVID-19, on short notice the physical conference had to be cancelled and replaced with an online virtual conference. Talks were streamed to Youtube, questions were posted to a custom Slack channel, there was a startling variety of online chat applications to hold discussions with, and so on.

It was obviously an enormous amount of work to put together (there was even a custom chat application, Clowdr, written specifically for SIGPLAN(?) conferences), which makes me feel very unkind to report that for me, the online conference experience was a complete waste of time.

So, what went wrong?

To understand this, it is worth thinking about what the purpose of a conference is. The fundamental purpose of a conference is to meet people with shared interests and talk to them. The act of talking to people is fundamental, since it is how (a) you get to see new perspectives about subjects you are interested in, and (b) how you build the social networks that make it possible for you to become and remain an expert. (Recall the 19th century economist Alfred Marshall's description of this process: "The mysteries of the trade become no mysteries; but are as it were in the air, and children learn many of them unconsciously.”)

Even talks -- the things we build conferences around -- exist to facilitate this process of conversation. Talks at conferences really have two important functions: first, the topic of a talk is a filtering device, which helps identify the subset of the audience which is interested in this topic. This means that in the break after the talk, it is now much easier to find people to talk to who share interests with you.

Second, talks supply Schelling-style focal points: you and your interlocutor have common knowledge that you are both interested in the topic of the session, and you both also know that you saw the talk, which gives you a subject of conversation. (Note: as a speaker, you do communicate with your audience, but indirectly, as your talk becomes the seed of a conversation between audience members, which they will use to develop their own understandings.)

The fundamental problem with PLDI 2020 was that it was frustratingly hard to actually talk to people. The proliferation of timezones and chat applications meant that I found it incredibly difficult to actually find someone to talk to -- at least one of the apps (gather.town) just never worked for me, sending me into an infinite sign-up loop, and the others were either totally empty of people who were actually there, or did not seem suited for conversation. (One key issue is that many people log in to an app, and then stop paying attention to it, which makes presence indications useless.)

So if social distancing and travel restrictions due to COVID-19 remain in force (as seems likely), I think it would be better to simply convert our PL conferences fully into journals, and look for other ways to make online networking happen. The sheer amount of labour going into PLDI 2020 supplies strong evidence that simply trying to replicate a physical conference online cannot be made to work with any amount of effort.

However, before I convince everyone that I am totally right about everything, there are still online conferences that will happen -- for example, ICFP in August. So to make sure that I can look forward to the experience rather than dreading it, I will need to make a plan to actually make sure I talk to people.

So, if you are reading this, and are (or know) a researcher in PL who would like to talk, then give me a ping and we can arrange something for ICFP.

  • I am explicitly happy to talk to graduate students and postdocs about their research.

  • I know the most about dependent types, linear and modal types, type inference, separation logic, and program verification.

  • I know less about (but still like talking about) compiler stuff, theorem proving, SMT, Datalog, macros, and parsing.

  • This list is not an exhaustive list of things I'm interested in. One of the nice things about conversations is that you can use shared touchstones to discover how to like new things.

Once you get in touch, I'll put you on a list, and then once the conference talks are listed, we can organize a time to have a chat after we have all seen a talk we are interested in. (Having seen a talk means we will all have shared common knowledge fresh in our minds.) Assuming enough people are interested, I will aim for meetings of 3-5 people, including myself.

Thursday, February 13, 2020

Thought Experiment: An Introductory Compilers Class

Recently, I read a blog post in which Ben Karel summarized the reaction to a request John Regehr made about how to teach compilers, and as one might predict, the Internet was in agreement that the answer was "absolutely everything". Basically, everyone has a different perspective on what the most important thing is, and so the union of everyone's most important thing is everything.

In fact, from a certain point of view, I don't disagree. From parsing to typechecking to intermediate representations to dataflow analysis to register allocation to instruction selection to data representations to stack layouts to garbage collectors to ABIs to debuggers, basically everything in compilers is packed to the gills with absolutely gorgeous algorithms and proofs. The sheer density of wonderful stuff in compilers is just out of this world.

However, John specified an introductory course. Alas, this makes "everything" the wrong answer -- he is asking for a pedagogy-first answer which fits coherently into a smallish number of lectures. This means that we have to start with the question of what we want the students to learn, and then build the curriculum around that.

The Goal of Teaching Compilers

So, what should students have learned after taking a compilers class?

The obvious (but wrong) answer is "how to write a compiler", because in all likelihood they will forget most of the techniques for implementing compilers soon after the course ends. This is because most of them – unless they catch the compiler bug – will not write very many more compilers. So if we want them to come away from a compilers class with some lasting knowledge, we have to think in terms of deeper programming techniques which (a) arise naturally when writing compilers, but (b) apply in more contexts than just compilers.

There are two obvious candidates for such techniques:

  1. Computing values by recursion over inductively-defined structures.
  2. Computing least fixed points of monotone functions by bottom-up iteration (sketched just below).
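
As a minimal OCaml illustration of the second technique (my own sketch, not course material): start from a bottom element and repeatedly apply a monotone function until nothing changes.

(* A minimal sketch of bottom-up fixed point iteration: start from a
   bottom element and repeatedly apply a monotone function until the
   result stops changing.  Termination relies on the domain having no
   infinite ascending chains. *)
let rec lfp ~equal f x =
  let x' = f x in
  if equal x x' then x else lfp ~equal f x'

(* Example: the nodes reachable from node 0 in a small directed graph,
   computed as the least fixed point of "add every successor of a
   reachable node". *)
module Int_set = Set.Make (Int)

let edges = [ (0, 1); (1, 2); (2, 1); (3, 4) ]

let step reached =
  List.fold_left
    (fun acc (src, dst) ->
      if Int_set.mem src acc then Int_set.add dst acc else acc)
    reached edges

let reachable = lfp ~equal:Int_set.equal step (Int_set.singleton 0)
(* reachable = {0; 1; 2} *)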

Both of these are fundamental algorithmic techniques. However, I think that recursion over inductive structure is much more likely to be taught outside of a compilers course, especially considering that modern functional programming (i.e., datatypes and pattern matching) is now much more often taught elsewhere in the curriculum than it used to be.

This leaves fixed-point computations.

So let's choose the goal of an intro compilers class to be teaching fixed point computations many times over, in many different contexts, with the idea that the general technique of fixed point computation can be learned via generalizing from many specific examples. The fact that they are building a compiler is significant primarily because it means that the different examples can all be seen in a motivating context.

So, this drives some choices for the design of the course.

  1. Our goal is to show how fixed point computations arise in many different contexts, and the stages of the compilation pipeline naturally supply a non-contrived collection of very different-looking contexts. Consequently, we should design the course in a full front-to-back style, covering all the phases of the compilation pipeline.

    However, this style of course has the real weakness that it can result in shallow coverage of each topic. Mitigating this weakness drives the design of the rest of the course.

  2. In particular, the compiler structure should be very opinionated.

    The thing we want to do is to present a variety of options (LR versus LL parsing, graph colouring versus linear scan register allocation, CPS versus SSA for IRs, etc), and then present their strengths and weaknesses, and the engineering and design considerations which will lead you to favour one choice over the other.

    But for this course, we won't do any of that. As an instructor, we will simply choose the algorithm, and in particular choose the one that most benefits from fixed point computations.

    The existence of alternatives should be gestured at in lecture, but the student-facing explanation for why we are not exploring them is that, for their first compiler, we are aiming for good choices biased much more towards implementation simplicity than gcc or LLVM are. (This will be an easier pitch if there is an advanced compilers course, or if students have a final year project where they can write a fancy compiler.)

  3. We will also need to bite the bullet and de-emphasize the aspects of compilation where fixed point computations are less relevant.

    Therefore, we will not cover runtime systems and data structure layouts in any great detail. This substantially affects the design of the language to compile -- in particular, we should not choose a language that has closures or objects. Furthermore, we will just tell students what stack layout to use, and memory management will be garbage collection via the Boehm gc.

Course/Compiler Structure

For the rest of this note, I'll call the language to compile Introl, because I always liked the Algol naming convention. Introl is a first-order functional language, basically what you get if you removed nested functions, lambda expressions, references, and exceptions from OCaml. I've attached a sketch of this language at the end of the post.

This language has two features which are "hard" -- polymorphism and datatypes. The reason for the inclusion of polymorphism is that it makes formulating type inference as a fixed point problem interesting, and the reason datatypes exist is because (a) match statements offer interesting control flow, and (b) they really show off what sparse conditional constant propagation can do.

So the course will then follow the top-down lexing to code generation approach that so many bemoan, but which (in the context of our goals) is actually totally appropriate.

Lexing and Parsing

For lexing, I would start with the usual regular expressions and NFAs thing, but then take a bit of a left turn. First, I would show them that state set sizes could explode, and then introduce them to Brüggemann-Klein and Derick Wood's deterministic regular expressions as a way of preventing this explosion.

The reason for this is that the conditions essentially check whether a regular expression is parseable by recursive descent without backtracking -- i.e., you calculate NULLABLE, FIRST and (a variant of) FOLLOW sets for the regular expression. This lets you explain what these sets mean in a context without recursion or fixed points, which makes it easy to transition to LL(1) grammars, which are fundamentally just deterministic regular languages plus recursion.

So then the calculation of these sets as a fixed point equation is very easy, and using the deterministic regular languages means that the explanation of what these sets mean can be decoupled from how to compute them via a fixed point computation.
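
As a rough sketch of what this looks like (my own illustration, with a made-up grammar representation), NULLABLE is the least fixed point of one round of inference over the productions:

(* A sketch of NULLABLE as a fixed point computation, assuming a toy
   grammar representation: each production pairs a nonterminal with the
   list of symbols on its right-hand side. *)
type symbol = T of char | N of string        (* terminal / nonterminal *)
type grammar = (string * symbol list) list   (* productions *)

module SS = Set.Make (String)

(* One round of inference: a nonterminal becomes nullable if some
   production for it has a right-hand side consisting entirely of
   already-nullable nonterminals (in particular, an empty one). *)
let step (g : grammar) (nullable : SS.t) : SS.t =
  List.fold_left
    (fun acc (lhs, rhs) ->
      let nullable_sym = function T _ -> false | N x -> SS.mem x acc in
      if List.for_all nullable_sym rhs then SS.add lhs acc else acc)
    nullable g

(* Iterate from the empty set until nothing changes. *)
let nullable (g : grammar) : SS.t =
  let rec go s = let s' = step g s in if SS.equal s s' then s else go s' in
  go SS.empty

FIRST and FOLLOW have the same flavour; they just compute sets of terminals rather than a set of nonterminals.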

Naturally, this means that the grammar of Introl must be LL(1).

Typechecking

Typechecking for this kind of language is pretty routine, but in this case it should be phrased as an abstract interpretation, in the style of Cousot's Types as Abstract Interpretation.

The interesting thing here is that polymorphism can be presented via what Cousot called "the Herbrand abstraction". The idea is that the abstract elements are monotypes with unification variables in them, with the partial order that t₂ ≥ t₁ if there is a substitution σ such that t₂ = σ(t₁), and the unification algorithm itself as an attempt to calculate the substitution witnessing the join of two terms. I say attempt, since unification can fail. So this is a kind of partial join operation -- in the case you cannot join the two terms, there must have been a type error!
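
To make the partial join concrete, here is a minimal sketch of syntactic unification over monotypes (my own toy code, with the type language made up); returning None is the "no join exists" case, i.e. a type error:

(* Unification as a partial join on monotypes.  Substitutions are
   association lists from unification variables to types; None signals
   that the two types have no join. *)
type ty = Var of string | Int | Arrow of ty * ty

let rec apply subst = function
  | Var v -> (match List.assoc_opt v subst with Some t -> apply subst t | None -> Var v)
  | Int -> Int
  | Arrow (a, b) -> Arrow (apply subst a, apply subst b)

let rec occurs v = function
  | Var v' -> v = v'
  | Int -> false
  | Arrow (a, b) -> occurs v a || occurs v b

let rec unify subst t1 t2 =
  match apply subst t1, apply subst t2 with
  | Int, Int -> Some subst
  | Var v, t | t, Var v ->
      if t = Var v then Some subst
      else if occurs v t then None            (* no join: infinite type *)
      else Some ((v, t) :: subst)
  | Arrow (a1, b1), Arrow (a2, b2) ->
      (match unify subst a1 a2 with
       | None -> None
       | Some subst' -> unify subst' b1 b2)
  | _, _ -> None                              (* no join: constructor clash *)

The substitution threaded through is the witness for the join described above.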

In Introl as presented, top-level functions have a type annotation, and so it will work out that you end up not needing to do a serious fixed point computation to infer types. Indeed, even if you omitted annotations, the fact that unification is calculating a most general unifier means that the fixed point iteration for recursive functions terminates in one iteration. (This should not be too surprising since the Damas-Milner algorithm only needs one traversal of the syntax.)

This fact is worth working out, because a great deal of life in static analysis involves trying to find excuses to iterate less. Indeed, this is what motivates the move to SSA – which will be the very next phase of the compiler.

Conversion to SSA/CPS

The choice of IR is always a fraught one in a compiler class. In this course, I would use a static single assignment (SSA) representation. SSA is a good choice because (a) it simplifies implementing all kinds of dataflow optimizations, (b) generating it also needs dataflow analyses, and (c) it is the IR of choice for serious compilers. (a) and (b) mean we get to do lots of fixed point computations, and (c) means it won't feel artificial to today's yoof.

However, I wouldn't go directly to SSA, since I find φ-functions very difficult to explain directly. IMO, it is worth pulling a trick out of Richard Kelsey's hat, and exploiting the correspondence between continuation-passing style (CPS), and static single assignment form (SSA).

CPS often comes with all sorts of mysticism and higher-order functions, but in this case, it works out very simply. Basically, we can let-normalize the program, and then transform the surface language into basic blocks with arguments (or if you prefer, a bunch of functions tail-calling each other). This is the version of SSA that the MLton compiler used, and which (if memory serves) the Swift SIL intermediate representation uses as well.

Roughly speaking, each basic block in the program will end up having zero or more formal parameters, and jumps to that block carry arguments which fill in those parameters. This ends up making it super easy to explain, because the program just goes into let-normal form, all tail calls get compiled to jumps-with-arguments, and non-tail calls use a call IR instruction.
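
A hedged way to picture this (my own illustration, written as ordinary OCaml rather than the actual IR) is as a group of mutually tail-recursive functions, where each function is a block and each tail call is a jump:

(* "Basic blocks with arguments": a counting loop written as mutually
   tail-recursive functions.  Each function is a block, its parameters
   are the block's formal arguments, and every tail call is a jump that
   supplies those arguments. *)
let sum_to_ten () =
  let rec entry () = loop 0 0            (* jump loop(i = 0, acc = 0) *)
  and loop i acc =
    if i > 10 then exit_block acc        (* jump exit(acc)            *)
    else loop (i + 1) (acc + i)          (* jump loop(i + 1, acc + i) *)
  and exit_block acc = acc               (* return acc                *)
  in
  entry ()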

Now, if we do this maximally naively – which we absolutely want to do – we will get nearly pessimal SSA, since each basic block will take as arguments all the variables in its scope. The reason we do this is to make the need for SSA minimization obvious: we just want to shrink each block's parameter lists so it doesn't take parameters for variables which either don't vary or don't get used.

Doing this will take us to something close to "pruned SSA" form, which would normally be overkill for a first compiler, except that we want to use it to motivate the computation of the dominator tree. Because of the either-or nature of the shrinking, we can do this with two analyses, a liveness analysis and a constancy analysis.

Both of these are dataflow analyses, and we can show how to use the dominator tree to order the work so that the fixed point is calculated faster. This will justify doing the fixed point computation to calculate the dominance information we need.
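
For concreteness, liveness is the standard backwards dataflow problem (the textbook formulation, nothing specific to this course): we take the least fixed point of

\[ \begin{array}{lcl} \mathit{livein}(b) & = & \mathit{use}(b) \cup (\mathit{liveout}(b) \setminus \mathit{def}(b)) \\ \mathit{liveout}(b) & = & \bigcup_{b' \in \mathrm{succ}(b)} \mathit{livein}(b') \\ \end{array} \]

The constancy analysis is a forward problem of the same general shape over a different lattice, which is exactly why it pays to parameterize the fixed point engine by the transfer functions and the iteration order.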

There is a bit of redundancy here, which is deliberate – I think it's important to have done two analyses before calculating dominators, because doing one fixed point computation to speed up one fixed point computation may feel artificial. But doing one computation that speeds up two computations is easy to see the benefit of, especially if we phrase the fixed point computation as taking the transfer function plus the dominance information.

I would avoid using one of the fast dominance algorithms, because pedagogically it's easier to explain the calculation when it is close to the definition.

The other nice thing here is that typechecking got accelerated by a good choice of abstract domain, and flow analysis gets accelerated by a good choice of iteration order.

Once the students have some experience manipulating this representation, I would probably switch to the traditional SSA form. The main benefit of this is just making it easier for the students to read the existing literature.

High-level Optimizations

One thing worth spelling out is that this language (like MLton's SSA) has a high-level SSA representation – switches, data constructors, and field selectors and things like that will all be part of the SSA IR. This makes it possible to do optimizations while thinking at the language level, rather than the machine level. And furthermore, since students have a decent SSA representation, we certainly should use it to do all the "easy" SSA optimizations, like copy propagation and loop-invariant code motion.

All of these are unconditional rewrites when in SSA form, so they are all easy to implement, and will get the students comfortable with manipulating the SSA representation. The next step is to implement dead code elimination, which is both a nice optimization, and also requires them to re-run liveness analysis. This will open the door to understanding that compilation passes may have to be done repeatedly.

Once the students have done that, they will be warmed up for the big "hero optimization" of the course, which will be sparse conditional constant propagation. Because we are working with a high-level IR, sparse conditional constant propagation ought to yield even more impressive results than usual, with lots of tuple creation/pattern matching and cases on constructors disappearing in a really satisfying way.

In particular, good examples ought to arise from erasing the None branches from code using the option monad for safe division safediv : int → int → option int, if there is a test that the divisors are nonzero dominating the divisions.
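
As a rough illustration (my own example, written in OCaml syntax rather than Introl), the hoped-for situation looks like this:

(* The kind of code we hope sparse conditional constant propagation
   cleans up: once the count = 0 test has been dealt with, the idea is
   that the None branch of safediv becomes dead and the option
   wrapping/unwrapping disappears. *)
let safediv (n : int) (d : int) : int option =
  if d = 0 then None else Some (n / d)

let mean (total : int) (count : int) : int option =
  if count = 0 then None
  else
    (* In this branch count <> 0; since the test dominates the division,
       the hope is that the analysis discovers that the None case below
       is dead code. *)
    match safediv total count with
    | None -> None
    | Some q -> Some q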

Lowering and Register Allocation

As I mentioned before, the big optimization was performed on a high-level SSA form: switch statements and data constructors and selectors are still present. Before we can generate machine code, they will have to be turned into lower-level operations. So we can define a "low-level" SSA, where the selections and so on are turned into memory operations, and then translate the high-level IR into the low-level IR.

This ought to be done pretty naively, with each high-level operation turning into its low-level instruction sequence in a pretty direct way. Since the course design was to do as many dataflow optimizations as we could, to bring out the generic structure, a good test of whether the development is smooth enough is whether the flow analyses are sufficiently parameterized that we can change the IR and lattices and still get decent code reuse. Then we can lean on these analyses to clean up the low-level instruction sequences.

If more abstract code is too hard to understand, I would skip most of the low-level optimizations. The only must-do analysis at the low-level representation is to re-run liveness analysis, so that we can do register allocation using the SSA-form register allocation algorithms of Hack. This (a) lets us avoid graph colouring, (b) while still getting good results, and (c) depends very obviously on the results of a dataflow analysis. It also makes register coalescing (normally an advanced topic) surprisingly easy.

Code generation is then pretty easy – once we've done register allocation, we can map the low-level IR operations to loads, stores, and jumps in the dirt-obvious way; calls and returns doing the obvious stack manipulations; and foreign calls to the Boehm gc to allocate objects. This is not optimal, but most of the techniques I know of in this space don't use fixed points much, and so are off-piste relative to the course aim.

Perspectives

The surprising (to me, anyway) thing is that the choice to focus on fixed point computations, plus the choice to go SSA-only, lays out a path to a surprisingly credible compiler. Before writing this note I would have thought this was definitely too ambitious a compiler for a first compiler! But now, I think it only may be too ambitious.

Obviously, we benefit a lot from working with a first-order functional language: in many ways, it is just an alternate notation for SSA. However, it looks different from SSA, and it looks like a high-level language, which means the students ought to feel like they have accomplished something.

But before I would be willing to try teaching it this way, I would want to program up a compiler or two in this style. This would either confirm that the compiler is tractable, or would show that the compiler is just too big to be teachable. If it is too big, I'd probably change the surface language to only support booleans and integers as types. Since these values all fit inside a word, we could drop the lowering pass altogether.

Another thing I like is that we are able to do a plain old dataflow analysis to parse, and then a dataflow analysis with a clever representation of facts when typechecking, and then dataflow analyses with a clever iteration scheme when doing optimization/codegen.

However, if this ends up being too cute and confusion-prone, another alternative would be to drop type inference by moving to a simply typed language, and then drop SSA in favour of the traditional Kildall/Allen dataflow approach. You could still teach the use of acceleration structures by computing def-use chains and using them for (e.g.) liveness. This would bring out the parallels between parsing and flow analysis.

Relative to an imaginary advanced compilers course, we're missing high-level optimizations like inlining, partial redundancy elimination, and fancy loop optimizations, as well as low-level things like instruction scheduling. We're also missing an angle for tackling control-flow analysis for higher-order languages featuring objects or closures.

But since we were able to go SSA straight away, the students will be in a good position to learn this on their own.

Introl sketch

Type structure

The type language has

  • Integers int
  • Unit type unit
  • Tuples T_1 * ... * T_n
  • Sum types via datatype declarations:

    datatype list a = Nil | Cons of a * list a 

Program structure

Top level definitions

All function declarations are top level. Nested functions and anonymous lambda-expressions are not allowed. Function declarations have a type scheme, which may be polymorphic:

    val foo : forall a_1, ..., a_n. (T_1, ..., T_n) -> T 
    def foo(x_1, ..., x_n) = e 

Expressions

Expressions are basically the usual things:

  • variables x
  • constants 3
  • arithmetic e_1 + e_2
  • unit ()
  • tuples (e_1, ..., e_n)
  • data constructors Foo(e)
  • variable bindings let p = e_1; e_2
  • direct function calls f(e_1, ..., e_n), where f is a top-level function name
  • match statements match e { p_1 -> e_1 | ... p_n -> e_n }

Note the absence of control structures like while or for; we will use tail-recursion for that. Note also that there are no explicit type arguments in calls to polymorphic functions.

Patterns

Patterns exist, but nested patterns do not, in order to avoid having to deal with pattern compilation. They are:

  • variables x
  • constructor patterns Foo(x_1, ..., x_n)
  • tuple patterns (x_1, ..., x_n)

Example

Here's a function that reverses a list:

val reverse_acc : forall a. (list a, list a) -> list a 
def reverse_acc(xs, acc) = 
  match xs {
   | Nil -> acc
   | Cons(x, xs) -> reverse_acc(xs, Cons(x, acc))
  }

val reverse : forall a. list a -> list a 
def reverse(xs) = reverse_acc(xs, Nil)

Here's a function to zip two lists together:

val zip : forall a, b. (list a, list b) -> list (a * b)
def zip(as, bs) = 
  match as { 
   | Nil -> Nil
   | Cons(a, as') -> 
      match bs { 
       | Nil -> Nil
       | Cons(b, bs') -> Cons((a,b), zip(as',bs'))
      }
  }

Note the lack of nested patterns vis-a-vis ML.

Wednesday, October 30, 2019

Every Finite Automaton has a Corresponding Regular Expression

Today I'd like to blog about the nicest proof of Kleene's theorem that regular expressions and finite automata are equivalent that I know. One direction of this proof -- that every regular expression has a corresponding finite automaton -- is very well-known to programmers, since Thompson's construction, which converts regular expressions to nondeterministic automata, and the subset construction, which turns nondeterministic automata into deterministic automata, are the basis of every lexer and regexp engine there is.

However, the proof of the other direction -- that every automaton has a corresponding regexp -- is often stated, but seldom proved. This is a shame, since it has an elegant formulation in terms of linear algebra(!).

The Construction

First, let's assume we have a Kleene algebra \((R, 1, \cdot, 0, +, \ast)\) -- for example, the algebra of regular expressions over some alphabet. Now, define the category \(C\) as follows:

  • The objects of \(C\) are the natural numbers.
  • A morphism \(f : n \to m\) is an \(n \times m\) matrix of elements of \(R\). We will write \(f_{(i,j)}\) to get the element of \(R\) in the \(i\)-th row and \(j\)-th column.
  • The identity morphism \(id : n \to n\) is the \(n \times n\) matrix which is \(1\) on the diagonal and \(0\) off the diagonal
  • The composition \(f;g\) of \(f : n \to m\) and \(g : m \to o\) is given by matrix multiplication:

\[ (f;g)_{i, j} = \sum_{k \in 0\ldots m} f_{(i,k)} \cdot g_{(k,j)} \]

This may seem like a rather peculiar thing to do -- matrices filled with regular expressions? However, to understand why this is potentially a sensible idea, let's look at a generic automaton of dimension 2.

[Figure: a two-state automaton with states 0 and 1, where 0 has a self-loop labelled a, there is an edge 0 → 1 labelled b, an edge 1 → 0 labelled c, and 1 has a self-loop labelled d.]

Given such a picture, we can give the transition matrix for this as follows:

\[ A = \begin{array}{l|cc} & 0 & 1 \\\hline 0 & a & b \\ 1 & c & d \\ \end{array} \]

So the \((i,j)\)-th entry tells us the label of the edge from node \(i\) to node \(j\). In our category, this is an endomap \(A : 2 \to 2\). What happens if we compose \(A\) with itself? Since composition is just matrix multiplication, we get:

\[ \left[\begin{array}{cc} a & b \\ c & d \end{array}\right] \cdot \left[\begin{array}{cc} a & b \\ c & d \end{array}\right] = \left[\begin{array}{cc} aa + bc & ab + bd \\ ca + dc & cb + dd \end{array}\right] \]

Notice that the \((i,j)\)-th entry is a regular expression describing the paths of length two! So composition of arrows corresponds to path composition. The identity matrix gives the paths of length 0, the transition matrix gives the paths of length 1, and matrix multiplication gives path concatenation. So it's easy to get a regexp for any bounded length path. Indeed, it is easy to see that the endomaps of \(n\) form a semiring.

So the endomaps of \(n\) are \(n \times n\) matrices, and we can turn them into a semiring \((S, 1_S, \cdot_S, 0_S, +_S)\) as follows:

  • The carrier \(S\) is \(\mathrm{Hom}(n,n)\), the set of \(n \times n\) matrices over \(R\).
  • The unit \(1_S\) is the identity map \(\mathrm{id} : n \to n\)
  • The multiplication \(\cdot_S\) is composition of morphisms (i.e., matrix multiplication). Since these are endomorphisms (ie, square matrices), the result of composition (i.e., multiplication of square matrices) is another endomorphism (ie, square matrix of the same dimension).
  • The zero \(0_S\) is the matrix with all 0 entries.
  • The addition \(f + g\) is the pointwise addition of matrices -- i.e., \((f + g)_{i, j} = f_{i,j} + g_{i,j}\)

Now we can ask whether we can equip endomaps with a Kleene star operation as well. Recall that the defining equation for the Kleene star is

\[ s\ast = 1 + s\cdot s\ast \]

which says that we are looking for a reflexive-transitive closure operation. That is, \(s\ast\) consists of the length-0 paths, together with anything in \(s\) followed by anything in \(s\ast\). So we want a definition of the Kleene star which satisfies this equation. Let's consider the case of the size 2 automaton again:

[Figure: the same two-state automaton, with self-loops a on 0 and d on 1, an edge b from 0 to 1, and an edge c from 1 to 0.]

Now, let's think about the loop-free paths between 0 and 0. There are two classes of them:

  1. The single step \(a\).
  2. We can take a step from 0 to 1 via \(b\), take some number of steps from 1 to itself via \(d\), and then back to 0 via \(c\).

So we can describe the loop-free paths from 0 to 0 via the regular expression

\[ F \triangleq a + b(d\ast)c \]

Similarly, the loop-free paths from 1 to 1 are either:

  1. The single step \(d\).
  2. We can take a step from 1 to 0 via \(c\), take some number of steps from 1 to itself via \(a\), and then back to 1 via \(b\).

So these paths are described by \[ G \triangleq d + c(a\ast)b \]

So the set of all paths from 0 to 0 is \(F\ast\), and the set of all paths from \(1\) to \(1\) is \(G\ast\). This lets us write down the Kleene star of the whole transition matrix:

\[ A\ast = \begin{array}{l|cc} & 0 & 1 \\\hline 0 & F\ast & {F\ast} \cdot b \cdot {G\ast} \\ 1 & {G\ast} \cdot c \cdot {F\ast} & G\ast \\ \end{array} \]

Proving that this is a correct definition requires a bit of work, so I will skip over it. That's because I want to get to the cool bit, which is that this construction is enough to define the Kleene star for any dimension \(n\)! We'll prove this by well-founded induction. First, suppose we have an \(n \times n\) transition matrix \(T\) for \(n > 2\). Then, we can divide the matrix into blocks, with \(A\) and \(D\) as square submatrices:

\[ T = \left[ \begin{array}{c|c} A & B \\\hline C & D \end{array} \right] \]

and then define \(T\ast\) as:

\[ T\ast = \left[ \begin{array}{c|c} F\ast & {F\ast} \cdot B \cdot {G\ast} \\ \hline {G\ast} \cdot C \cdot {F\ast} & G\ast \\ \end{array} \right] \]

where

\[ \begin{array}{lcl} F & \triangleq & A + B(D\ast)C \\ G & \triangleq & D + C(A\ast)B \\ \end{array} \]

By induction, we know that the square matrices of dimension smaller than \(n\) form a Kleene algebra, so we know that \(A\) and \(D\) have Kleene algebra structure. The reason for defining the categorical structure also becomes apparent here -- even though \(A\) and \(D\) are square, \(B\) and \(C\) may not be, and we will need to make use of the fact that we defined composition on non-square matrices for them.

References and Miscellany

I have, as is traditional for bloggers, failed to prove any of my assertions. Luckily other people have done the work! Two papers that I particularly like are:

One fact which didn't fit into the narrative above, but which I think is extremely neat, is about the linear-algebraic view of the Kleene star. To see what it is, recall the definition of the matrix exponential for a matrix \(A\):

\[ e^A \triangleq \sum_{k = 0}^\infty \frac{A^k}{k!} = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \ldots \]

Now, note that

  1. \(k!\) is an integer
  2. any integer can be expressed as a formal sum \(n = \overbrace{1 + \ldots + 1}^{n \,\mbox{times}}\),
  3. and in regular languages, we have \(r + r = r\).

Hence \(k! = 1\), and since \(A/1 = A\), we have

\[ e^A \triangleq \sum_{k = 0}^\infty A^k = I + A + A^2 + A^3 + \ldots \]

But this is precisely the definition of \(A\ast\)! In other words, for square matrices over Kleene algebras, Kleene iteration is precisely the same thing as the matrix exponential.

The Implementation

The nice thing about this construction is that you can turn it into running code more or less directly, which leads to an implementation of the DFA-to-regexp algorithm in less than 100 lines of code. I'll give an implementation in OCaml.

First, let's give a signature for Kleene algebras. All the constructions we talked about are parameterized in the Kleene algebra, so our implementation will be too.

module type KLEENE = sig
  type t 
  val zero : t 
  val ( ++ ) : t -> t -> t 
  val one : t
  val ( * ) : t -> t -> t 
  val star : t -> t 
end

You can see here that we have zero and one as the unit for the addition ++ and multiplication * respectively, and we also have a star operation representing the Kleene star.

Similarly, we can give a signature for matrices over an element type.

module type MATRIX = sig
  type elt
  type t 

  val dom : t -> int
  val cod : t -> int
  val get : t -> int -> int -> elt

  val id : int -> t 
  val compose : t -> t -> t 

  val zero : int -> int -> t 
  val (++) : t -> t -> t 

  val make : int -> int -> (int -> int -> elt) -> t 
end

So this signature requires an element type elt, a matrix type t, and a collection of operations. We can get the number of rows in the matrix with dom and the number of columns with cod, with the names coming from thinking of dimensions as the objects and matrices as the morphisms of a category. Similarly this gives us a notion of identity with id, which takes an integer and returns the identity matrix of that size. Matrix multiplication is just composition, which leads to the name compose.

We also want the ability to create zero matrices with the zero function, and the ability to do pointwise addition of matrices with the (++) operation. Neither of these operations need square matrices, but to add matrices pointwise requires compatible dimensions.

Finally we want a way to construct a matrix with make, which is given the dimensions and a function to produce the elements.

Now, since the matrix construction was parameterized by a Kleene algebra, we can write a functor Matrix in OCaml which takes a module of type KLEENE to produce an implementation of the MATRIX signature with elements of the Kleene algebra as the element type:

module Matrix(R : KLEENE) : MATRIX with type elt = R.t = 
struct
  type elt = R.t 
  type t = {
      row : int; 
      col: int; 
      data : int -> int -> R.t
    }

We take the element type to be elements of the Kleene algebra, and take a matrix to be a record with the number of rows, the number of columns, and a function to produce the entries. This is not a high-performance choice, but it works fine for illustration purposes.

  let make row col data = {row; col; data}
  let dom m = m.row
  let cod m = m.col

  let get m i j = 
      assert (0 <= i && i < m.row);
      assert (0 <= j && j < m.col);
      m.data i j 

To make a matrix, we just use the data from the constructor to build the appropriate record, and to get the domain and codomain we just select the row or column fields. To get an element we just call the underlying function, with an assertion check to ensure the arguments are in bounds.

This is a point where a more dependently-typed language would be nice: if the dimensions of the matrix were tracked in the type then we could omit the assertion checks. (Basically, the matrix type would become a type constructor t : nat -> nat -> type, corresponding to the hom-sets of the category.)

  let id n = make n n (fun i j -> if i = j then R.one else R.zero)

  let compose f g = 
    assert (cod f = dom g);
    let data i j = 
      let rec loop k acc = 
        if k < cod f then
          loop (k+1) R.(acc ++ (get f i k * get g k j))
        else
          acc
      in
      loop 0 R.zero
    in
    make (dom f) (cod g) data

The identity function takes an integer, and returns a square matrix that is R.one on the diagonal and R.zero off-diagonal. If you want to compose two matrices, the routine checks that the dimensions match, and then implements a very naive row-vector times column-vector calculation for each indexing. What makes this naive is that it will get recalculated each time!

  let zero row col = make row col (fun i j -> R.zero)

  let (++) m1 m2 = 
    assert (dom m1 = dom m2);
    assert (cod m1 = cod m2);
    let data i j = R.(++) (get m1 i j) (get m2 i j) in
    make (dom m1) (cod m1) data 
end                           

Finally we can implement the zero matrix operation, and matrix addition, both of which work the expected way.

We will use all these operations to show that square matrices form a Kleene algebra. To show this, we will define a functor Square which is parameterized by

  1. A Kleene algebra R.
  2. An implementation of matrices with element type R.t
  3. A size (i.e., an integer)

module Square(R : KLEENE)
             (M : MATRIX with type elt = R.t)
             (Size : sig val size : int end) : KLEENE with type t = M.t = 
struct
  type t = M.t 

The implementation type is the matrix type M.t, and the semiring operations are basically inherited from the module M:

  let (++) = M.(++)
  let ( * ) = M.compose
  let one = M.id Size.size
  let zero = M.zero Size.size Size.size

The only changes are that one and zero are restricted to square matrices of size Size.size, and so we don't need to pass them dimension arguments.

All of the real work is handled in the implementation of the Kleene star. We can follow the construction very directly. First, we can define a split operation which splits a square matrix into four smaller pieces:

  let split i m = 
    let open M in 
    assert (i >= 0 && i < dom m);
    let k = dom m - i in 
    let shift m x y = fun i j -> get m (i + x) (j + y) in 
    let a = make i i (shift m 0 0) in
    let b = make i k (shift m 0 i) in
    let c = make k i (shift m i 0) in
    let d = make k k (shift m i i) in 
    (a, b, c, d)

This divides a matrix into four, by finding the split point i, and then making a new matrix that uses the old matrix's data with a suitable offset. So b's entry in row x and column y will be the original matrix's entry in row x and column (y+i), for example.

Symmetrically, we need a merge operation.

  let merge (a, b, c, d) = 
    assert (M.dom a = M.dom b && M.cod a = M.cod c);
    assert (M.dom d = M.dom c && M.cod d = M.cod b);
    let get i j = 
      match i < M.dom a, j < M.cod a with
      | true, true   -> M.get a i j 
      | true, false  -> M.get b i (j - M.cod a)
      | false, true  -> M.get c (i - M.dom a) j 
      | false, false -> M.get d (i - M.dom a) (j - M.cod a)
    in
    M.make (M.dom a + M.dom d) (M.cod a + M.cod d) get 

To merge four matrices, their dimensions need to match up properly, and then the lookup operation get needs to use the indices to decide which quadrant to look up the answer in.

This can be used to write the star function:

  let rec star m = 
    match M.dom m with
    | 0 -> m 
    | 1 -> M.make 1 1 (fun _ _ -> R.star (M.get m 0 0))
    | n -> let (a, b, c, d) = split (n / 2) m in
           let fstar = star (a ++ b * star d * c) in 
           let gstar = star (d ++ c * star a * b) in 
           merge (fstar, fstar * b * gstar,
                  gstar * c * fstar, gstar)
end

This function looks at the dimension of the matrix. If the matrix is dimension 1, we can just use the underlying Kleene star on the singleton entry in the matrix. If the square matrix is bigger, we can split the matrix in four and then directly apply the construction we saw earlier. We try to divide the four quadrants as nearly in half as we can, because this reduces the number of times we need to make a recursive call.

As an example of using this, let's see how we can convert automata given by state-transition functions to regular expressions.

First, we define a trivial Kleene algebra using a datatype of regular expressions, and use this to set up a module of matrices:

module Re = struct
  type t = 
    | Chr of char
    | Seq of t * t 
    | Eps
    | Bot 
    | Alt of t * t 
    | Star of t 

  let zero = Bot
  let one = Eps
  let ( * ) r1 r2 = Seq(r1, r2)
  let ( ++ ) r1 r2 = Alt(r1, r2)
  let star r = Star r 
end

module M = Matrix(Re)

Then, we can construct a Kleene algebra of matrices of size 2, and then give a 2-state automaton by its transition function.

module T = Square(Re)(M)(struct let size = 2 end)
let t = 
  M.make 2 2 (fun i j -> 
      Re.Chr (match i, j with
          | 0, 0 -> 'a'
          | 0, 1 -> 'b'
          | 1, 0 -> 'c'
          | 1, 1 -> 'd'
          | _ -> assert false))

Now, we can apply the Kleene star to this matrix (that is, let tstar = T.star t) to compute the transitive closure of this transition function:

# M.get tstar 0 1;;
- : M.elt =
Seq (Star (Alt (Chr 'a', Seq (Chr 'b', Seq (Star (Chr 'd'), Chr 'c')))),
 Seq (Chr 'b',
  Star (Alt (Chr 'd', Seq (Chr 'c', Seq (Star (Chr 'a'), Chr 'b'))))))

Voila! A regexp corresponding to the possible paths from 0 to 1.

The code is available as a Github gist. Bugfixes welcome!

Friday, August 23, 2019

New Draft Paper: Survey on Bidirectional Typechecking

Along with J. Dunfield, I have written a new survey paper on bidirectional typechecking.

Bidirectional typing combines two modes of typing: type checking, which checks that a program satisfies a known type, and type synthesis, which determines a type from the program. Using checking enables bidirectional typing to break the decidability barrier of Damas-Milner approaches; using synthesis enables bidirectional typing to avoid the large annotation burden of explicitly typed languages. In addition, bidirectional typing improves error locality. We highlight the design principles that underlie bidirectional type systems, survey the development of bidirectional typing from the prehistoric period before Pierce and Turner's local type inference to the present day, and provide guidance for future investigations.
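
To make the two modes concrete, here is a toy bidirectional checker for a tiny lambda calculus (my own sketch, not code from the paper): synth infers a type from a term, check verifies a term against a given type, and type annotations let you move from checking back to synthesis.

(* A toy bidirectional typechecker.  Contexts are association lists
   from variable names to types. *)
type ty = Int | Arrow of ty * ty

type tm =
  | Var of string
  | Lit of int
  | Lam of string * tm          (* checked against an arrow type     *)
  | App of tm * tm
  | Ann of tm * ty              (* annotation: switch check -> synth *)

let rec synth ctx = function
  | Var x -> List.assoc_opt x ctx
  | Lit _ -> Some Int
  | Ann (e, t) -> if check ctx e t then Some t else None
  | App (f, a) ->
      (match synth ctx f with
       | Some (Arrow (dom, cod)) when check ctx a dom -> Some cod
       | _ -> None)
  | Lam _ -> None               (* lambdas don't synthesize: annotate them *)

and check ctx e t =
  match e, t with
  | Lam (x, body), Arrow (dom, cod) -> check ((x, dom) :: ctx) body cod
  | _ -> synth ctx e = Some t   (* mode switch: synthesize, then compare *)

Note that lambdas only appear in checking mode, which is where the savings in annotation burden relative to fully explicitly typed languages come from.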

This is the first survey paper I've tried to write, and an interesting thing about the process is that it forces you to organize your thinking, and that can lead to you changing your mind. For example, before we started this paper I was convinced that bidirectional typechecking was obviously a manifestation of focalization. When we attempted to make this argument clearly in the survey, that led me to realize I didn't believe it! Now I think that the essence of bidirectional typechecking arises from paying close attention to the mode discipline (in the logic programming sense) of the typechecking relations.

Also, if you think we have overlooked some papers on the subject, please let us know. Bidirectional typechecking is a piece of implementation folklore that is perhaps over three decades old, which makes it very hard for us to feel any certainty about having found all the relevant literature.

Wednesday, August 21, 2019

On the Relationship Between Static Analysis and Type Theory

Some months ago, Ravi Mangal sent me an email with a very interesting question, which I reproduce in part below:

The question that has been bothering me is, "How are static analyses and type systems related?". Besides Patrick Cousot's Types as abstract interpretations, I have failed to find other material directly addressing this. Perhaps, the answer is obvious, or maybe it has just become such a part of the community's folklore that it hasn't been really written down. However, to me, it's not clear how one might even phrase this question in a precise mathematical manner.

The reason this question is so interesting is because some very beautiful answers to it have been worked out, but have for some reason never made it into our community's folklore. Indeed, I didn't know the answers to this question until quite recently.

The short answer is that there are two views of what type systems are, and in one view type systems are just another static analysis, and in another they form the substrate upon which static analyses can be built -- and both views are true at the same time.

Static Analysis (including Types) as Properties

One valid view of a type system, is that it is just another species of static analysis. At a high level, the purpose of a static analysis is to answer the question, "Can I tell what properties this program satisfies without running it?"

Abstract interpretation, model checking, dataflow analysis and type inference are all different ways of answering this question, each with somewhat different emphases and priorities. IMO, to understand the relations between these styles of analysis, it's helpful to think about the two main styles of semantics: operational semantics and denotational semantics.

A denotational semantics turns a program into a mathematical object (sets, domains, metric spaces, etc). This has the benefit that it lets you turn the full power of mathematics towards reasoning about programs, but it has the cost that deducing properties of arbitrary mathematical gadgets can require arbitrary mathematical reasoning.

Operational semantics, on the other hand, turns a program into a state machine. This has the benefit that this is relatively easy to do, and close to how computers work. It has the cost that it only gives semantics to whole programs, and in reality people tend to write libraries and other partial programs. (Also, infinite-state operational semantics doesn't make things obviously easier to work with than a denotational semantics.)

Operational semantics gives rise to model checking: if you have a program, you can view the sequence of states it takes on as a concrete model of temporal logic. So model checking then checks whether a temporal property overapproximates or refines a concrete model.

So basically all static analyses work by finding some efficiently representable approximation of the meaning of a program, and then using that to define an algorithm to analyse the program, typically by some form of model checking. For type systems, the approximations are the types, and in abstract interpretation, this is what the abstract domains are.

David A. Schmidt's paper Abstract Interpretation from a Denotational Semantics Perspective shows how you can see abstract domains as arising from domain theory, by noting that domains arise as the infinite limit of a series of finite approximations, and so you can get an abstract domain by picking a finite approximation rather than going all the way to the limit.

Thomas Jensen's PhD thesis, Abstract Interpretation in Logical Form, shows how an abstract interpretation of a higher-order functional language corresponds to a certain intersection type system for it. This helps illustrate the idea that abstract domains and types are both abstractions of a program. (FWIW, finding equivalences between abstract domains and type systems used to be big topic back in the 90s, but seems to have disappeared from view. I don't know why!)

Now, model checking an infinite-state system will obviously fail to terminate in general. However, it can then be made effective and algorithmic by using your abstractions (either from types or from abstract interpretation) to turn the model into a finitary one that can be exhaustively searched.

For dataflow analysis, David A. Schmidt works out this idea in detail in his paper, Data Flow Analysis is Model Checking of Abstract Interpretations, where he shows how if you build an abstraction of program traces and model-check against mu-calculus formulas, you can get back most classical dataflow analyses. (If one lesson you draw is that you should read everything David Schmidt has written, well, you won't be wrong.)

It's less common to formulate type-based analyses in this form, but some people have looked at it.

  1. There is a line of work on "higher-order model checking" which works by taking higher-order programs, using type inference for special type systems to generate abstractions for them which lie in special subclasses of pushdown systems which are decidable for the properties of interest for program analysis (eg, emptiness testing). Luke Ong wrote an overview of the area for LICS 2015, in his paper Higher-Order Model Checking: an Overview.

  2. Dave Clarke and Ilya Sergey have a nice IPL 2012 paper, A correspondence between type checking via reduction and type checking via evaluation, in which they show how you can view types as abstract values, and use this perspective to derive an abstract interpreter for a program which operates on these abstract values. They don't state it explicitly, since it was too obvious to mention, but you can check that a program e has the type int by asserting that e will eventually evaluate to the abstract value int. This is easy to check since every program in their abstract semantics terminates -- and so you can just evaluate e and see if the result is int.
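
A toy, first-order rendering of the idea in (2) -- my own sketch, not the paper's formulation -- looks like this:

(* Types as abstract values: an abstract evaluator whose values are the
   types themselves.  Checking that e has type int amounts to checking
   that abstract evaluation of e yields TInt. *)
type absval = TInt | TBool

type exp =
  | Num of int
  | Bool of bool
  | Add of exp * exp
  | If of exp * exp * exp

let rec abseval (e : exp) : absval option =
  match e with
  | Num _ -> Some TInt
  | Bool _ -> Some TBool
  | Add (e1, e2) ->
      (match abseval e1, abseval e2 with
       | Some TInt, Some TInt -> Some TInt
       | _ -> None)
  | If (c, e1, e2) ->
      (match abseval c, abseval e1, abseval e2 with
       | Some TBool, Some t1, Some t2 when t1 = t2 -> Some t1
       | _ -> None)

(* abseval (Add (Num 1, If (Bool true, Num 2, Num 3))) = Some TInt *)

Every abstract evaluation here terminates, so "does e abstractly evaluate to TInt?" is a decidable question, which is the observation that turns abstract evaluation into a typechecking method.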

Types as Grammar

However, while types can be seen as a way of stating program properties, they have a second life as a way of defining what the valid programs are. A type system can also be seen as a way of statically ruling out certain programs from consideration. That is, an expression like 3("hi"), which applies the argument "hi" to the number 3, is valid Javascript or Python code, but is not a valid program in OCaml or Java, as it would be rejected at compile time by the typechecker.

As a result, type systems can also be used to simplify the semantics: if a term is statically ruled out, you don't have to give semantics to it. The benefit of doing this is that this can dramatically simplify your static analyses.

For example, if you have a typed functional language with integers and function types, then building an abstract interpretation to do a sign analysis is simplified by the fact that you don't have to worry about function values flowing into the integer domain. You can just take the obvious abstract domains for each type and then rely on the type system to ensure that only programs where everything fits together the way you want are ever allowed, freeing you of the need for a jumbo abstract domain that abstracts all values.
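
For instance (a hedged sketch, with the lattice details made up), the abstract domain at type int can just be the ordinary sign lattice, with no case for closures at all:

(* A sign domain for integer-typed expressions.  Because the type
   system keeps function values out of integer positions, the domain
   needs no case for closures. *)
type sign = Bot | Neg | Zero | Pos | Top

let join (a : sign) (b : sign) : sign =
  match a, b with
  | Bot, x | x, Bot -> x
  | _ -> if a = b then a else Top

(* Abstract addition: the transfer function for the + operator. *)
let add (a : sign) (b : sign) : sign =
  match a, b with
  | Bot, _ | _, Bot -> Bot
  | Zero, x | x, Zero -> x
  | Pos, Pos -> Pos
  | Neg, Neg -> Neg
  | _ -> Top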

For cultural reasons, this methodology is often used in type systems research, but relatively rarely in static analysis work. Basically, if you tell a type systems researcher that you are changing the language to make it easier to analyze, they will say "oh, okay", but if you tell a static analysis researcher the same thing, they may express a degree of skepticism about your ability to update all Java code everywhere. Naturally, neither reaction is right or wrong: it's contextual. After all, there are billions of lines of Java code that won't change, and also the Java language does change from version to version.

However, as researchers we don't just pick up context from our problems, but also from our peers, and so the culture of type systems research tends to privilege new languages and the culture of static analysis research tends to privilege applicability to existing codebases, even though there is no fundamental technical reason for this linkage. After all, all static analyses exploit the structure of the language whenever they can in order to improve precision and scalability.

For example, in a typical intraprocedural dataflow analysis, we initialize all the local variables to the bottom element of the dataflow lattice. This is a sound thing to do, because the language is constrained so that local variables are freshly allocated on each procedure invocation. If, on the other hand, we were doing dataflow analysis of assembly code, different subroutines would all simultaneously affect the dataflow values assigned to each register.

As a consequence, one thing I'm looking forward to is finding out what static analysis researchers will do when they start taking a serious look at Rust. Its type system enforces really strong invariants about aliasing and data races, which ought to open the door to some really interesting analyses.

The Relationship Between the Two Views

As you can see, grammaticality constraints and program analysis are not opposed, and can overlap in effect. Furthermore, this means people writing papers on type systems are often blurry about which perspective they are taking -- in one section of a paper they may talk about types-as-properties, and in another they may talk about types-as-grammar. This ambiguity is technically harmless, but can be confusing to a reader coming to type systems from other areas within formal methods.

The reason this ambiguity is mostly harmless is because for well-designed languages there will be a nice relationship between these two views of what types are. John C Reynolds explained this, in his lovely paper The Meaning of Types: From Intrinsic to Extrinsic Semantics. (In his language, "intrinsic types" are types-as-grammar, and "extrinsic types" are types-as-property.)

Basically, he noted that when you give a denotational semantics under a types-as-grammar view, you interpret typing derivations, and when you give a denotational semantics in the types-as-property view, you interpret program terms. For types-as-grammar to make sense, its semantics must be coherent: two typing derivations of the same program must have the same semantics. (Otherwise the semantics of a program can be ambiguous.) But once coherence holds, it becomes feasible to prove the equivalence between the intrinsic and extrinsic semantics, because there is a tight connection between the syntax of the language and the typing derivations.

Noam Zeilberger and Paul-Andre Melliès wrote a POPL 2015 paper, Functors are Type Refinement Systems, where they work out the general semantic structure underpinning Reynolds' work, in terms of the relationship between intersection types (or program logic) and a base language.

Compositionality

Another cultural effect is the importance type systems research places on compositionality. It is both absolutely the case that there is no a priori reason a type system has to be compositional, and also that most type systems researchers -- including myself! -- will be very doubtful of any type system which is not compositional.

To make this concrete, here's an example of a type system which is pragmatic, reasonable, and yet which someone like me would still find distasteful.

Consider a Java-like language whose type system explicitly marks nullability. We will say that method argument (x : C) means that x is a possibly null value of type C, and an argument x : C! means that x is definitely of class C, with no null value. Now, suppose that in this language, we decide that null tests should refine the type of a variable:

                       // x : C outside the if-block
  if (x !== null) { 
    doSomething(x);    // x : C! inside the if-block
  } 

Basically, we checked that x was not null, so we promoted its type from C to C!. This is a reasonable thing to do, which industrial languages like Kotlin support, and yet you will be hard-pressed to find many formal type theories supporting features like this.

The reason is that the culture of type theory says that type systems should satisfy a substitution principle: a variable stands in for a term, and so substituting a term of the same type as the variable should result in a typeable term. In this case, suppose that e is a very complicated expression of type C. Then if we do a substitution of e for x, we get:

  if (e !== null) { 
    doSomething(e);    // e : C! -- but how?
  } 

Now we somehow have to ensure that e has the type C! inside the scope of the conditional. For variables this requires updating the context, but for arbitrary terms it will be much harder, if it is even possible at all!

The culture of static analysis says that since it's easy for variables, we should handle that case, whereas the culture of type theory says that anything you do for variables should work for general terms. Again, which view is right or wrong is contextual: it's genuinely useful for Kotlin programmers to have dataflow-based null-tracking, but at the same time it's much easier to reason about programs if substitution works.

The reason type theorists have this culture is that type theory has two parents. On one side are Scott and Strachey, who consciously and explicitly based denotational semantics upon the compositionality principle; and the other side comes from the structural proof theory originated by Gerhard Gentzen, which was invented precisely to prove cut-eliminability (which basically just is a substitution principle). As a result, the type-theoretic toolbox is full of tools which both assume and maintain compositionality. So giving that up (or moving past it, depending on how you want to load the wording emotionally) is something which requires very careful judgement.

For example, the Typed Racket language extends the dynamically-typed Racket language with types, and to support interoperation with idiomatic Racket code (which uses control-flow sensitive type tests), they do flow-sensitive typing. At ESOP 2011, Arjun Guha, Claudiu Saftoiu, and Shriram Krishnamurthi wrote a paper about a particularly elegant way of combining flow analysis and typechecking, Typing Local Control and State Using Flow Analysis.

Conclusion

Finally, one might wonder whether it is worth it to try developing a unified perspective on type theory and static analysis. It's a lot of work to learn any one of these things, and (for example) David Schmidt's papers, while deeply rewarding, are not for the faint of heart. So maybe we should all specialize and not worry about what members of cousin tribes are doing?

All I can say is that I think I benefited tremendously from the fact that (largely by accident) I ended up learning separation logic, denotational semantics and proof theory as part of my PhD. There actually is a wholeness or consilience to the variety of techniques that the semantics community has devised, and insights apply across domains in surprising ways.

My recent PLDI paper with Jeremy Yallop is a nice example of this. Very little in this paper is hard, but what makes it easy is that we felt free to switch perspectives as we needed.

Because I had learned to like automata theory from reading papers on model checking, I ended up reading about Brüggemann-Klein and Wood's characterization of the deterministic regular languages. Then, because I share the aesthetics of denotational semantics, I realized that they had invented a superior, compositional characterization of the FOLLOW sets used in LL(1) parsing. Then, because I knew about the importance of substitution from type theory, this led me to switch from the BNF presentation of grammars to the mu-regular expressions. This let us offer the traditional parser combinator API, while under the hood having a type system which identified just the LL(1)-parseable grammars. Moreover, the type inference algorithm we use in the implementation is basically a dataflow analysis. And that set things up in a form to which multistage programming techniques were applicable, letting us derive parser combinators that run faster than yacc.

But beyond the direct applications like that, it is just good to be able to go to a conference and be interested by what other people are doing. Everyone attending has developed deep expertise in some very esoteric subjects, and learning enough to learn from them is valuable. Research communities are long-running conversations, and being able to hold up your end of the conversation makes the community work better.

Monday, August 12, 2019

Michael Arntzenius on the Future of Coding Podcast

My PhD student Michael Arntzenius is on the Future of Coding podcast, talking about the things he has learned while developing Datafun. In the course of a PhD, people tend to work on extremely refractory problems, which is a useful discipline for developing and sharpening your worldview. Since Michael is a particularly thoughtful person, he's developed a particularly thoughtful view of computing, and so the interview is well worth reading or listening to.

This episode explores the intersections between various flavors of math and programming, and the ways in which they can be mixed, matched, and combined. Michael Arntzenius, "rntz" for short, is a PhD student at the University of Birmingham building a programming language that combines some of the best features of logic, relational, and functional programming. The goal of the project is "to find a sweet spot of something that is more powerful than Datalog, but still constrained enough that we can apply existing optimizations to it and imitate what has been done in the database community and the Datalog community." The challenge is combining the key part of Datalog (simple relational computations without worrying too much underlying representations) and of functional programming (being able to abstract out repeated patterns) in a way that is reasonably performant.

This is a wide-ranging conversation including: Lisp macros, FRP, Eve, miniKanren, decidability, computability, higher-order logics and their correspondence to higher-order types, lattices, partial orders, avoiding logical paradoxes by disallowing negation (or requiring monotonicity) in self reference (or recursion), modal logic, CRDTS (which are semi-lattices), and the place for formalism is programming. This was a great opportunity for me to brush up on (or learn for the first time) some useful mathematical and type theory key words. Hope you get a lot out of it as well -- enjoy!

I'd like to thank Steve Krouse for interviewing Michael. This interview series is a really nice idea -- a lot of the intuition or worldview underpinning research programs never makes it into a paper, but can be brought out nicely in a conversation.

Also, if you prefer reading to listening, the link also has a transcript of the podcast available.

Thursday, July 11, 2019

All Comonads, All The Time

(If you are on the POPL 2020 program committee, please stop reading until after your reviews are done.)

It's been a few months since I've last blogged, so I figure I should say what I have been up to lately. The answer seems to be comonads. I have two new draft papers, both of which involve comonadic type theories in a fairly critical way. (I suppose that this makes me a type-theoretic hipster, who doesn't do monads now that they're popular.)