## Wednesday, July 22, 2015

### How to implement a spreadsheet

My friend Lindsey Kuper recently remarked on Twitter that spreadsheets were commonly understood to be the most widely used dataflow programming model, and asked if there was a simple implementation of them.

As chance would have it, this was one of the subjects of my thesis work -- as part of it, I wrote and proved the correctness of a small dataflow programming library. This program has always been one of my favorite little higher-order imperative programs, and in this post I'll walk through the implementation. (You can find the code here.)

As for the proof, you can look at this TLDI paper for some idea of the complexities involved. These days it could all be done more simply, but the pressure of proving everything correct did have a very salutary effect in keeping everything as simple as possible.

The basic idea of a spreadsheet (or other dataflow engine) is that you have a collection of places called cells, each of which contains an expression. An expression is basically a small program, which has the special ability to ask other cells what their value is. The reason cells are interesting is because they do memoization: if you ask a cell for its value twice, it will only evaluate its expression the first time. Furthermore, it's also possible for the user to modify the expression a cell contains (though we don't want cells to modify their code as they execute).

So let's turn this into code. I'll use Ocaml, because ML modules make describing the interface particularly pretty, but it should all translate into Scala or Haskell easily enough. We'll start by giving a module signature that writes down the interface.

    module type CELL = sig


We start by declaring two abstract types, the type 'a cell of cells containing a value of type 'a, and the type 'a exp of expressions returning a value of type 'a.

      type 'a cell
      type 'a exp


Now, the trick we are using in implementing expressions is that we treat them as a monadic type. By re-using our host language as the language of terms that lives inside of a cell, we don't have to implement parsers or interpreters or anything like that. This is a familiar trick to Haskell programmers, but it's still a good trick! So we first give the monadic bind and return operators:


      val return : 'a -> 'a exp
      val (>>=)  : 'a exp -> ('a -> 'b exp) -> 'b exp


And then we can specify the two operations that are unique to our monadic DSL: reading a cell (which we call get), and creating a new cell (which we call cell). It's a bit unusual to be able to create new cells as a program executes, but it's rather handy.

      val cell : 'a exp -> 'a cell exp
      val get  : 'a cell -> 'a exp


Aside from that, there are no other operations in the monadic expression DSL. Now we can give the operations that don't live in the monad. First is the set operation, which modifies the contents of a cell. This should not be called from within an 'a exp term --- in Haskell, that might be enforced by giving set an IO type.


      val set : 'a cell -> 'a exp -> unit


Finally, there's the run operation, which we use to run an expression. This is useful mainly for looking at the values of cells from the outside.

      val run : 'a exp -> 'a
    end


Now, we can move on to the implementation.


    module Cell : CELL = struct

The implementation of cells is at the heart of the dataflow engine, and is worth discussing in detail. A cell is a record with five fields:
      type 'a cell = {
        mutable code      : 'a exp;
        mutable value     : 'a option;
        mutable reads     : ecell list;
        mutable observers : ecell list;
        id                : int;
      }

• The code field of this record is the pointer to the expression that the cell contains. This field is mutable because we can alter the contents of a cell!
• The value field is an option type, which is None if the cell has not been evaluated yet, and Some v if the code had evaluated to v.
• The reads field is a list containing all of the cells that were read when the code in the code field was executed. If the cell hasn't been evaluated yet, then this is the empty list.
• The observers field is a list containing all of the cells that have read this cell when they were evaluated. So the reads field lists all the cells this cell depends on, and the observers field lists all the cells which depend on this cell. If this cell hasn't been evaluated yet, then observers will of course be empty.
• The id contains an integer which is the unique id of each cell.

Both reads and observers store lists of dependent cells, and dependent cells can be of any type. In order to build a list of heterogeneous cells, we need to introduce a type ecell, which just hides the cell type under an existential (using Ocaml's new GADT syntax):

      and ecell = Pack : 'a cell -> ecell


We can now also give the concrete type of expressions. We define an element of expression type 'a exp to be a thunk, which when forced returns (a) a value of type 'a, and (b) the list of cells that it read while evaluating:

      and 'a exp = unit -> ('a * ecell list)


Next, let's define a couple of helper functions. The id function just returns the id of an existentially packed ecell, and the union function merges two lists of ecells while removing duplicates.

      let id (Pack c) = c.id

      let rec union xs ys =
        match xs with
        | [] -> ys
        | x :: xs' ->
          if List.exists (fun y -> id x = id y) ys then
            union xs' ys
          else
            x :: union xs' ys


The return function just produces a thunk which returns a value and an empty list of read dependencies, and the monadic bind (>>=) sequentially composes two computations, and returns the union of their read dependencies.

      let return v () = (v, [])

      let (>>=) cmd f () =
        let (a, cs) = cmd () in
        let (b, ds) = f a () in
        (b, union cs ds)


To implement the cell operator, we need a source of fresh ids. So we create an integer reference, and new_id bumps the counter before returning a fresh id.

      let r = ref 0
      let new_id () = incr r; !r


Now we can implement cell. This function takes an expression exp, uses new_id to create a unique id, and then initializes a cell with the appropriate values -- the code field is exp, the value field is None (because the cell is created in an unevaluated state), the reads and observers fields are empty (again because the cell is unevaluated), and the id field is set to the value we generated.

This is returned with an empty list of read dependencies because we didn't read anything to construct a fresh cell!

      let cell exp () =
        let n = new_id () in
        let cell = {
          code = exp;
          value = None;
          reads = [];
          observers = [];
          id = n;
        } in
        (cell, [])


To read a cell, we need to implement the get operation. This works a bit like memoization. First, we check to see if the value field already has a value. If it does, then we can return that. If it is None, then we have a bit more work to do.

First, we have to evaluate the expression in the code field, which returns a value v and a list of read dependencies ds. We can update the value field to Some v, and then set the reads field to ds. Then, we add this cell to the observers field of every read dependency in ds, because this cell is observing them now.

Finally, we return the value v as well as a list containing the current cell (which is the only dependency of reading the cell).

      let get c () =
        match c.value with
        | Some v -> (v, [Pack c])
        | None ->
          let (v, ds) = c.code () in
          c.value <- Some v;
          c.reads <- ds;
          List.iter (fun (Pack d) -> d.observers <- (Pack c) :: d.observers) ds;
          (v, [Pack c])


This concludes the implementation of the monadic expression language, but our API also includes an operation to modify the code in a cell. This requires more code than just updating a field -- we have to invalidate everything which depends on the cell, too. So we need some helper functions to do that.

The first helper is remove_observer o o'. This removes the cell o from the observers field of o'. It does this by comparing the id field (which was in fact put in for this very purpose).

      let remove_observer o (Pack c) =
        c.observers <- List.filter (fun o' -> id o <> id o') c.observers


This function is used to implement invalidate, which takes a cell, marks it as invalid, and then marks everything which transitively depends on it invalid too. It does this by saving the reads and observers fields into the variables rs and os. Then, it marks the current cell as invalid by setting the value field to None, and setting the observers and reads fields to the empty list. Then, it removes the current cell from the observers list of every cell in the old read set rs, and finally it calls invalidate recursively on every observer in os.

      let rec invalidate (Pack c) =
        let os = c.observers in
        let rs = c.reads in
        c.observers <- [];
        c.value <- None;
        c.reads <- [];
        List.iter (remove_observer (Pack c)) rs;
        List.iter invalidate os


This then makes it easy to implement set -- we just update the code, and then invalidate the cell (since the memoized value is no longer valid).


      let set c exp =
        c.code <- exp;
        invalidate (Pack c)


Finally, we can implement the run function by forcing the thunk and throwing away the read dependencies.

      let run cmd = fst (cmd ())
    end


That's pretty much it. I think it's quite pleasant to see how little code it takes to implement such an engine.
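To see the engine at work, here is a compacted, self-contained copy of the module together with a small client. The client, and in particular the `count` reference used to observe when the summing expression actually runs, are illustrative additions of mine, not part of the library.

```ocaml
(* Compacted copy of the Cell module, plus a client that sums two cells.
   The count reference is an addition for this example: it ticks every
   time the sum's code is actually evaluated. *)
module Cell = struct
  type 'a cell = {
    mutable code      : 'a exp;
    mutable value     : 'a option;
    mutable reads     : ecell list;
    mutable observers : ecell list;
    id                : int;
  }
  and ecell = Pack : 'a cell -> ecell
  and 'a exp = unit -> ('a * ecell list)

  let id (Pack c) = c.id

  let rec union xs ys =
    match xs with
    | [] -> ys
    | x :: xs' ->
      if List.exists (fun y -> id x = id y) ys
      then union xs' ys
      else x :: union xs' ys

  let return v () = (v, [])

  let ( >>= ) cmd f () =
    let (a, cs) = cmd () in
    let (b, ds) = f a () in
    (b, union cs ds)

  let r = ref 0
  let new_id () = incr r; !r

  (* Cells start unevaluated: no value, no reads, no observers. *)
  let cell exp () =
    ({ code = exp; value = None; reads = []; observers = []; id = new_id () }, [])

  (* Memoizing read: run the code at most once, recording dependencies. *)
  let get c () =
    match c.value with
    | Some v -> (v, [Pack c])
    | None ->
      let (v, ds) = c.code () in
      c.value <- Some v;
      c.reads <- ds;
      List.iter (fun (Pack d) -> d.observers <- Pack c :: d.observers) ds;
      (v, [Pack c])

  let remove_observer o (Pack c) =
    c.observers <- List.filter (fun o' -> id o <> id o') c.observers

  (* Mark a cell, and everything that transitively observed it, stale. *)
  let rec invalidate (Pack c) =
    let os = c.observers and rs = c.reads in
    c.observers <- []; c.value <- None; c.reads <- [];
    List.iter (remove_observer (Pack c)) rs;
    List.iter invalidate os

  let set c exp = c.code <- exp; invalidate (Pack c)

  let run cmd = fst (cmd ())
end

open Cell

let count = ref 0
let a = run (cell (return 1))
let b = run (cell (return 2))
let sum =
  run (cell (get a >>= fun x -> get b >>= fun y -> (incr count; return (x + y))))

let () =
  assert (run (get sum) = 3);
  assert (run (get sum) = 3);  (* memoized: the sum's code ran once *)
  assert (!count = 1);
  set a (return 10);           (* invalidates sum transitively *)
  assert (run (get sum) = 12);
  assert (!count = 2)
```

The second read of sum returns the memoized value without re-running its code, and `set a` pushes staleness through to sum via the observer edges.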

One thing I like about this program is that it also shows off how gc-unfriendly dataflow is: we track dependencies in both directions, and as a result the whole graph is always reachable. So the usual gc heuristics will collect nothing as long as anything is reachable. You can fix the problem by using weak references to the observers, but weak references are also horribly gc-unfriendly (usually there's a traversal of every weak reference on every collection).

So I think it's very interesting that there is a natural class of programs for which the reachability heuristic just doesn't work, and this indicates that some interesting semantics remains to be done to explain what the correct memory management strategy for these kinds of programs is.

## Tuesday, March 17, 2015

### Abstract Binding Trees, an addendum

It struck me after the last post that it might be helpful to give an example using abstract binding trees in a more nontrivial way. The pure lambda calculus has a very simple binding structure, and pretty much anything you do will work out. So I decided to show how ABTs can be used to easily support a much more involved form of binding -- namely, pattern matching.

This makes for a nice example, because it is a very modular and well-structured language feature that naturally has a rather complex binding structure. However, as we will see, it fits very easily into an ABT-based API. I will assume the implementation in the previous post, focusing only on the use of ABTs.

As is usual, we will need to introduce a signature module for the language.

    module Lambda =
    struct

Since pattern matching really only makes sense with a richer set of types, let's start by adding sums and products to the type language.
      type tp = Arrow of tp * tp | One | Prod of tp * tp | Sum of tp * tp


Next, we'll give a datatype of patterns. The PWild constructor is the wildcard pattern $\_$, the PVar constructor is a variable pattern, the PUnit constructor is the unit pattern, the PPair(p, p') constructor is the pair pattern, and the PInl p and PInr p constructors are the patterns for the left and right injections of the sum type.

      type pat = PWild | PVar | PUnit | PPair of pat * pat
               | PInl of pat | PInr of pat


Now, we can give the datatype for the language signature itself. We add pairs, units, and sums to the language, as well as a case expression which takes a scrutinee, and a list of branches (a list of patterns and the corresponding case arms).

      type 'a t = Lam of 'a | App of 'a * 'a | Annot of tp * 'a
                | Unit | Pair of 'a * 'a | Inl of 'a | Inr of 'a
                | Case of 'a * (pat * 'a) list


One thing worth noting is that the datatype of patterns has no binding information in it at all. The basic idea is that we will represent (say) a pair elimination let (x,y) = e in e' as the constructor Case(e, [(PPair(PVar, PVar), Abs(x, Abs(y, e')))]) (suppressing useless Tm constructors). So the PVar constructor is merely an indication that the corresponding term should be an abstraction, with the number of abstractions determined by the shape of the pattern. This representation is first documented in Rob Simmons's paper Structural Focalization.
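To make that invariant concrete, here is a tiny standalone sketch: the pat type is the one above, and the binders function (my own illustration, not part of the development) computes how many Abs wrappers the corresponding branch body must carry — exactly the number of PVar leaves in the pattern.

```ocaml
(* pat is the datatype from the post; binders counts how many Abs
   wrappers the branch body paired with a pattern must have. *)
type pat = PWild | PVar | PUnit | PPair of pat * pat
         | PInl of pat | PInr of pat

let rec binders = function
  | PWild | PUnit -> 0
  | PVar -> 1
  | PPair (p, p') -> binders p + binders p'
  | PInl p | PInr p -> binders p

let () =
  (* let (x,y) = e in e' is Case(e, [(PPair(PVar, PVar), Abs(x, Abs(y, e')))]),
     so its branch body carries two abstractions: *)
  assert (binders (PPair (PVar, PVar)) = 2);
  assert (binders (PInl (PPair (PWild, PVar))) = 1)
```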

This is really the key thing that gives the ABT interface its power: binding trees have only one binding form, and we never introduce any others.

Now we can give the map and join operations for this signature.


      let map f = function
        | Lam x         -> Lam (f x)
        | App (x, y)    -> App (f x, f y)
        | Annot (t, x)  -> Annot (t, f x)
        | Unit          -> Unit
        | Pair (x, y)   -> Pair (f x, f y)
        | Inl x         -> Inl (f x)
        | Inr x         -> Inr (f x)
        | Case (x, pys) -> Case (f x, List.map (fun (p, y) -> (p, f y)) pys)

      let join m = function
        | Lam x         -> x
        | App (x, y)    -> m.join x y
        | Annot (_, x)  -> x
        | Unit          -> m.unit
        | Pair (x, y)   -> m.join x y
        | Inl x         -> x
        | Inr x         -> x
        | Case (x, pys) -> List.fold_right (fun (_, y) -> m.join y) pys x
    end


As usual, we construct the syntax by applying the Abt functor.


    module Syntax = Abt(Lambda)


We can now define a bidirectional typechecker for this language. Much of the infrastructure is the same as in the previous post.


    module Bidir = struct
      open Lambda
      open Syntax
      type ctx = (var * tp) list


We do, however, extend the is_check and is_synth functions to handle the new operations in the signature. Note that case statements are viewed as checking forms.

      let is_check = function
        | Tm (Lam _) | Tm Unit | Tm (Pair (_, _))
        | Tm (Inl _) | Tm (Inr _) | Tm (Case (_, _)) -> true
        | _ -> false
      let is_synth e = not (is_check e)

      let unabs e =
        match out e with
        | Abs (x, e) -> (x, e)
        | _ -> assert false


When we reach the typechecker itself, most of it --- the check and synth functions --- is unchanged. We have to add new cases to check to handle the new value forms (injections, pairs, and units), but they are pretty straightforward.

      let rec check ctx e tp =
        match out e, tp with
        | Tm (Lam t), Arrow (tp1, tp') ->
          let (x, e') = unabs t in
          check ((x, tp1) :: ctx) e' tp'
        | Tm (Lam _), _ -> failwith "expected arrow type"
        | Tm Unit, One -> ()
        | Tm Unit, _ -> failwith "expected unit type"
        | Tm (Pair (e, e')), Prod (tp, tp') ->
          let () = check ctx e tp in
          let () = check ctx e' tp' in
          ()
        | Tm (Pair (_, _)), _ -> failwith "expected product type"
        | Tm (Inl e), Sum (tp, _) -> check ctx e tp
        | Tm (Inr e), Sum (_, tp) -> check ctx e tp
        | Tm (Inl _), _
        | Tm (Inr _), _ -> failwith "expected sum type"


The big difference is in checking the Case form. Now, we need to synthesize a type for the scrutinee, and then check that each branch is well-typed. This works by calling the check_branch function on each branch, passing it the pattern, arm, type and result type as arguments. For reasons that will become apparent, the pattern and its type are passed in as singleton lists. (I don't do coverage checking here, only because it doesn't interact with binding in any way.)

        | Tm (Case (e, branches)), tp ->
          let tp' = synth ctx e in
          List.iter (fun (p, e) -> check_branch ctx [p] e [tp'] tp) branches
        | body, _ when is_synth body ->
          if tp = synth ctx e then () else failwith "Type mismatch"
        | _ -> assert false

      and synth ctx e =
        match out e with
        | Var x -> (try List.assoc x ctx with Not_found -> failwith "unbound variable")
        | Tm (Annot (tp, e)) -> let () = check ctx e tp in tp
        | Tm (App (f, e)) ->
          (match synth ctx f with
           | Arrow (tp, tp') -> let () = check ctx e tp in tp'
           | _ -> failwith "Applying a non-function!")
        | body when is_check body -> failwith "Cannot synthesize type for checking term"
        | _ -> assert false

The way that branch checking works is by steadily deconstructing a list of patterns (and their types) into smaller lists.
      and check_branch ctx ps e tps tp_result =
        match ps, tps with


If there are no more patterns and their types, we are done, and can check that the arm has the right type.

        | [], [] -> check ctx e tp_result


If we have a variable pattern, we unabstract the arm, bind that variable to the type, and recur on the smaller list of patterns and types. Note that this is the only place we have to do anything at all with binding, and it's trivial!

        | (PVar :: ps), (tp :: tps) ->
          let (x, e) = unabs e in
          check_branch ((x, tp) :: ctx) ps e tps tp_result


Wildcards and unit patterns work the same way, except that they don't bind anything.

        | (PWild :: ps), (_ :: tps) ->
          check_branch ctx ps e tps tp_result
        | (PUnit :: ps), (One :: tps) ->
          check_branch ctx ps e tps tp_result
        | (PUnit :: _), (_ :: _) ->
          failwith "expected term of unit type"


Pair patterns are deconstructed into their two subpatterns, and the product type they are checked against is broken into its two components, and then the lists of patterns and types are lengthened with these subproblems.

        | (PPair (p, p') :: ps), (Prod (tp, tp') :: tps) ->
          check_branch ctx (p :: p' :: ps) e (tp :: tp' :: tps) tp_result
        | (PPair (_, _) :: _), (_ :: _) ->
          failwith "expected term of product type"


Sum patterns work by recurring on the sub-pattern, dropping the left or right part of the sum type, as appropriate.

        | (PInl p :: ps), (Sum (tp, _) :: tps) ->
          check_branch ctx (p :: ps) e (tp :: tps) tp_result
        | (PInl _ :: _), (_ :: _) ->
          failwith "expected sum type"
        | (PInr p :: ps), (Sum (_, tp) :: tps) ->
          check_branch ctx (p :: ps) e (tp :: tps) tp_result
        | (PInr _ :: _), (_ :: _) ->
          failwith "expected sum type"
        | _ -> assert false
    end


That's it! So I hope it's clear that ABTs can handle complex binding forms very gracefully.

## Friday, March 13, 2015

### Abstract Binding Trees

On reddit, Paul Chiusano asked me:
> I have a somewhat related question - do you know if it is possible / easy to implement one or both typechecking algorithms using a representation of terms / types where polymorphic recursion is used to enforce proper variable scoping, a la bound?

The answer to this question is no, I haven't thought about this at all. Except when I'm using a proper proof assistant, I don't bother with using types to track scoping. In fact, I tend to view term representations with highly-structured syntax as strong evidence for layering violations: the parser shouldn't know too much about the binding or type structure of a program, because conceptually binding structure is determined after we produce the syntax tree. This is made evident when you look at languages like ML or Haskell, which have features which make it impossible to know which names are in scope prior to doing significant amounts of work (because of import in Haskell, or open in ML).

That said, the problem which motivates types-for-binding is a real one: how can we make substitution safe, once and for all?

The technique I favor is what Bob Harper has dubbed "abstract binding trees", and documented in Chapter 1 of his book. I think it originates in Nuprl, and is also closely related to Martin-Löf's "system of arities" (see Chapter 3 of Programming in Martin-Löf's Type Theory).

To understand abstract binding trees, it helps to start thinking about ordinary abstract syntax trees. ASTs are really just constants $c$, or applications of a term constructor $f$ to some sequence of terms $f(t_1, \ldots t_n)$. (If you are a lisp programmer, you can think of these as s-expressions, and Prolog programmers can think of these as the ground elements of the Herbrand universe.)

Then, to get ABTs, we extend ASTs with two constructors, one for variables $x$, and one for abstractions $x.\;t$. Abstractions are purely binders, with no reduction rules associated with them, and abstract binding trees are simply equated up to alpha-equivalence.

By programming this idea up, we can simply parameterize over the set of term constructors, and have everything to do with scoping and substitution be programmed once and for all. So we end up in a situation where everything Just Works, and all the types are satisfyingly uncomplicated. Furthermore, the implementation is simple enough to fit into a blog post. (All the code in this post is in this gist.)

I'll do it in Ocaml, and so we'll need to define some signatures that Haskell programmers get out of the box. First, we need a signature for functorial types. This signature is satisfied by any parametric datatype with a map functional. It also needs to send identities to identities and respect composition, but we can't represent that unless we're in Agda or Coq.

    module type FUNCTOR = sig
      type 'a t
      val map : ('a -> 'b) -> 'a t -> 'b t
    end


Next, we'll introduce a type of monoids, with a unit and combining operation. I'll just use a record here, since that's a lot lighter weight than ML's modules.

    type 'a monoid = {unit : 'a ; join : 'a -> 'a -> 'a}


With these preliminaries out of the way, we can move on towards giving the meat of the representation. First, I'll give a type of variables (and sets of variables). This time, instead of being abstract, I'll just be concrete and use strings as the representation of variables. This is partly because this is what I actually do, but partly also for the rhetorical effect of emphasising that an often-maligned representation technique is A-ok.


    type var = string
    module V = Set.Make(struct type t = var let compare = compare end)

Now, I'll give the signature for abstract binding trees.
    module type ABT = sig
      type 'a signature
      type 'a f = Var of var | Abs of var * 'a | Tm of 'a signature
      val map : ('a -> 'b) -> 'a f -> 'b f

      type t
      val into : t f -> t
      val out  : t -> t f

      val freevars : t -> V.t
      val var : V.elt -> t
      val abs : V.elt * t -> t
      val tm  : t signature -> t

      val subst : t -> var -> t -> t
    end


Here, the type 'a signature tells us what term constructors we have. We augment this with the 'a f type, which adds a new term former for variables Var of var, as well as a term constructor Abs of var * 'a for scoping a variable. Then we have an abstract type of terms t, which we can wrap and unwrap one level using the into and out functions.

We can also ask for the free variables of a term with freevars, and we have some convenience constructors for variables, abstractions and terms, as well as the subst function which does the alpha-equivalence respecting substitution. We could also put in other operations (such as equality up to alpha-equivalence), but substitution suffices to illustrate the technique.

We'll also say just what a signature is, now. Basically, it's any functor which can collect up the elements of a monoid. This idea is from Björn Bringert and Aarne Ranta's paper A Pattern for Almost-Compositional Functions.


    module type SIGNATURE = sig
      include FUNCTOR

      val join : 'a monoid -> 'a t -> 'a
    end


We are finally in a position to give the implementation of ABTs. It's an Ocaml functor taking a signature F, and returning an instance of the ABT signature where the 'a signature type is identified with the type of the argument.

    module Abt(F : SIGNATURE) : ABT with type 'a signature := 'a F.t =
    struct
      type 'a f =
        | Var of var
        | Abs of var * 'a
        | Tm of 'a F.t

      let map f = function
        | Var x -> Var x
        | Abs (x, e) -> Abs (x, f e)
        | Tm t -> Tm (F.map f t)


As you can see, the type 'a f is just the same as in the signature, and the map function just lifts the signature type's map operation to the type augmented with binding structure.

      type t = In of V.t * t f


The concrete representation of terms is just the obvious recursive type built over the signature, augmented with a set of free variables at each node. This makes computing the free variables very easy:

      let freevars (In (vs, _)) = vs


In addition, unrolling the type t simply throws away some variables and returns the subterm:

      let out (In (_, t)) = t


On the other hand, building the embedding function with smart constructors is a little more work. First, we note that sets of variables form a monoid, with the empty set as the unit and the set union as the join operation.

      let m = {unit = V.empty; join = V.union}

We can use this to define the smart constructors. The var function simply says that its free variables are its one variable itself, and in abs(z,e), the free variables are the free variables of e minus z. For the Tm t case, we use the join operation that we required the signature to possess.
      let var x = In (V.singleton x, Var x)
      let abs (z, e) = In (V.remove z (freevars e), Abs (z, e))
      let tm t = In (F.join m (F.map freevars t), Tm t)

Then we can define into just by using these constructors.
      let into = function
        | Var x -> var x
        | Abs (x, e) -> abs (x, e)
        | Tm t -> tm t


Now all that remains is to implement capture-avoiding substitution. First, we give a fresh function which returns a new renaming of v which is not in a set of names vs. It does this by sticking primes onto v until it finds one that isn't in vs. It's possible to do something smarter, but it serves for a blog post.

      let rec fresh vs v =
        if V.mem v vs then fresh vs (v ^ "'") else v

We also need a variable renaming function. rename x y e renames x to y in the body e, and just walks down the term structurally. If it hits an abstractor, it will stop if the name is shadowed, and otherwise continue on down. Here, we rely on the functoriality of the signature to handle the Tm case.
      let rec rename x y (In (_, t)) =
        match t with
        | Var z -> if x = z then var y else var z
        | Abs (z, e) -> if x = z then abs (z, e) else abs (z, rename x y e)
        | Tm v -> tm (F.map (rename x y) v)


We can now give the definition of capture-avoiding substitution. At each abstraction, it does the usual thing of choosing a fresh identifier and renaming. It's easy to do this more efficiently (for instance with a parallel substitution), but it's not worth bothering with until it becomes a bottleneck.

      let rec subst t x body =
        match out body with
        | Var z when x = z -> t
        | Var z -> var z
        | Abs (z, e) ->
          let z' = fresh (V.union (freevars t) (freevars body)) z in
          let e' = subst t x (rename z z' e) in
          abs (z', e')
        | Tm body -> tm (F.map (subst t x) body)
    end
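As a sanity check, here is the implementation above flattened into a standalone sketch, specialized to a hypothetical one-constructor signature (just application, my choice for brevity), exercising the capture-avoiding case of subst.

```ocaml
(* Standalone copy of the ABT code, with a signature containing a single
   App constructor, to exercise capture-avoiding substitution. *)
type var = string
module V = Set.Make (struct type t = var let compare = compare end)
type 'a monoid = { unit : 'a; join : 'a -> 'a -> 'a }

module F = struct
  type 'a t = App of 'a * 'a
  let map f (App (x, y)) = App (f x, f y)
  let join m (App (x, y)) = m.join x y
end

type 'a f = Var of var | Abs of var * 'a | Tm of 'a F.t
type t = In of V.t * t f

let freevars (In (vs, _)) = vs
let out (In (_, t)) = t

let m = { unit = V.empty; join = V.union }
let var x = In (V.singleton x, Var x)
let abs (z, e) = In (V.remove z (freevars e), Abs (z, e))
let tm t = In (F.join m (F.map freevars t), Tm t)

let rec fresh vs v = if V.mem v vs then fresh vs (v ^ "'") else v

let rec rename x y (In (_, t)) =
  match t with
  | Var z -> if x = z then var y else var z
  | Abs (z, e) -> if x = z then abs (z, e) else abs (z, rename x y e)
  | Tm v -> tm (F.map (rename x y) v)

let rec subst t x body =
  match out body with
  | Var z when x = z -> t
  | Var z -> var z
  | Abs (z, e) ->
    let z' = fresh (V.union (freevars t) (freevars body)) z in
    abs (z', subst t x (rename z z' e))
  | Tm b -> tm (F.map (subst t x) b)

(* [y/x](y. x y): naive substitution would capture y, so the binder is
   renamed to y' and the result is y'. y y'. *)
let result = subst (var "y") "x" (abs ("y", tm (F.App (var "x", var "y"))))

let () =
  match out result with
  | Abs (z, e) ->
    assert (z = "y'");
    (match out e with
     | Tm (F.App (l, r)) -> assert (out l = Var "y"); assert (out r = Var "y'")
     | _ -> assert false)
  | _ -> assert false
```

Note that the free-variable sets cached at each node make the freshness check cheap: the result's free variables are just {y}, as expected.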


I'll finish off with a simple example of how we might use it. Let's proceed by giving a syntax for the simply-typed lambda calculus, in a bidirectional style. Note that the signature type 'a t simply has one constructor for each of the term formers of the lambda calculus, and it has no binding information in it at all -- all binding is handled through the ABT interface. So the constructor for functions, Lam of 'a, has a single subterm -- we'll use typechecking to ensure that the subterm is of the form Abs(z, e).

    module Lambda =
    struct
      type tp = Base | Arrow of tp * tp
      type 'a t = Lam of 'a | App of 'a * 'a | Let of 'a * 'a | Annot of tp * 'a

      let map f = function
        | Lam x        -> Lam (f x)
        | App (x, y)   -> App (f x, f y)
        | Let (x, y)   -> Let (f x, f y)
        | Annot (t, x) -> Annot (t, f x)

      let join m = function
        | Lam x        -> x
        | App (x, y)   -> m.join x y
        | Let (x, y)   -> m.join x y
        | Annot (_, x) -> x
    end


With this signature in hand, we can create our syntax:

    module Syntax = Abt(Lambda)

Now, here's a simple bidirectional typechecker for this calculus.
    module Bidir = struct
      open Lambda
      open Syntax

Contexts are just lists of variables and their types.
      type ctx = (var * tp) list

In a bidirectional typechecker, every term is either used to synthesize a type, or is checked against a type. Here, we give predicates to check that.
      let is_synth = function
        | Tm (Lam _) | Tm (Let (_, _)) -> false
        | _ -> true
      let is_check e = not (is_synth e)

Now, we give the unabs function, which deconstructs an abstraction and fails with an assertion error if it isn't handed an abstraction.
      let unabs e =
        match out e with
        | Abs (x, e) -> (x, e)
        | _ -> assert false


The reasoning can be seen in the definition of the check ctx e tp function, which checks that term e has type tp in context ctx.

      let rec check ctx e tp =
        match out e, tp with
        | Tm (Lam t), Arrow (tp1, tp') ->
          let (x, e') = unabs t in
          check ((x, tp1) :: ctx) e' tp'

In particular, in the lambda case, we check that the outer constructor is Lam, and then unabstract the body to get the binder and subterm. This should always succeed (or else we've been passed an ill-formed binding tree). It might be possible to tighten up the definition of signatures, but since ABTs are constructed by the parser, errors are likely to be few (and easily detected by Quickcheck-y techniques).
        | Tm (Lam _), _ -> failwith "expected arrow type"
        | Tm (Let (e', t)), _ ->
          let (x, e'') = unabs t in
          let tp1 = synth ctx e' in
          check ((x, tp1) :: ctx) e'' tp


The let case works similarly to the lambda case.

        | body, _ when is_synth body ->
          if tp = synth ctx e then () else failwith "Type mismatch"
        | _ -> assert false


If we know a term is synthesizing, we can ask for its type, and then see if it's equal to the type we are checking against. Otherwise, we have ill-formed syntax.

      and synth ctx e =
        match out e with
        | Var x -> (try List.assoc x ctx with Not_found -> failwith "unbound variable")


If we want to synthesize a type for a variable, we just look it up in the context.

        | Tm (Annot (tp, e)) -> let () = check ctx e tp in tp


If we have a type-annotated term, we can use that to synthesize a type for it.

        | Tm (App (f, e)) ->
          (match synth ctx f with
           | Arrow (tp, tp') -> let () = check ctx e tp in tp'
           | _ -> failwith "Applying a non-function!")


To infer a type for an application, we infer a function type for the head, and then check the argument against the argument type of the function, returning the result type of the function.

        | body when is_check body -> failwith "Cannot synthesize type for checking term"
        | _ -> assert false
    end


As you can see, this is more or less what you'd write without any binding machinery – there are a few more annoying Tm constructors, but that's about it. The key point is that the type of substitution is pure, without the need for a context monad, name supply, or similar. That leaves us free to engineer the context monad as needed for the typechecker, without having to work around the requirements of the term representation.