In my last post, I showed how you could implement the Int construction over resumptions, to derive a higher-order programming language from a first-order one. However, this is not yet a practical tool. The code I gave presented the language as a bunch of *combinators*, which rapidly become quite painful to write serious programs with. Most of us prefer the lambda calculus to SKI combinators for good reason --- programs with variables and binders are much easier to read, since the code that actually does work is not obscured in a rat's nest of plumbing combinators that route data from one combinator to another.

I had hoped to rectify that in this post, by showing how to compile the linear lambda calculus into combinator terms. But when I started
to write it, I realized that I needed to write a warmup post first, which would introduce the basic ideas in a simpler setting. So I'll defer compiling linear types for a little while, and show how to embed *total functional programming* into OCaml.

Now, at first, this might sound completely impossible. OCaml is an impure programming language, in which every function potentially has side effects (like I/O, state manipulation, exceptions, and concurrency). How could it ever be possible to embed a pure language into an impure one?
The answer to this problem is *data abstraction*. ML has a module system that permits constructing abstract data types, and so we can give an implementation of an abstract data type of total programs, and then use type abstraction to ensure that we can never construct a program that fails to terminate.
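Here is the trick in miniature (a toy example of mine, not from the post): an abstract type whose signature only exports invariant-preserving operations, so clients can never build a value violating the invariant. The `Even` module and its names are hypothetical, chosen just for illustration.

```ocaml
(* A miniature version of the data-abstraction trick: the signature hides
   the representation, so clients can only build values that satisfy the
   invariant (here: evenness), just as the System T interface below only
   lets clients build terminating programs. *)
module Even : sig
  type t
  val zero : t
  val add2 : t -> t
  val to_int : t -> int
end = struct
  type t = int            (* invariant: always even *)
  let zero = 0
  let add2 n = n + 2
  let to_int n = n
end

(* clients can reach 4, but have no way to construct an odd Even.t *)
let four = Even.(add2 (add2 zero))
let () = Printf.printf "%d\n" (Even.to_int four)
```

Nothing outside the module can see that `Even.t` is an `int`, so the invariant is enforced purely by the type system, with no run-time checks.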

The total language I'll use in this example is Goedel's *System T*, which is arguably the first proper total functional programming language. He extended the simply-typed lambda calculus with a type of natural numbers, and iteration over them (logicians call this system "higher-type arithmetic"). This system is remarkably expressive --- for example, you can write the Ackermann function in it --- but every function in it still terminates. He introduced this system as part of his famous Dialectica (or functional) interpretation of logic. This interpretation deserves its own series of posts, exploring its applications to functional programming. But that's a major digression, so in the meantime you'll have to content yourself with Andrej Bauer's implementation of it in Coq.

So now let's return to System T and its denotational semantics. The language is basically the simply-typed lambda calculus extended with natural numbers and iteration over them. $$ \newcommand{\zero}{\mathsf{z}} \newcommand{\succ}[1]{\mathsf{s}({#1})} \newcommand{\iter}[4]{\mathsf{iter}({#1}, \zero \to {#2}, \succ{#3} \to {#4})} \begin{array}{lcl} A & ::= & 1 \bnfalt A \times B \bnfalt A \to B \bnfalt \N \\ e & ::= & () \bnfalt (e, e) \bnfalt \fst{e} \bnfalt \snd{e} \bnfalt \fun{x}{e} \bnfalt e \; e' \\ & | & \zero \bnfalt \succ{e} \bnfalt \iter{e}{e_z}{x}{e_s} \bnfalt x \bnfalt e : A \\ \Gamma & ::= & \cdot \bnfalt \Gamma, x:A \end{array} $$ $$ \begin{array}{cc} \frac{ x:A \in \Gamma } { \judge{\Gamma}{x}{A} } & \frac{ } {\judge{\Gamma}{()}{1}} \\[1em] \frac{\judge{\Gamma}{e_1}{A_1} \qquad \judge{\Gamma}{e_2}{A_2}} {\judge{\Gamma}{(e_1, e_2)}{A_1 \times A_2}} & \frac{\judge{\Gamma}{e}{A_1 \times A_2}} {\judge{\Gamma}{\pi_i(e)}{A_i}} \\[1em] \frac{\judge{\Gamma, x:A}{e}{B}} {\judge{\Gamma}{\fun{x}{e}}{A \to B}} & \frac{\judge{\Gamma}{e}{A \to B} \qquad \judge{\Gamma}{e'}{A}} {\judge{\Gamma}{e\;e'}{B}} \\[1em] \frac{ } {\judge{\Gamma}{\zero}{\N}} & \frac{\judge{\Gamma}{e}{\N} } {\judge{\Gamma}{\succ{e}}{\N}} \\[1em] \frac{\begin{array}{l} \judge{\Gamma}{e}{\N} \\ \judge{\Gamma}{e_z}{A} \\ \judge{\Gamma, x:A}{e_s}{A} \end{array}} {\judge{\Gamma}{\iter{e}{e_z}{x}{e_s}}{A}} & \frac{\judge{\Gamma}{e}{A}} {\judge{\Gamma}{(e:A)}{A}} \end{array} $$

So $\zero$ and $\succ{e}$ are the zero and successor operations, and the $\iter{e}{e_z}{x}{e_s}$ operation is essentially a fold operation over natural numbers. You give it a natural $e$, and a term $e_z$ of type $A$ in case $e$ is zero, and an operation $e_s$ that takes an $A$ and produces a new $A$ to handle the successor case.
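For instance (my example, not part of the calculus definition), addition arises by iterating the successor:
$$
\mathsf{plus} \;=\; \fun{m}{\fun{n}{\iter{m}{n}{x}{\succ{x}}}}
$$
so that $\mathsf{plus}\;(\succ{\succ{\zero}})\;n$ unfolds to $\succ{\succ{n}}$.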

This language has a denotational semantics in any cartesian closed category with a natural numbers object. It
looks like this:
$$
\begin{array}{lcl}
\interp{\judge{\Gamma}{x}{A}} & = & \pi^\Gamma_x \\
\interp{\judge{\Gamma}{()}{1}} & = & !_\Gamma \\
\interp{\judge{\Gamma}{(e, e')}{A \times B}} & = &
\left\langle \interp{\judge{\Gamma}{e}{A}},
\interp{\judge{\Gamma}{e'}{B}}
\right\rangle
\\
\interp{\judge{\Gamma}{\fst{e}}{A}} & = &
\interp{\judge{\Gamma}{e}{A \times B}}; \pi_1
\\
\interp{\judge{\Gamma}{\snd{e}}{B}} & = &
\interp{\judge{\Gamma}{e}{A \times B}}; \pi_2
\\
\interp{\judge{\Gamma}{\fun{x}{e}}{A \to B}} & = & \lambda(\interp{\judge{\Gamma, x:A}{e}{B}}) \\
\interp{\judge{\Gamma}{e\;e'}{B}} & = & \left\langle\interp{\judge{\Gamma}{e}{A \to B}},
\interp{\judge{\Gamma}{e'}{A}}
\right\rangle;
\mathit{eval}
\\
\interp{\judge{\Gamma}{\zero}{\N}} & = & !_\Gamma; \mathit{z} \\
\interp{\judge{\Gamma}{\succ{e}}{\N}} & = & \interp{\judge{\Gamma}{e}{\N}}; \mathit{s} \\
\interp{\judge{\Gamma}{\iter{e}{e_z}{x}{e_s}}{A}} & = &
\left\langle id_\Gamma, \interp{\judge{\Gamma}{e}{\N}}
\right\rangle;
\iota(\interp{\judge{\Gamma}{e_z}{A}},
\interp{\judge{\Gamma, x:A}{e_s}{A}})
\end{array}
$$
The basic idea is that a well-typed term $\judge{\Gamma}{e}{A}$ is a morphism $\interp{\Gamma} \to A$, where
$$
\begin{array}{lcl}
\interp{\cdot} & = & 1 \\
\interp{\Gamma, x:A} & = & \interp{\Gamma} \times A
\end{array}
$$
In other words, we are representing the *environment* of a term as a *tuple* of values of
the appropriate type. Therefore, the operation $\pi_x$ projects the $x$-th component out of a nested tuple, so that
if $x:A \in \Gamma$, then:
$$
\begin{array}{lcl}
\pi^\Gamma_x & \in & \interp{\Gamma} \to A \\
\pi^{(\Gamma, x:A)}_x & = & \pi_2 \\
\pi^{(\Gamma, y:B)}_x & = & \pi_1 ; \pi^\Gamma_x
\end{array}
$$
(As an aside, this is quite closely connected with de Bruijn indices.)
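For example, if $\Gamma = \cdot, x:A, y:B$, then $\interp{\Gamma} = (1 \times A) \times B$, and the lookup of $x$ unfolds as
$$
\pi^\Gamma_x \;=\; \pi_1 ; \pi^{(\cdot, x:A)}_x \;=\; \pi_1 ; \pi_2
$$
which first discards the binding for $y$, and then projects out the value of $x$.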

Here are the notational conventions I'm using.

- $f; g$ is composition in the diagrammatic order. That is, if $f : A \to B$ and $g : B \to C$, then $f; g : A \to C$.
- $!_A$ is the unique map $A \to 1$.
- If $f : A \to X$ and $g : A \to Y$, then $\left\langle f, g \right\rangle$ is the evident map $A \to X \times Y$.
- If $f : A \times B \to C$, then $\lambda(f)$ is the curried morphism of type $A \to (B \Rightarrow C)$.
- $\mathit{eval}$ is the application operator of type $(A \Rightarrow B) \times A \to B$.
- For natural numbers, $z : 1 \to \N$ and $s : \N \to \N$ are the zero and successor morphisms.
- If $f : A \to B$ and $g : A \times B \to B$, then $\iota(f, g) : A \times \N \to B$ is the iteration operator.
My specification of the universal property of natural numbers (i.e. $\iota$) is a little idiosyncratic, and is actually an equivalent variant of the official definition of parameterized natural numbers objects in category theory. This formulation is a little easier to use in programming and semantics.
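Spelled out as equations (one way to state this variant; it agrees with the recursive implementation later in the post), $\iota(f, g)$ is the unique map satisfying:
$$
\begin{array}{lcl}
\left\langle id_A, !_A; z \right\rangle; \iota(f,g) & = & f \\
(id_A \times s); \iota(f,g) & = & \left\langle \pi_1, \iota(f,g) \right\rangle; g
\end{array}
$$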

So now, let's introduce an OCaml abstract data type of programs from System T.

```ocaml
module type T = sig
  type ('a,'b) hom
  type nat

  val id : ('a, 'a) hom
  val compose : ('a, 'b) hom -> ('b, 'c) hom -> ('a, 'c) hom

  val unit : ('a, unit) hom

  val pair : ('a,'b) hom -> ('a,'c) hom -> ('a, 'b * 'c) hom
  val fst : ('a * 'b, 'a) hom
  val snd : ('a * 'b, 'b) hom

  val curry : ('a * 'b, 'c) hom -> ('a, 'b -> 'c) hom
  val eval : (('a -> 'b) * 'a, 'b) hom

  val zero : (unit, nat) hom
  val succ : (nat, nat) hom
  val iter : ('a, 'b) hom -> ('a * 'b,'b) hom -> ('a * nat, 'b) hom

  val run : (unit, nat) hom -> int
end
```

What we've done is to introduce a new type `('a,'b) hom`, which is the type of morphisms from `'a` to `'b`. We then introduce identity and composition morphisms, and the categorical operations for each of the data types in our language. We have the terminal map `unit`, the pairing (`pair`) and projection (`fst` and `snd`) morphisms, currying (`curry`) and evaluation (`eval`) maps, and the zero (`zero`), successor (`succ`), and iteration (`iter`) operators. Our interface also has a `run` operation, which lets us observe the results of running a closed program of type `nat`.

Strictly speaking, there's a slightly abusive pun in this interface. I'm defining the morphisms of a category, and I'm reusing the syntax of OCaml types to name the objects of the semantic category of types of T. Since OCaml does not have a mechanism for kind-level data, there is no alternative to this abuse, but it's worth noting in case you want to do something like this in a language (like Agda or Haskell) which does let you define datakinds.

Now, we can implement this module.

```ocaml
module Goedel : T = struct
  type ('a, 'b) hom = 'a -> 'b
  type nat = Z | S of nat

  let id x = x
  let compose f g a = g (f a)

  let unit _ = ()

  let pair f g a = (f a, g a)
  let fst (a, b) = a
  let snd (a, b) = b

  let curry f a = fun b -> f (a, b)
  let eval (f, v) = f v

  let zero () = Z
  let succ n = S n

  let rec iter z s (a, n) =
    match n with
    | Z -> z a
    | S n -> s (a, iter z s (a, n))

  let run tm =
    let rec loop = function
      | Z -> 0
      | S n -> 1 + loop n
    in
    loop (tm ())
end
```

There are no real surprises in this implementation module. We're representing T functions with OCaml functions, T pairs with OCaml pairs, and natural numbers with a new datatype of Peano style numbers. Iteration is just the obvious recursive definition. The point is that ML's type abstraction will protect us from constructing bad elements of the hom type, because clients can only construct values of hom-type using the operations we export.
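To see the combinators in action before we add any surface syntax, here is a small self-contained check (my example, with the implementation inlined so the snippet stands alone): we build the numerals 2 and 3 as closed terms, and add them with `iter`.

```ocaml
(* Inlined copy of the implementation above (only the operations we need). *)
module Goedel = struct
  type ('a, 'b) hom = 'a -> 'b
  type nat = Z | S of nat
  let id x = x
  let compose f g a = g (f a)
  let pair f g a = (f a, g a)
  let snd (_, b) = b
  let zero () = Z
  let succ n = S n
  let rec iter z s (a, n) = match n with
    | Z -> z a
    | S n -> s (a, iter z s (a, n))
  let run tm =
    let rec loop = function Z -> 0 | S n -> 1 + loop n in
    loop (tm ())
end

open Goedel

(* the numerals 2 and 3 as closed terms of type (unit, nat) hom *)
let two   = compose zero (compose succ succ)
let three = compose zero (compose succ (compose succ succ))

(* 2 + 3: pair the (unit) environment with the numeral 3, then iterate
   over it: apply successor three times, starting from two *)
let five = compose (pair id three) (iter two (compose snd succ))

let () = Printf.printf "%d\n" (run five)   (* prints 5 *)
```

Even this tiny program shows why combinator plumbing gets painful: the `pair id three` step exists only to route the iteration argument into place.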

One caveat is that this data abstraction actually relies on strictness. If we had tried to define this ADT in Haskell, we would lose the termination guarantees, because under a lazy semantics the nonterminating computation inhabits every type, no matter what. For example, we could define:

```haskell
bad :: Hom () Nat
bad = compose bad succ
```

The fact that call-by-value and call-by-name can interpret each other via a CPS-translation means that it ought to be possible to make this kind of data abstraction work in Haskell, but I don't see how, offhand.

At this point we're basically where we were in the last post, but with a different language. We've got a denotational
semantics, and a combinator-based implementation of it --- but we'd really like to use some of the great programming language
advances of the last fifty years, like *variables*.

So, let's implement this as an OCaml extension. I'll use Camlp4, but the same tricks should work with (e.g.) Racket's macro system, or with Template Haskell; if anyone ports this code, please post a link in the comments.

If you're copying this code into a toplevel, here are the incantations you'll need to convince the OCaml toplevel to accept quasiquotes and grammar extensions.

```ocaml
#load "dynlink.cma";;
#load "camlp4oof.cma";;

open Camlp4.PreCast
let _loc = Camlp4.PreCast.Loc.ghost
```

Our first step will be to introduce a datatype for T-level types and terms. There's nothing too surprising in the datatype declarations. The only extra (relative to the semantics above) is that I've thrown in let-binding to make the examples more readable.

```ocaml
module Term = struct
  type var = string

  type tp = Nat | One | Prod of tp * tp | Arrow of tp * tp

  type term =
    | Var of var
    | Let of var * term * term
    | Lam of var * term
    | App of term * term
    | Unit
    | Pair of term * term
    | Fst of term
    | Snd of term
    | Zero
    | Succ of term
    | Iter of term * term * var * term
    | Annot of term * tp

  let rec print_tp () tp =
    let print fmt = Printf.sprintf fmt in
    match tp with
    | One -> "unit"
    | Nat -> "nat"
    | Prod(tp1, tp2) -> print "(%a * %a)" print_tp tp1 print_tp tp2
    | Arrow(Arrow(_, _) as tp1, tp2) -> print "(%a) -> %a" print_tp tp1 print_tp tp2
    | Arrow(tp1, tp2) -> print "%a -> %a" print_tp tp1 print_tp tp2
```

We're going to follow a Lisp-like strategy in embedding the syntax of our language. OCaml already has a syntax for types and programs, so we might as well re-use that for parsing our language. So our "parser" is just a function that takes an OCaml AST, and converts it into elements of the `term` and `tp` datatypes. Camlp4 lets us match quotations in pattern match expressions, so this is really quite easy. Here, `<:ctyp< t >>` just matches an AST with the same shape as the term `t`. Holes inside of a pattern can be bound to variables using the notation `$x$`. We can also do the same thing with expressions, using the quotation form `<:expr< e >>`. As a result, all of the issues of precedence and so on are handled automatically, with exactly the same rules as OCaml. (The one limitation is that the syntax of the primitive iterator is a little bit ugly.)

```ocaml
  let rec tp = function
    | <:ctyp< nat >> -> Nat
    | <:ctyp< unit >> -> One
    | <:ctyp< $a$ -> $b$ >> -> Arrow(tp a, tp b)
    | <:ctyp< $a$ * $b$ >> -> Prod(tp a, tp b)
    | _ -> failwith "unhandled type expression"

  let rec term = function
    | <:expr< () >> -> Unit
    | <:expr< ($e1$, $e2$) >> -> Pair(term e1, term e2)
    | <:expr< (fst $e$) >> -> Fst (term e)
    | <:expr< (snd $e$) >> -> Snd (term e)
    | <:expr< Zero >> -> Zero
    | <:expr< Succ($e$) >> -> Succ(term e)
    | <:expr< iter (match $en$ with | Zero -> $ez$ | Succ $lid:x$ -> $es$) >> ->
        Iter(term en, term ez, x, term es)
    | <:expr< let $lid:x$ : $t$ = $e1$ in $e2$ >> -> Let(x, Annot(term e1, tp t), term e2)
    | <:expr< let $lid:x$ = $e1$ in $e2$ >> -> Let(x, term e1, term e2)
    | <:expr< ($e$ : $t$) >> -> Annot(term e, tp t)
    | <:expr< $lid:x$ >> -> Var x
    | <:expr< fun $lid:x$ -> $e$ >> -> Lam(x, term e)
    | <:expr< ($e1$ $e2$) >> -> App(term e1, term e2)
    | _ -> failwith "Unknown expression"
end
```

We will also want to construct terms of ML type `('a,'b) hom`, using references to the module we implemented. The `Quote` module gives a set of functions which construct AST nodes corresponding to each of the combinators in our interface. The only interesting case is `find`, which receives a context (a list of variables and their types), and constructs the appropriate projection (corresponding to $\pi^\Gamma_x$ in our denotational semantics). Note that this function can fail if we try to look up a variable not bound in the context.

```ocaml
module Quote = struct
  let id = <:expr< Goedel.id >>
  let compose f g = <:expr< Goedel.compose $f$ $g$ >>
  let unit = <:expr< Goedel.unit >>
  let fst = <:expr< Goedel.fst >>
  let snd = <:expr< Goedel.snd >>
  let pair f g = <:expr< Goedel.pair $f$ $g$ >>
  let prod f g = <:expr< Goedel.prod $f$ $g$ >>
  let curry f = <:expr< Goedel.curry $f$ >>
  let eval = <:expr< Goedel.eval >>
  let zero = <:expr< Goedel.zero >>
  let succ = <:expr< Goedel.succ >>
  let iter z s = <:expr< Goedel.iter $z$ $s$ >>

  let rec find x = function
    | [] -> raise Not_found
    | (x', t) :: ctx' when x = x' -> <:expr< Goedel.snd >>
    | (x', t) :: ctx' -> <:expr< Goedel.compose Goedel.fst $find x ctx'$ >>
end
```

Now, what we'll do is to write a typechecker that implements the typing rules I gave above. The nicest way of writing typecheckers is (IMO) in a monadic style. The monad I'll use is a reader plus error monad. The context is the read-only state, the `with_hyp` operation temporarily extends the context with a hypothesis within a block, and `lookup` returns the projection code and the type for a variable. The `error` function takes a format string and returns an error object, using OCaml's nifty CPS-style output formatter.

```ocaml
module Context = struct
  open Term

  type ctx = (var * tp) list
  type 'a result = Error of string | Done of 'a
  type 'a t = ctx -> 'a result

  let return v = fun ctx -> Done v

  let (>>=) m f = (fun ctx ->
    match m ctx with
    | Done v -> f v ctx
    | Error s -> Error s)

  let with_hyp hyp cmd = fun ctx -> cmd (hyp :: ctx)

  let lookup x ctx =
    try Done (Quote.find x ctx, List.assoc x ctx)
    with Not_found -> Error (Printf.sprintf "'%s' unbound" x)

  let error fmt = Printf.ksprintf (fun msg ctx -> Error msg) fmt

  let run cmd = cmd []
end
```
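For readers unfamiliar with the pattern, here is a distilled, standalone version of the same reader-plus-error monad (my sketch, with a simplified context mapping variable names to type names as plain strings; the real `Context` module works over `(var * tp) list` and also returns projection code):

```ocaml
(* A reader+error monad: computations are functions from a read-only
   context to either a result or an error message. *)
module Ctx = struct
  type ctx = (string * string) list
  type 'a result = Error of string | Done of 'a
  type 'a t = ctx -> 'a result

  let return v = fun _ctx -> Done v

  (* sequencing threads the context through, short-circuiting on error *)
  let ( >>= ) m f = fun ctx ->
    match m ctx with
    | Done v -> f v ctx
    | Error s -> Error s

  (* run a sub-computation in a temporarily extended context *)
  let with_hyp hyp cmd = fun ctx -> cmd (hyp :: ctx)

  let lookup x = fun ctx ->
    try Done (List.assoc x ctx)
    with Not_found -> Error (Printf.sprintf "'%s' unbound" x)

  let run cmd = cmd []
end

(* looking up a variable under a temporarily extended context succeeds... *)
let ok =
  Ctx.(run (with_hyp ("x", "nat")
              (lookup "x" >>= fun tp -> return ("x has type " ^ tp))))

(* ...while looking up an unbound variable reports an error *)
let bad = Ctx.(run (lookup "y"))
```

Note how the context never needs to be mentioned in client code: `with_hyp` and `lookup` are the only operations that touch it.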

Now, once we've defined the context monad, we can actually write the typechecker. Our typechecker is actually an *elaborator*, which is a common pattern in type-based compilation: we don't just typecheck a term, we also produce a term in the output language as part of the typechecking process. Furthermore, we use *bidirectional typechecking*, in which we split the typechecker into two functions, `check` and `synth`. The `check` function takes a term and the type to check it against, and returns the elaborated code. The `synth` function just takes a term, and infers a type for that term in addition to emitting the code for that term.
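In judgment form (my notation; the post itself only gives the code), the two functions implement mutually recursive judgments $\Gamma \vdash e \Leftarrow A$ (checking) and $\Gamma \vdash e \Rightarrow A$ (synthesis). For example, application synthesizes from the head and checks the argument, while an annotation turns a checkable term into a synthesizable one:
$$
\frac{\Gamma \vdash e_1 \Rightarrow A \to B \qquad \Gamma \vdash e_2 \Leftarrow A}
     {\Gamma \vdash e_1\;e_2 \Rightarrow B}
\qquad
\frac{\Gamma \vdash e \Leftarrow A}
     {\Gamma \vdash (e : A) \Rightarrow A}
$$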

```ocaml
module Elaborate = struct
  open Term
  open Quote
  open Context
```

The `check` function has the type `Term.term -> Term.tp -> Ast.expr Context.t`. It compares introduction forms (such as lambdas and pairs) against their expected types (arrow and product types) and complains if they don't line up. Note that because we check a function against a type, we know what type to ascribe to the variable that goes into the context. Likewise, we require let-bindings to be bound to an inferrable term in order to figure out their type.

You can also see that the value we return uses the operators from the `Quote` module to construct terms to return. It lines up exactly with the denotational semantics, illustrating that semantics can profitably be viewed as a guide to the design patterns of functional programming.

```ocaml
  let rec check e tp =
    match (e, tp) with
    | Lam(x, e), Arrow(tp1, tp2) ->
        with_hyp (x, tp1)
          (check e tp2 >>= (fun t ->
           return (curry t)))
    | Lam(_, _), tp -> error "expected function type, got '%a'" print_tp tp
    | Unit, One -> return unit
    | Unit, tp -> error "expected unit type, got '%a'" print_tp tp
    | Pair(e1, e2), Prod(tp1, tp2) ->
        check e1 tp1 >>= (fun t1 ->
        check e2 tp2 >>= (fun t2 ->
        return (pair t1 t2)))
    | Pair(_, _), tp -> error "expected product type, got '%a'" print_tp tp
    | Iter(en, ez, x, es), tp ->
        check en Nat >>= (fun tn ->
        check ez tp >>= (fun tz ->
        (with_hyp (x, tp) (check es tp)) >>= (fun ts ->
        return (compose (pair id tn) (iter tz ts)))))
    | Let(x, e1, e2), tp2 ->
        synth e1 >>= (fun (t1, tp1) ->
        (with_hyp (x, tp1) (check e2 tp2)) >>= (fun t2 ->
        return (compose (pair id t1) t2)))
    | _, _ ->
        synth e >>= (function
          | (t, tp') when tp = tp' -> return t
          | (_, tp') -> error "Expected type '%a', inferred type '%a'"
                          Term.print_tp tp Term.print_tp tp')
```

The `synth` function has type `Term.term -> (Ast.expr * Term.tp) Context.t`. Bidirectional typechecking experts and focalization ninjas will be puzzled to see `Zero` and `Succ` as having synthesized types, since they are introduction forms. (Normal people will regard this as entirely normal.) I now think the normal people are right, and that this is actually proof-theoretically justified, for reasons that would be a massive digression to go into. At any rate, you can see the dual pattern to checking here --- if you have an application, and you can infer the type of the head, then you have a type to check the argument against.
```ocaml
  and synth e =
    match e with
    | Var x -> lookup x
    | Zero -> return (compose unit zero, Nat)
    | Succ e1 ->
        check e1 Nat >>= (fun t1 ->
        return (compose t1 succ, Nat))
    | App(e1, e2) ->
        synth e1 >>= (function
          | t1, Arrow(tp2, tp) ->
              check e2 tp2 >>= (fun t2 ->
              return (compose (pair t1 t2) eval, tp))
          | t1, tp_bad -> error "Expected function, got '%a'" Term.print_tp tp_bad)
    | Fst e' ->
        synth e' >>= (function
          | t', Prod(tp1, tp2) -> return (compose t' fst, tp1)
          | t', tp_bad -> error "Expected product, got '%a'" Term.print_tp tp_bad)
    | Snd e' ->
        synth e' >>= (function
          | t', Prod(tp1, tp2) -> return (compose t' snd, tp2)
          | t', tp_bad -> error "Expected product, got '%a'" Term.print_tp tp_bad)
    | Let(x, e1, e2) ->
        synth e1 >>= (fun (t1, tp1) ->
        (with_hyp (x, tp1) (synth e2)) >>= (fun (t2, tp2) ->
        return (compose (pair id t1) t2, tp2)))
    | Annot(e, tp) ->
        check e tp >>= (fun t -> return (t, tp))
    | _ -> error "checking term in synthesizing position"
end
```

Now we can perform the one bit of metaprogramming magic which lets us splice our DSL into ordinary OCaml terms. This extends OCaml's grammar with the `T(t)` construct, which constructs a term of type `(unit, 'a) hom` from the lambda-term `t`.

```ocaml
module Extension = struct
  open Syntax

  EXTEND Gram
    expr: LEVEL "top"
      [[ UIDENT "T"; tm = expr ->
           match Context.run (Elaborate.synth (Term.term tm)) with
           | Context.Done (e, _) -> e
           | Context.Error msg -> Loc.raise _loc (Failure msg) ]];
  END
end
```

Here are a few examples. (I had to resist the temptation mightily, but I managed not to give factorial as an example.)

```ocaml
module Test = struct
  let e1 = T(let sum : nat -> nat -> nat =
               fun x y -> iter (match x with
                                | Zero -> y
                                | Succ n -> Succ n)
             in
             sum (Succ (Succ (Zero))) (Succ (Succ (Succ Zero))))

  let e2 = T(let sum : nat -> nat -> nat =
               fun x y -> iter (match x with
                                | Zero -> y
                                | Succ n -> Succ n)
             in
             let mult : nat -> nat -> nat =
               fun x y -> iter (match x with
                                | Zero -> Zero
                                | Succ n -> sum y n)
             in
             mult (Succ (Succ (Zero))) (Succ (Succ (Succ Zero))))
end
```

You can run these using the `Goedel.run` function. Multiplying 2 and 3 is not the world's most exciting computation, but (a) we had a type-based guarantee that this multiplication would terminate, and (b) we embedded this into OCaml, where it is decidedly *not* the case that all functions terminate. IMO, that *is* exciting!

For comparison, here's what `e2` looks like as a combinator expression (with all the extra `Goedel.blah` prefixes stripped out, so that this is actually *less* noisy than it could be):

```ocaml
let e2 =
  compose
    (pair id (curry (curry (compose (pair id (compose fst snd))
                              (iter snd (compose snd succ))))))
    (compose
       (pair id (curry (curry (compose (pair id (compose fst snd))
                                 (iter (compose unit zero)
                                    (compose (pair (compose (pair (compose fst (compose fst (compose fst snd)))
                                                      (compose fst snd))
                                                eval)
                                          snd)
                                       eval))))))
       (compose
          (pair (compose (pair snd (compose (compose (compose unit zero) succ) succ)) eval)
             (compose (compose (compose (compose unit zero) succ) succ) succ))
          eval))
```

It's probably possible to make this a bit more readable with judicious use of infix operators, but frankly there's no real comparison in terms of readability --- having variables is a *massive* win, even though there is a totally compositional translation from lambda-terms into combinator expressions. It's actually not outside the realm of possibility that someone would want to use the `T()` macro, but basically no one would want to write the desugared version. (And if they did, whoever had to maintain that code would probably throw it out and rewrite it...)

A great post, as always. It inspired three comments from me:

1. It is bad practice to destructively mutate the OCaml grammar. The Camlp4 quotation mechanism is here as a simple, coherent, modular and composable mechanism to embed arbitrary syntaxes/preprocessing into the expression or pattern language. Why instead overload a "T(expr)" syntax in a way that is ambiguous with constructor application? I'd rather write "let e1 = <:godel< ... >>".

2. I don't totally agree with your remark on datakinds: it would not be enough to define the product and arrows of the category as arbitrary data living at the type level, you also need a way (eg. type families) to project those terms back into the world of OCaml types inside the abstraction boundary, to be able to provide an implementation for those primitives. My personal choice would have been some `type ('a, 'b) pair` and `type ('a, 'b) arr` datatype, privately implemented by their projected-back semantics.

3. I'm wondering if it would be possible to distinguish the cartesian structure used to represent the products of the language (pair, fst, snd) and the cartesian structure used to represent environments (curry, eval). You are making the same simplifying choice as the Categorical Abstract Machine (CAM), and the result is a relatively inefficient implementation (though representing environments with lists or nested tuples is in practice reasonable for simple examples). Having a distinction would help to experiment with other environment structures. For example, if we could define ('a, 'b) envpair as the first-class polymorphic type ('c. 'a -> 'b -> 'c), `curry` would just be the identity and `eval` just application. That could be quite fun.

Bob Harper wrote the following comment, which got eaten by Google:

This example appears in skeletal form in my Programming in Standard ML, Chapter 32, under the name "Total Integer Function". It was introduced to illustrate that abstract types can enforce conditions that cannot be checked at run-time. I was attempting to stamp out the pernicious misconceptions about run-time checking using "assert" and suchlike.

Answering Gabriele above (I know, I'm late).

1. Don't you want ('c. ('a -> 'b -> 'c) -> 'c), or am I misguessing the syntax? ('c. 'a -> 'b -> 'c) sounds like an empty type, if it's just Haskell's "forall c. a -> b -> c". But then, curry doesn't seem the identity and eval doesn't seem application. (I might be missing something though).

2. Finite products per se would be fine for efficiency (a projection from there is O(1), as long as you don't encode finite products as nested binary ones). The problem seems to me more with the closed structure — I barely know how to write eval and curry for finite products, let alone how to make those efficient.

I'm trying to rewrite your code in Haskell, but I'm hitting this error: https://stackoverflow.com/questions/53349684/systemt-compiler-and-dealing-with-infinite-types-in-haskell

The basic issue, IIUC, is that this blog post requires staging; the typechecker does not guarantee that the "macros" produce well-typed code; instead, the macros, if buggy, might produce ill-typed OCaml terms, but that's caught by the OCaml typechecker.

Without staging, you can't just write a function `term -> ('a, 'b) hom` (since `'a` and `'b` depend on the actual term), you need to use (some encoding of) dependent types.