Friday, August 10, 2018

Polarity and bidirectional typechecking

This past July, Max New asked me about the relationship between bidirectional typechecking and semantic notions like polarization and call-by-push-value. I told him it was complicated, and that I would try to write something about the relationship. I was reminded of this because, a couple of days ago, Conor McBride wrote a blog post laying out how he approaches bidirectional type systems. This is a fantastic post, and you should read it (probably twice or more, actually).

In his post, he remarks:

I like my type systems to be bidirectional. Different people mean different things by the word “bidirectional”, and I’m one of them. Some people relate the notion to “polarity”, and I’m not one of them.

I'm someone who would like to join the club Conor abjures, but I don't know how to!

It sure looks like bidirectional typechecking has a deep relationship to polarization or call-by-push-value, but I don't know how to make it work correctly. So, in this post I'll explain why people think it has some deep semantic import, and then I'll talk about the mismatch that creates a problem I don't know the right way to handle. (Also, I apologize in advance for the lack of citations -- I really want to get this post out the door. If I didn't cite a paper you like (or even wrote), please link to it in the comments.)

The reason that people think that bidirectional typechecking must have a deep semantic meaning arises from how it works out in the simply-typed lambda calculus. Let's try writing some things down and seeing what happens. First, recall that bidirectional typechecking categorizes all terms as either "checking" or "synthesizing" and introduces two mutually recursive typing judgements for them.

\[\newcommand{\bnfalt}{\;\;|\;\;} \newcommand{\To}{\Rightarrow} \newcommand{\From}{\Leftarrow} \newcommand{\rule}[2]{\frac{\displaystyle \begin{matrix} #1 \end{matrix}}{\displaystyle #2}} \newcommand{\check}[3]{{#1} \vdash {#2} \From {#3}} \newcommand{\synth}[3]{{#1} \vdash {#2} \To {#3}} \newcommand{\lam}[1]{\lambda {#1}.\;} \newcommand{\inj}[2]{\mathsf{in}_{#1}({#2})} \newcommand{\case}[5]{\mathsf{case}({#1}, \inj{1}{#2} \to {#3}, \inj{2}{#4} \to {#5})} \newcommand{\match}[2]{\mathsf{match}\;{#1}\;\mathsf{of}\;[ #2 ]} \newcommand{\arm}[2]{{#1} \to {#2}} \newcommand{\downshift}[1]{\downarrow{#1}} \newcommand{\upshift}[1]{\uparrow{#1}} \newcommand{\thunk}[1]{\left\{{#1}\right\}} \newcommand{\return}[1]{\mathsf{return}\;#1} \newcommand{\fun}[1]{\lambda\;#1} \newcommand{\checkn}[3]{{#1} \rhd {#2} \From {#3}} \newcommand{\checkp}[2]{{#1} \leadsto {#2}} \newcommand{\spine}[4]{{#1} \vdash {#2} : {#3} \gg {#4}} \newcommand{\unit}{()} \]

\[ \begin{array}{llcl} \mbox{Types} & A & ::= & b \bnfalt A \to B \\ \mbox{Contexts} & \Gamma & ::= & \cdot \bnfalt \Gamma, x:A \\ \mbox{Checking} & e & ::= & \lam{x}{e} \bnfalt t \\ \mbox{Synthesizing} & t & ::= & x \bnfalt t\;e \bnfalt e : A \\ \end{array} \]

We'll start with a pretty minimal calculus -- we've got a base type \(b\), and functions. Contexts work as usual, giving variables their types, but terms are divided into checking terms and synthesizing terms. Checking terms are either introduction forms (just lambda-abstraction, here) or any synthesizing term (the intuition is that if we can infer a type for a term, we can surely check its type as well). Synthesizing terms are either variables (we can just look up their type in the context), applications, or explicitly annotated terms.

\[\begin{matrix} \rule{ \check{\Gamma, x:A}{e}{B} } { \check{\Gamma}{\lam{x}{e}}{A \to B} } & \rule{ \synth{\Gamma}{t}{A \to B} & \check{\Gamma}{e}{A} } { \synth{\Gamma}{t\;e}{B} } \\[1em] \rule{ \synth{\Gamma}{t}{A} & A = B } { \check{\Gamma}{t}{B} } & \rule{ \check{\Gamma}{e}{A} } { \synth{\Gamma}{e : A}{A} } \\[1em] \rule{ x:A \in \Gamma } { \synth{\Gamma}{x}{A} } & \end{matrix}\]

You can see the intro/elim pattern in the first line -- when we check a lambda-abstraction (an introduction form) against a type \(A \to B\), we put the variable \(x\) in the context at type \(A\), and check the body at type \(B\). When we apply a function, we first infer a type \(A \to B\) for the function expression, which gives us a type \(A\) to check the argument against.
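If it helps to see the algorithm these rules describe, here is a small OCaml sketch of the two judgements. (All the names here are invented for the example; this is a sketch, not a production typechecker.)

type tp = Base | Arrow of tp * tp

type chk =                          (* checking terms e *)
  | Lam of string * chk
  | Syn of syn
and syn =                           (* synthesizing terms t *)
  | Var of string
  | App of syn * chk
  | Annot of chk * tp               (* e : A *)

type ctx = (string * tp) list

exception TypeError of string

(* Gamma |- e <= a : check e against the type a *)
let rec check (ctx : ctx) (e : chk) (a : tp) : unit =
  match e, a with
  | Lam (x, body), Arrow (a1, a2) -> check ((x, a1) :: ctx) body a2
  | Lam _, _ -> raise (TypeError "a lambda needs an arrow type")
  | Syn t, _ ->
      (* the checking/synthesis switch, with its equality test *)
      if synth ctx t = a then () else raise (TypeError "type mismatch")

(* Gamma |- t => a : synthesize a type for t *)
and synth (ctx : ctx) (t : syn) : tp =
  match t with
  | Var x ->
      (try List.assoc x ctx
       with Not_found -> raise (TypeError ("unbound variable " ^ x)))
  | App (fn, arg) ->
      (match synth ctx fn with
       | Arrow (a, b) -> check ctx arg a; b
       | Base -> raise (TypeError "applying a non-function"))
  | Annot (e, a) -> check ctx e a; a

The equality test in the Syn clause is the only place where two types are compared -- everywhere else, the type is either pushed in (checking) or read off (synthesis).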

So far, this is pretty standard stuff. Now, let's tweak the rules slightly.

\[\begin{matrix} \rule{ \check{\Gamma, x:A}{e}{B} } { \check{\Gamma}{\lam{x}{e}}{A \to B} } & \rule{ \synth{\Gamma}{t}{A \to B} & \check{\Gamma}{e}{A} } { \synth{\Gamma}{t\;e}{B} } \\[1em] \rule{ \color{red}{\synth{\Gamma}{t}{b}} } { \color{red}{\check{\Gamma}{t}{b}} } & \color{red}{\mbox{(No annotation rule)}} \\[1em] \rule{ x:A \in \Gamma } { \synth{\Gamma}{x}{A} } & \end{matrix}\]

Now, we've made two changes. First, we've deleted the annotation rule, and second, we've restricted the checking/synthesis switch to only occur at base types \(b\).

  1. The effect of deleting the annotation rule is that it is no longer possible to write beta-reducible terms. A term in the simply-typed lambda-calculus written \((\lam{x:b}{x})\;y\), which reduces to \(y\), would be written \((\lam{x}{x} : b \to b)\;y\) in a bidirectional system -- but without annotations, these terms can no longer be written.

  2. The effect of restricting the check/synth switch rule is that it is no longer possible to write eta-expandable terms. If \(f : b \to b \to b\) and \(x : b\), then the term \(f\;x\) would synthesize the type \(b \to b\) in the original system. However, it no longer typechecks in our restricted system. To make it work, we have to eta-expand the term, so that we write \(\lam{y}{f\;x\;y}\) instead. This now checks against \(b \to b\) as we expect.

So the joint effect of these two restrictions is that only beta-normal, eta-long terms typecheck. The reason these terms are so important is that any two beta-eta equal terms will have the same normal form. So having a simple, easy characterization of these normal forms is really great! Moreover, this characterization is easy to extend to products:

\[\begin{matrix} \rule{ \check{\Gamma, x:A}{e}{B} } { \check{\Gamma}{\lam{x}{e}}{A \to B} } & \rule{ \synth{\Gamma}{t}{A \to B} & \check{\Gamma}{e}{A} } { \synth{\Gamma}{t\;e}{B} } \\[1em] \rule{ } { \check{\Gamma}{()}{1} } & \mbox{(No unit elimination rule)} \\[1em] \rule{ \check{\Gamma}{e_1}{A_1} & \check{\Gamma}{e_2}{A_2} } { \check{\Gamma}{(e_1, e_2)}{A_1 \times A_2} } & \rule{ \synth{\Gamma}{e}{A_1 \times A_2} & i \in \{1,2\} } { \synth{\Gamma}{\pi_i(e)}{A_i} } \\[1em] \rule{ \synth{\Gamma}{t}{b} } { \check{\Gamma}{t}{b} } & \mbox{(No annotation rule)} \\[1em] \rule{ x:A \in \Gamma } { \synth{\Gamma}{x}{A} } & \end{matrix}\]

This type system still characterizes normal forms in the STLC with units and products. Adding these constructors starts to give us a pattern:

  1. Introduction forms (lambda-abstractions, pairs, units) are checking.
  2. Elimination forms (applications, projections) are synthesizing.

Since units + pairs + functions are syntax for everything in cartesian closed categories, this is actually pretty wonderful. We seem to have a simple rule for characterizing beta-normal, eta-long forms.

But what happens when we try to add sums to the language? Let's try to follow our recipe, and see what happens:

\[\begin{matrix} \rule{ \check{\Gamma}{e}{A_i} & i \in \{1,2\} } { \check{\Gamma}{\inj{i}{e}}{A_1 + A_2} } & \rule{ \synth{\Gamma}{t}{A_1 + A_2} & \check{\Gamma, x_1:A_1}{e_1}{C} & \check{\Gamma, x_2:A_2}{e_2}{C} } { \check{\Gamma}{\case{t}{x_1}{e_1}{x_2}{e_2}}{C} } \end{matrix}\]

The introduction form seems to work. The elimination form is a bit more complicated -- it's the same syntax as always, but the checking/synthesis moding is a bit subtle. The expectation created by units/pairs/functions would be that both the scrutinee and the whole case form should synthesize, but expecting two branches with different contexts (i.e., \(\Gamma, x_1:A_1\) for \(e_1\) and \(\Gamma, x_2:A_2\) for \(e_2\)) to synthesize the same type is morally dubious (e.g., it would not make sense in a dependently-typed setting). So we are led to say that case is checking, but that the scrutinee is synthesizing.

This imposes some restrictions on what does and doesn't count as a typeable term. For example, because \(\mathsf{case}\) is checking rather than synthesizing, we can never write an expression like:

\[a:((b \to A) + (b \to A)), x:b \vdash \case{a}{f}{f}{g}{g}\;x \From A\]

Instead of applying an argument to a case expression of function type, we have to push the arguments into the branches:

\[a:((b \to A) + (b \to A)), x:b \vdash \case{a}{f}{f\;x}{g}{g\;x} \From A\]

From the point of view of typing normal forms, this actually doesn't seem too bad, because most people would consider the second term simpler than the first, and so this gives us a "nicer" notion of normal form. However, this doesn't seem like a real explanation, since our rules permit things like the following:

\[\synth{f : b \to b, x:b+b}{f\;\case{x}{y}{y}{z}{z}}{b}\]
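Before moving on, here is the whole restricted system as a checker -- units, pairs, functions, and sums, with no annotation rule and the checking/synthesis switch confined to base type. (As before, the datatype and constructor names are invented, and this is a sketch rather than anything definitive.)

type tp = Base | Unit | Prod of tp * tp | Arrow of tp * tp | Sum of tp * tp

type chk =                                   (* checking terms *)
  | Lam of string * chk
  | UnitI                                    (* () *)
  | Pair of chk * chk
  | Inj of int * chk
  | Case of syn * (string * chk) * (string * chk)
  | Syn of syn
and syn =                                    (* synthesizing terms *)
  | Var of string
  | App of syn * chk
  | Proj of int * syn

type ctx = (string * tp) list

exception TypeError of string

let rec check (ctx : ctx) (e : chk) (a : tp) : unit =
  match e, a with
  | Lam (x, body), Arrow (a1, a2) -> check ((x, a1) :: ctx) body a2
  | UnitI, Unit -> ()
  | Pair (e1, e2), Prod (a1, a2) -> check ctx e1 a1; check ctx e2 a2
  | Inj (1, e), Sum (a1, _) -> check ctx e a1
  | Inj (2, e), Sum (_, a2) -> check ctx e a2
  | Case (t, (x1, e1), (x2, e2)), c ->
      (* case is checking; its scrutinee synthesizes *)
      (match synth ctx t with
       | Sum (a1, a2) ->
           check ((x1, a1) :: ctx) e1 c;
           check ((x2, a2) :: ctx) e2 c
       | _ -> raise (TypeError "case scrutinee must have a sum type"))
  | Syn t, Base ->
      (* the switch is only allowed at base type *)
      if synth ctx t = Base then () else raise (TypeError "type mismatch")
  | _ -> raise (TypeError "term does not check against this type")

and synth (ctx : ctx) (t : syn) : tp =
  match t with
  | Var x ->
      (try List.assoc x ctx
       with Not_found -> raise (TypeError ("unbound variable " ^ x)))
  | App (fn, arg) ->
      (match synth ctx fn with
       | Arrow (a, b) -> check ctx arg a; b
       | _ -> raise (TypeError "applying a non-function"))
  | Proj (i, t) ->
      (match synth ctx t with
       | Prod (a1, a2) -> if i = 1 then a1 else a2
       | _ -> raise (TypeError "projecting from a non-pair"))

Note that nothing stops a Case from sitting in argument position -- it just has to check against the argument type -- which is exactly how the term above gets accepted.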

To get to a better explanation before the heat death of the universe, I'm going to skip over about 20 years of research, and jump straight to polarized type theory.

\[\begin{matrix} \mbox{Positive Types} & P & ::= & 1 \bnfalt P \times Q \bnfalt P + Q \bnfalt \downshift{N} \\ \mbox{Negative Types} & N & ::= & P \to N \bnfalt \upshift{P} \\ \mbox{Values} & v & ::= & () \bnfalt (v,v) \bnfalt \inj{i}{v} \bnfalt \thunk{t} \\ \mbox{Spines} & s & ::= & \cdot \bnfalt v\;s \\[1em] \mbox{Terms} & t & ::= & \return{v} \bnfalt \fun{\overrightarrow{\arm{p_i}{t_i}}} \bnfalt \match{x \cdot s}{\overrightarrow{\arm{p_i}{t_i}}} \\ \mbox{Patterns} & p & ::= & () \bnfalt (p,p') \bnfalt \inj{i}{p} \bnfalt \thunk{x} \\[1em] \mbox{Contexts} & \Gamma,\Delta & ::= & \cdot \bnfalt \Gamma, x:N \\ \mbox{Typing Values} & & & \check{\Gamma}{v}{P} \\ \mbox{Typing Spines} & & & \spine{\Gamma}{s}{N}{M} \\ \mbox{Typing Terms} & & & \checkn{\Gamma}{t}{N} \\ \mbox{Typing Patterns} & & & \checkp{p:P}{\Delta} \\ \end{matrix}\]

The key idea in polarized type theory is to divide types into two categories, the positive types (sums, strict products, and suspended computations, denoted by \(P\)) and the negative types (basically, functions, denoted by \(N\)). Positive types are basically those that are eliminated with pattern matching, and the negative types are the ones that are eliminated by supplying arguments. Negative types can be embedded into positive types using the "downshift" type \(\downshift{N}\) (representing suspended computations), and positive types can be embedded into the negatives using the "upshift" \(\upshift{P}\) (denoting computations producing \(P\)'s).

The funny thing about this setup is that despite arising from meditation upon invariants of proof theory, we end up with a syntax that is much closer to practical functional programming languages than the pure typed lambda calculus! For example, syntax for polarized calculi tends to have pattern matching. However, one price of this is a proliferation of judgements. We usually end up introducing separate categories of values (for introducing positive types) and spines (argument lists for eliminating negative types), as well as terms (how to put values and spines together in computations, as well as introducing negative types) and patterns (how to eliminate positive types).
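Rendered as OCaml datatypes, this grammar looks like the following (the constructor names are invented for this sketch; note that the context really does only carry negative hypotheses):

type ptype =                          (* positive types P *)
  | One
  | Times of ptype * ptype
  | Plus of ptype * ptype
  | Down of ntype                     (* downshift: a suspended computation *)
and ntype =                           (* negative types N *)
  | Arrow of ptype * ntype
  | Up of ptype                       (* upshift: a computation returning a P *)

type value =                          (* values v *)
  | Unit
  | Pair of value * value
  | Inj of int * value
  | Thunk of term
and spine = value list                (* spines s: argument lists *)
and term =                            (* terms t *)
  | Return of value
  | Fun of (pat * term) list
  | Match of string * spine * (pat * term) list
and pat =                             (* patterns p *)
  | PUnit
  | PPair of pat * pat
  | PInj of int * pat
  | PThunk of string

type ctx = (string * ntype) list      (* contexts: negative hypotheses only! *)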

Now, let's talk through the rules. First up is the \(\check{\Gamma}{v}{P}\) judgement for checking the type of positive values.

\[\begin{matrix} \rule{} { \check{\Gamma}{()}{1} } & \rule{ \check{\Gamma}{v}{P} & \check{\Gamma}{v'}{Q} } { \check{\Gamma}{(v,v')}{P \times Q} } \\[1em] \rule{ \check{\Gamma}{v}{P_i} & i \in \{1,2\} } { \check{\Gamma}{\inj{i}{v}}{P_1 + P_2} } & \rule{ \checkn{\Gamma}{t}{N} } { \check{\Gamma}{\thunk{t}}{\downshift{N}} } \end{matrix}\]

The rules for units, pairs and sums are the same as always. The rule for downshift says that if a term \(t\) checks at a negative type \(N\), then the thunked term \(\thunk{t}\) will check against the downshifted type \(\downshift{N}\).

We'll see the rules for terms in a bit, but next will come the rules for spines, in the judgement \(\spine{\Gamma}{s}{N}{M}\). This judgement says that if the spine \(s\) is applied to a head of type \(N\), it will produce a result of type \(M\). In this judgement, the type \(N\) is an algorithmic input, and the type \(M\) is an output.

\[\begin{matrix} \rule{ } { \spine{\Gamma}{\cdot}{N}{N} } \qquad \rule{ \check{\Gamma}{v}{P} & \spine{\Gamma}{s}{N}{M} } { \spine{\Gamma}{v\;s}{P \to N}{M} } \end{matrix}\]

The first rule says that if you have an empty argument list then the result is the same as the input, and the second rule says that if \(v\) is a value of type \(P\), and \(s\) is an argument list sending \(N\) to \(M\), then the extended argument list \(v\;s\) sends the function type \(P \to N\) to \(M\).

With values and spines in hand, we can talk about terms, in the term typing judgement \(\checkn{\Gamma}{t}{N}\), which checks that a term \(t\) has the type \(N\).

\[\begin{matrix} \rule{ \check{\Gamma}{v}{P} } { \checkn{\Gamma}{\return{v}}{\upshift{P}} } \qquad \rule{ \forall i < n.\; & \checkp{p_i:P}{\Delta_i} & \checkn{\Gamma, \Delta_i}{t_i}{N} } { \checkn{\Gamma}{\fun{\overrightarrow{\arm{p_i}{t_i}}^{i < n}}}{P \to N} } \\[1em] \rule{ x:M \in \Gamma & \spine{\Gamma}{s}{M}{\upshift{Q}} & \forall i < n.\; \checkp{p_i:Q}{\Delta_i} & \checkn{\Gamma, \Delta_i}{t_i}{\upshift{P}} } { \checkn{\Gamma}{\match{x \cdot s}{\overrightarrow{\arm{p_i}{t_i}}^{i < n}}}{\upshift{P}} } \end{matrix}\]

The rule for \(\return{v}\) says that we can embed a value \(v\) of type \(P\) into the upshift type \(\upshift{P}\) by immediately returning it. Lambda abstractions are pattern-style -- instead of a lambda binder \(\lam{x}{t}\), we give a whole list of patterns and branches \(\fun{\overrightarrow{\arm{p_i}{t_i}}}\) to check at the type \(P \to N\). As a result, we need a judgement \(\checkp{p_i:P}{\Delta_i}\) which gives the types of the bindings \(\Delta_i\) of the pattern \(p_i\), and then we check each \(t_i\) against the result type \(N\).

The match statement \(\match{x\cdot s}{\overrightarrow{\arm{p_i}{t_i}}}\) needs the pattern judgement in the same way. First, it finds a variable in the context, applies some arguments to it to get a result of type \(\upshift{Q}\), and then pattern matches against the type \(Q\). So we check that the spine \(s\) sends \(M\) to the type \(\upshift{Q}\), then check that the patterns \(p_i\) yield variables \(\Delta_i\) at the type \(Q\), and then check the \(t_i\) against the type \(\upshift{P}\).

Restricting the type at which we can match forces us to eta-expand terms of function type. Also, these rules omit a side-condition for pattern coverage. (I have an old blog post about how to do that if you are curious.)

Both lambda-abstraction and application/pattern-matching need the judgement \(\checkp{p:P}{\Delta}\) to find the types of the bindings. The rules for these are straightforward: \[\begin{matrix} \rule{ } { \checkp{\thunk{x} {:} \downshift{\!N}}{x:N} } & \rule{ } { \checkp{\unit : 1}{\cdot} } \\[1em] \rule{ \checkp{p_1 : P_1}{\Delta_1} & \checkp{p_2 : P_2}{\Delta_2} } { \checkp{(p_1,p_2) : P_1 \times P_2}{\Delta_1, \Delta_2} } & \rule{ \checkp{p:P_i}{\Delta} & i \in \{1,2\} } { \checkp{\inj{i}{p} : P_1 + P_2}{\Delta} } \end{matrix}\]

Units yield no variables at type \(1\), pair patterns \((p_1, p_2)\) return the variables of each component, coproduct injections \(\inj{i}{p}\) return the variables of the sub-pattern \(p\), and thunk patterns \(\thunk{x}\) at type \(\downshift{N}\) return that variable \(x\) at type \(N\).
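Putting it all together over the datatypes from the sketch above, the four judgements become four mutually recursive functions. (Again a sketch -- and, as with the rules, the coverage side-condition on the pattern arms is omitted.)

exception TypeError of string

(* Gamma |- v <= P *)
let rec check_value (ctx : ctx) (v : value) (p : ptype) : unit =
  match v, p with
  | Unit, One -> ()
  | Pair (v1, v2), Times (p1, p2) -> check_value ctx v1 p1; check_value ctx v2 p2
  | Inj (1, v), Plus (p1, _) -> check_value ctx v p1
  | Inj (2, v), Plus (_, p2) -> check_value ctx v p2
  | Thunk t, Down n -> check_term ctx t n
  | _ -> raise (TypeError "value does not check against this positive type")

(* Gamma |- s : N >> M -- apply the spine to a head of type N, returning M *)
and check_spine (ctx : ctx) (s : spine) (n : ntype) : ntype =
  match s, n with
  | [], _ -> n
  | v :: rest, Arrow (p, n') -> check_value ctx v p; check_spine ctx rest n'
  | _ :: _, _ -> raise (TypeError "too many arguments for this type")

(* Gamma |> t <= N *)
and check_term (ctx : ctx) (t : term) (n : ntype) : unit =
  match t, n with
  | Return v, Up p -> check_value ctx v p
  | Fun arms, Arrow (p, n') ->
      List.iter
        (fun (pat, body) ->
           let delta = check_pat pat p in
           check_term (delta @ ctx) body n')
        arms
  | Match (x, s, arms), Up _ ->
      let m =
        try List.assoc x ctx
        with Not_found -> raise (TypeError ("unbound variable " ^ x)) in
      (match check_spine ctx s m with
       | Up q ->
           List.iter
             (fun (pat, body) ->
                let delta = check_pat pat q in
                check_term (delta @ ctx) body n)
             arms
       | Arrow _ -> raise (TypeError "the spine must end at an upshift"))
  | _ -> raise (TypeError "term does not check against this negative type")

(* p : P ~> Delta *)
and check_pat (pat : pat) (p : ptype) : ctx =
  match pat, p with
  | PUnit, One -> []
  | PPair (p1, p2), Times (a, b) -> check_pat p1 a @ check_pat p2 b
  | PInj (1, pat'), Plus (a, _) -> check_pat pat' a
  | PInj (2, pat'), Plus (_, b) -> check_pat pat' b
  | PThunk x, Down n -> [ (x, n) ]
  | _ -> raise (TypeError "pattern does not match this positive type")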

At this point, it sure looks like we have a perfect bidirectional type system for a polarized calculus. What's the problem? The problem is that I palmed a card! Here's the relevant bit of the grammar I kicked things off with: \[\begin{matrix} \ldots \\ \mbox{Contexts} & \Gamma,\Delta & ::= & \cdot \bnfalt \Gamma, \color{red}{x:N} \\ \ldots \end{matrix}\]

The context \(\Gamma\) has been restricted to only contain variables of negative type. It doesn't allow variables of positive type! And I don't know how to add them in the "right" way. If we wanted positive variables (in fact, call-by-push-value only has positive variables), we could add them in the following way:

\[\begin{matrix} \mbox{Values} & v & ::= & () \bnfalt (v,v) \bnfalt \inj{i}{v} \bnfalt \thunk{t} \bnfalt \color{red}{u} \\ \mbox{Contexts} & \Gamma,\Delta & ::= & \cdot \bnfalt \Gamma, x:N \bnfalt \Gamma, \color{red}{u:P} \\ \mbox{Patterns} & p & ::= & () \bnfalt (p,p') \bnfalt \inj{i}{p} \bnfalt \thunk{x} \bnfalt \color{red}{u} \\[1em] \end{matrix}\] So we add value variables \(u\) to the syntax of values, and so we have to also add them to contexts, and also extend pattern matching with them to bind values. Then, the rules for these things would look like the following: \[\begin{matrix} \rule{ } {\checkp{u:P}{u:P}} & \rule{ u:Q \in \Gamma & Q = P } { \check{\Gamma}{u}{P} } \end{matrix}\]

So a variable pattern at value type simply binds the variable at that type, and when we use a value variable we have to check that the type in the context matches the type that we're checking the term at.

And that's the wrong thing to do! The bidirectional recipe says that we should check equality of types only when we switch between checking and synthesis, and so while this rule might or might not work, it clearly doesn't arrange the information flow properly, since we have a random-looking subsumption test in the value variable rule.
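In code, the card-palming is easy to see: if we extend the sketches above with positive hypotheses and value variables (again, all the names here are hypothetical), the equality test lands in the variable rule rather than at a checking/synthesis switch:

type hyp = Neg of ntype | Pos of ptype
type ctx' = (string * hyp) list

(* u : P ~> u : P -- a variable pattern just binds at the positive type *)
let check_pat_var (u : string) (p : ptype) : ctx' = [ (u, Pos p) ]

(* Gamma |- u <= P -- note the subsumption-like equality test *)
let check_value_var (ctx : ctx') (u : string) (p : ptype) : unit =
  match List.assoc_opt u ctx with
  | Some (Pos q) when q = p -> ()
  | _ -> raise (TypeError "value variable does not have the expected type")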

Some additional idea is needed, and I'm not sure what, yet.

Monday, August 6, 2018

Category Theory in PL research

Over on Twitter, Max New wrote:

Maybe the reason category theory seems useless for PL to many people is that it only gets really useful when you start doing "sophisticated" stuff like internal, enriched and fibered categories and the intros for computer scientists don't get to those things.

Max is a very sharp person, but I do not agree with this!

The most useful applications of category theory in my own research have all been very, very elementary, to the point that people who (unlike me) are genuinely good at category theory are always a little surprised that I can get away with such primitive machinery.

Basically, the key idea is to formulate models of languages/libraries in terms of categories of structured sets with structure-preserving maps (aka homomorphisms). And, well, that's it!

I don't have a really crisp explanation of the process of figuring out how to go from "I have a property I'd like to represent mathematically" to "sets should have the following structure and the following properties should be preserved by the homomorphisms", but I can give an example to illustrate the recipe.

  1. Let's suppose we want to control the memory usage of programs.

  2. Think of a type as a set, together with some operations talking about the property in question. In this case, let's say that a type is a pair \((X \in \mathrm{Set}, w_X : X \to \mathbb{N})\), where \(X\) are the elements of the type and the "weight function" \(w_X\) gives you a number for each value telling you how much memory gets used.

  3. Now, let's define what the functions \(f : (X, w_X) \to (Y, w_Y)\) should be. Since types have underlying sets, obviously a function between two types should be a function \(f : X \to Y\) on those underlying sets. However, we have a weight function for each type, and so we should do something sensible with them. The first, most obvious idea is to say that maps should preserve the weight exactly -- i.e., that \(w_Y(f(x)) = w_X(x)\).

    This means that memory is never allocated, but it also means that memory is never de-allocated. Since we probably do want to allow deallocation, it makes sense to weaken this condition to something more like \(w_Y(f(x)) \leq w_X(x)\).

  4. Now, we go off and prove things about this category. In short order we will discover that it is monoidal closed, and so we can give a linearly-typed language for memory-controlled programs. (A small code sketch of the recipe follows this list.)
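Here is the recipe in code -- a minimal OCaml sketch of steps 2 and 3, with invented names. The non-size-increasing condition on morphisms is a property we would have to prove about each function; OCaml can't enforce it, so it lives in a comment.

type 'a space = {
  weight : 'a -> int;              (* the weight function w_X : X -> Nat *)
}

(* A morphism f : (X, w_X) -> (Y, w_Y) is an underlying function subject to
   the invariant  w_Y (f x) <= w_X x  for every x: deallocation is allowed,
   allocation is not. *)
type ('a, 'b) hom = 'a -> 'b

let id : ('a, 'a) hom = fun x -> x

let compose (f : ('a, 'b) hom) (g : ('b, 'c) hom) : ('a, 'c) hom =
  fun x -> g (f x)                 (* the two inequalities chain together *)

(* The monoidal structure pairs the underlying sets and adds the weights;
   this additivity is what makes the linear-types reading work. *)
let tensor (a : 'a space) (b : 'b space) : ('a * 'b) space =
  { weight = (fun (x, y) -> a.weight x + b.weight y) }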

This is basically the idea in the late Martin Hofmann's 2003 paper, Linear types and non-size-increasing polynomial time computation. He called this category the category of "length spaces". This paper was a revelation to me, because it was the first time that I really understood how plain old ordinary mathematics could be used to model genuinely computational phenomena.

There are a lot of examples like this -- here are four more I like:

  • Domains and continuous functions

    The great grand-parent of using structured sets to model computations is, of course, domain theory. In this setting, a type is a set along with a complete partial order on it, and functions must be continuous with respect to this order. The intuition is that an always-nonterminating program is the least informative program, and that as programs loop on fewer and fewer inputs, they get more and more defined. Continuity basically means that giving a program a more-defined input will never make it loop more often.

    Steve Vickers's book Topology via Logic was how I learned to think about domains in a way that brought the computational intuition to the forefront.

  • Nominal sets and equivariant maps

    In languages like ML, Haskell and Java, it is permissible to create pointers and test them for pointer equality, but it is not permissible to compare pointers for order (you can use == but you can't use <). This means that the order in which pointers are created is not observable, a fact which is very important for both the equational theory and optimization of programs.

    To model this, we can introduce a set of locations (aka names, or atoms). To model the fact that we can't observe the order, we can say that we want the meanings of programs to be independent of permutations of the names -- as long as we don't identify two locations, the program can't tell if we reorder the locations in the heap.

    So, we can abstractly model the idea of reordering locations by saying that a type is a pair \((X, a : Perm \times X \to X)\), where \(X\) is a set of values, and the "action" \(a\) tells you how to rename all the values in the set when you are given a permutation. Then, a morphism \(f : (X, a) \to (Y, b)\) will be a function such that \(b(\pi, f(x)) = f(a(\pi, x))\) -- that is, we get the same answer whether we rename the argument to \(f\) or apply the function and then rename.

    I learned this from Andy Pitts's work on nominal logic -- here is a nice introduction for SIGLOG news which he wrote.

  • Partial orders and monotone maps

    My student Michael Arntzenius has been looking at higher-order database query languages, and one of the ideas we have been working with (in the language Datafun) is inspired by database query languages and dataflow analysis, where recursive queries are formulated as fixed points of monotone functions on partial orders.

  • Ultrametric spaces and non-expansive maps

    In joint work with Nick Benton, we used the category of (wait for it) complete 1-bounded bisected ultrametric spaces and non-expansive maps to model reactive computations.

    Our goal was to model reactive programs, which have the property of causality. That is, an interactive system can only produce an output at time \(n\) from the first \(n\) inputs -- it can't look into the future to decide what to do now.

    We began with this standard definition of causality, which was defined on streams, and then -- after reading the classic paper of Alpern and Schneider Defining Liveness -- we noticed that it corresponded to nonexpansiveness when streams are given the Cantor metric (i.e., the distance between two streams is \(2^{-n}\) when they first differ at the \(n\)-th position). This led us to look at categories of metric spaces, and we were able to adapt some work of Lars Birkedal and his collaborators to devise a new language for reactive programming.
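    Concretely, the correspondence is a one-line calculation. Give streams the Cantor metric and ask that \(f\) be nonexpansive:

    \[ d(xs, ys) = 2^{-\min\{n \,\mid\, xs_n \neq ys_n\}} \qquad\qquad d(f(xs), f(ys)) \leq d(xs, ys) \]

    If two input streams agree on their first \(n\) elements, their distance is at most \(2^{-n}\), so the two output streams are also within \(2^{-n}\) of each other -- that is, they agree on their first \(n\) elements, which is exactly causality.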

One point I want to make is that while this is not a mechanical process, it is also "ordinary mathematics". In the last two cases, we had to try very hard to understand the problems we were trying to solve, but we didn't need vast amounts of categorical machinery. We really only needed elementary definitions from category theory, and we needed it because it was the technology that let us recognise when we had actually found a nice algebraic solution to our problem.

Friday, August 3, 2018

The Worm Ouroboros

Here's a quick question: can the behaviour of the following bit of C code depend upon the semantics of the Python programming language?

int sum(size_t len, char *array) { 
  int total = 0;
  for (int i = 0; i < len; i++) { 
    total += array[i];
  }
  return total;
}

Now, the very fact that I am asking this question probably makes you suspicious. (This suspicion is correct and natural, and incidentally also indicates that you basically understand Derrida's critique of the metaphysics of presence.) Luckily, as lecturers go, I'm a very generous grader: I'll accept both no and yes as answers!

The "no" answer is easy enough to explain: the C language specification defines an (informal) abstract machine, and the behaviour of the sum function can be explained entirely in terms of that abstract machine. Python never shows up in this semantics, so how on Earth could it possibly be relevant to the behaviour of this code?

To get to "yes", let's look at another bit of C code (please forgive the total lack of error handling, as well as the absence of all includes):

int foo(char *filename) { 
   int fd = open("foo.txt", O_RDONLY);  // open a file
   struct stat s;                        
   fstat(fd, &s);                      // we only want s.st_size
   //
   // memory-mapped IO (every time, I remember Perlis epigram 11)
   //
   char *array = 
    (char *) mmap(NULL, s.st_size, PROT_READ, MAP_SHARED, fd, 0);
   //
   // Now we can add up the byte values:
   int total = sum(s.st_size, array);
   //
   // We're done with the file now.
   munmap(array, s.st_size);
   close(fd);

   return total;
}

This function will open a file "foo.txt", get the file's size using fstat, and then use that size information to call mmap, mapping the file into memory and storing the start pointer in array. Then we'll call sum to total the bytes of the array. But note that:

  1. The semantics of array depend on the semantics of the file system.
  2. The file system might be a user-mode file system, implemented using something like FUSE or WinFsp.
  3. You could write (and various people have written) user-mode filesystems to expose a Mercurial repository as a file system.
  4. Mercurial is implemented in Python.

Let me emphasize: on a modern computer, a pointer dereference could very well lead to the execution of Python code. Indeed, seeing an assembly listing like this one:

sum:
        test    edi, edi
        jle     .L4
        mov     rdx, rsi
        lea     eax, [rdi-1]
        lea     rsi, [rsi+1+rax]
        mov     eax, 0
.L3:
        movsx   ecx, BYTE PTR [rdx]
        add     eax, ecx
        add     rdx, 1
        cmp     rdx, rsi
        jne     .L3
        rep ret
.L4:
        mov     eax, 0
        ret

offers no guarantee that anything "close to the metal" will actually happen. This is the normal state of affairs, which is almost certainly true about every computer upon which this article is being read. (If it's not, you've probably somehow compiled your browser into a kernel module, which is security-wise an incredibly reckless thing to do.)

You might wonder, what's the moral of this story? Actually, there isn't a moral -- instead, there's a question. Namely, how can we prove that this kind of thing is a sensible thing to do, ever? The key intellectual difficulty is that the rationale for why this kind of program can work appears to be circular.

In general, the easy case of abstraction is when things build up in layers. We start with a piece of hardware, and on this hardware we run an operating system, which (by running in kernel mode to access various hardware primitives) implements address space virtualization. Since user-mode code can never touch the hardware primitives to manipulate things like page tables, we can conclude (assuming the implementation of virtual memory is correct) that the virtual memory abstraction is seamless -- user code can't tell (without relying on OS calls or side-channels like timing) how memory is implemented.

However, what memory-mapped IO and user-mode file systems let you do is to move part of the implementation of the memory abstraction into user space. Now the correctness of the Python implementation depends upon the correctness of the virtual memory, but the correctness of the virtual memory abstraction depends upon the correctness of the Python implementation!

If A implies B, and B implies A, then we are not licensed to conclude that A and B hold. (This is because it's bad reasoning! Since false implies itself and false implies everything, if this principle were true in general then everything would be true.) And yet: something very close to this kind of reasoning is essential for building systems from modular components, or even writing concurrent algorithms. (We call it rely-guarantee, usually.)