Tuesday, April 19, 2011

Models of Linear Logic: Length Spaces

One of the things I've wondered about is what the folk model of linear logic is. Basically, I've noticed that people use linear logic an awful lot, and they have very clear intuitions about the operational behavior they expect, but they are often not clear about which equational properties they expect to hold.

Since asking that question, I've learned about a few models of linear logic, which are probably worth blogging about. Morally, I ought to start by showing how linear algebra is a model of linear logic, but I'll set that aside for now, and start with some examples that will be more familiar to computer scientists.

First, recall that the general models of simply-typed functional languages are cartesian closed categories, and that the naive model of functional programming is "sets and functions". That is, types are interpreted as sets, and terms are functions between sets, with the cartesian closed structure corresponding to function spaces on sets.

Next, recall that the general models of intuitionistic linear logic are symmetric monoidal closed categories. So the question to ask is: what are some natural symmetric monoidal closed categories given by sets-with-structure and structure-preserving functions? There are actually many answers to this question, but one I like a lot, which I learned from a 1999 LICS paper by Martin Hofmann, is called "length spaces". It is very simple, and I find it quite beautiful.

To understand the construction, let's first set out the problem: how can we write programs that respect some resource bounds? For example, we may wish to bound the memory usage of a language, so that every definable program is size-bounded: the output of a computation is no bigger than its input.

Now, let's proceed in a really simple-minded way. A length space is a pair $(A, \sigma_A)$, where $A$ is a set and $\sigma_A : A \to \N$ is a size function. Think of $A$ as the set of values in a type, and $\sigma_A$ as a function which tells you how big each element is. These pairs will be the objects of our category of length spaces.

Now, define a morphism $f : (A, \sigma) \to (B, \tau)$ to be a function on sets $A \to B$, with the property that $\forall a \in A.\; \tau(f\;a) \leq \sigma(a)$. That is, the size of the output is always less than or equal to the size of the input.
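To make the definitions concrete, here is a minimal Python sketch (not from Hofmann's paper; the representation of a length space as an `(elements, size)` pair and the name `is_morphism` are my own, and carriers are kept finite so the condition can be checked exhaustively):

```python
# A length space is modeled as a pair (elements, size), where size
# plays the role of sigma_A : A -> N. Carriers are finite for checking.

def is_morphism(f, src, tgt):
    """f is a morphism iff tgt_size(f(a)) <= src_size(a) for every a."""
    src_elems, src_size = src
    _, tgt_size = tgt
    return all(tgt_size(f(a)) <= src_size(a) for a in src_elems)

strings = (["", "a", "ab"], len)      # strings, sized by their length
nats = ([0, 1, 2, 3], lambda n: n)    # naturals, sized by their value

print(is_morphism(len, strings, nats))                   # True: never grows sizes
print(is_morphism(lambda s: len(s) + 1, strings, nats))  # False: grows by 1
```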

Here's the punchline: length spaces are a symmetric monoidal closed category.

Given two length spaces $(A, \sigma)$ and $(B, \tau)$, they have a monoidal product $(A, \sigma) \otimes (B, \tau) = (A \times B, \fun{(a,b)}{\sigma(a) + \tau(b)})$. The unit is $I = (\{*\}, \fun{*}{0})$, and the associators and unitors are inherited from the cartesian product. The intuition here is that a strict pair requires resources equal to the sum of the resources of its components.
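Continuing the finite-carrier sketch (with a length space modeled, by my own convention, as an `(elements, size)` pair), the monoidal product looks like:

```python
# Monoidal product of length spaces: pairs, sized by the SUM of the
# component sizes -- a strict pair pays for both halves.

def tensor(A, B):
    elems_a, size_a = A
    elems_b, size_b = B
    return ([(a, b) for a in elems_a for b in elems_b],
            lambda p: size_a(p[0]) + size_b(p[1]))

unit = ([()], lambda _: 0)   # I = ({*}, * |-> 0)

strings = (["", "ab"], len)
nats = ([0, 3], lambda n: n)
_, size = tensor(strings, nats)
print(size(("ab", 3)))   # 2 + 3 = 5
```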

The function space construction is pretty clever, too. The exponential $(B, \sigma) \multimap (C, \tau)$ is defined as $(B \to C, \fun{f}{\min\comprehend{a \in \N}{\forall b \in B.\;\tau(f\;b) \leq \sigma(b) + a}})$. The intuition here is that if we have a morphism $A \otimes B \to C$, then currying it should give a morphism $A \to (B \multimap C)$. However, the returned function may have free variables in $A$, so the size of the value it returns should be bounded by the size of its input $b \in B$ plus the size of its environment.
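The size assigned to a function by the $\multimap$ construction can be computed directly over a finite carrier: the least $a$ with $\tau(f\;b) \leq \sigma(b) + a$ for all $b$ is just the worst-case "deficit" $\tau(f\;b) - \sigma(b)$, floored at zero. A sketch (the name `lolli_size` and the finite carriers are my own):

```python
# Size of f in the linear function space B -o C:
# min { a in N | forall b. tau(f b) <= sigma(b) + a }
#   = max(0, max over b of tau(f b) - sigma(b)).

def lolli_size(f, B, C):
    elems_b, sigma = B
    _, tau = C
    return max(0, max(tau(f(b)) - sigma(b) for b in elems_b))

strings = (["", "a", "ab"], len)
nats = ([0, 1, 2, 3], lambda n: n)

print(lolli_size(len, strings, nats))                    # 0: size-non-increasing
print(lolli_size(lambda s: len(s) + 2, strings, nats))   # 2: needs 2 extra units
```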

We also have a cartesian product, corresponding to lazy pairs. We define $(A, \sigma) \times (B, \tau) = (A \times B, \fun{(a,b)}{\max(\sigma(a), \tau(b))})$. The intuition is that with a lazy pair, where we have the (linear) choice of either the first or second component, we can bound the space usage by the maximum of the two alternatives.
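In the same illustrative `(elements, size)` representation, the cartesian product differs from the tensor only in its size function:

```python
# Cartesian (lazy) product of length spaces: the same underlying pairs
# as the tensor, but sized by the MAX of the components, since only one
# side of a lazy pair will actually be demanded.

def cart(A, B):
    elems_a, size_a = A
    elems_b, size_b = B
    return ([(a, b) for a in elems_a for b in elems_b],
            lambda p: max(size_a(p[0]), size_b(p[1])))

strings = (["", "ab"], len)
nats = ([0, 3], lambda n: n)
_, size = cart(strings, nats)
print(size(("ab", 3)))   # max(2, 3) = 3
```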

Note how pretty this model is -- you can think about functions in a totally naive way, and that's fine. We just carry around a little extra data to model size information, and now we have a very pretty extensional model of an intensional property -- space usage. This also illustrates the general principle that increasing the observations you can do creates finer type distinctions.


  1. That is a nice explanation, but I don't quite see how the size metric you've added to terms actually operates as a model of space usage. Part of the reason for this is that it seems like the arrows-of-the-category functions can throw away resources. Can exponential (i.e., lolli) functions throw away resources in exactly the same way? If so, as far as intuitions about programs go, this seems to have more of an affine feel and less of a linear feel.

  2. Hi Rob,

    This is indeed a model of affine logic, precisely because you can discard resources. You can make it non-affine by changing the $\leq$ into an equality. This is still monoidal closed, though offhand I don't know if it still has cartesian products.

    One surprising fact about this model is that it is a model of BI. You can form an exponential $(A,\sigma) \Rightarrow (B,\tau)$ as:

    $(A \to B, \fun{f}{\min\comprehend{n}{\forall a \in A.\; \tau(f\;a) \leq \max(\sigma(a), n)}})$

  3. Hmm... max/plus, I wonder if there's a semiring hiding here. The BI interpretation is neat, it fits with my understanding of the additive context constructor in the proof theory just fine. But that does illustrate in part that the model is still quite a bit "too loose," as there's the whole family of things true-in-BI-but-not-in-linear-logic and vice versa that hold in the model but not in the theory.

    In that respect, I guess the model I use of linear logic is very nearly just Lafont's resource interpretation (adapted to intuitionistic logic, but the resource interpretation for par never worked anyway). The problem with this model is, in my head at least, it's only informal, though there may be a way to make it formal by treating types as multisets of abstract resources, I guess? Surely this has been done.

  4. This comment has been removed by the author.

  5. Basically any ordered semiring works. One fun thing you can do is replace the naturals with propositions of separation logic. This gives you an indexed model of BI. I'm not sure what it's for, but I'm sure it's useful for something. I think spoons' cost semantics also fits into this view, but doing it right probably requires a value/computation split.

    I am actually interested in loose (but formal) interpretations, in the same way that sets and functions is a loose model of the lambda calculus. Simple, easy-to-use models are very useful for specifying the interface to libraries. Furthermore, having some slop in the model may reflect the fact that there can be operations that I could expose, but choose not to, in order to preserve freedom of implementation.

    (Also, thanks for the typo fix.)
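The BI exponential given in comment 2 can also be computed directly over a finite carrier, in the same illustrative `(elements, size)` style as the sketches above (the name `bi_exp_size` is mine): the least $n$ with $\tau(f\;a) \leq \max(\sigma(a), n)$ for all $a$ only needs to cover the cases where $\sigma(a)$ alone does not.

```python
# Size of f in the BI exponential A => B:
# min { n | forall a. tau(f a) <= max(sigma(a), n) }.
# An element a constrains n only when tau(f a) > sigma(a).

def bi_exp_size(f, A, B):
    elems_a, sigma = A
    _, tau = B
    return max((tau(f(a)) for a in elems_a if tau(f(a)) > sigma(a)),
               default=0)

strings = (["", "a", "ab"], len)
nats = ([0, 1, 2, 3], lambda n: n)

print(bi_exp_size(len, strings, nats))          # 0: sigma already covers tau
print(bi_exp_size(lambda s: 3, strings, nats))  # 3: constant output of size 3
```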