Monday, August 6, 2018

Category Theory in PL research

Over on Twitter, Max New wrote:

Maybe the reason category theory seems useless for PL to many people is that it only gets really useful when you start doing "sophisticated" stuff like internal, enriched and fibered categories and the intros for computer scientists don't get to those things.

Max is a very sharp person, but I do not agree with this!

The most useful applications of category theory in my own research have all been very, very elementary, to the point that people who (unlike me) are genuinely good at category theory are always a little surprised that I can get away with such primitive machinery.

Basically, the key idea is to formulate models of languages/libraries in terms of categories of structured sets with structure-preserving maps (aka homomorphisms). And, well, that's it!

I don't have a really crisp explanation of the process of figuring out how to go from "I have a property I'd like to represent mathematically" to "sets should have the following structure and the following properties should be preserved by the homomorphisms", but I can give an example to illustrate the recipe.

  1. Let's suppose we want to control the memory usage of programs.

  2. Think of a type as a set, together with some operations talking about the property in question. In this case, let's say that a type is a pair \((X \in \mathrm{Set}, w_X : X \to \mathbb{N})\), where \(X\) is the set of values of the type and the "weight function" \(w_X\) gives you a number for each value, telling you how much memory that value uses.

  3. Now, let's define what the functions \(f : (X, w_X) \to (Y, w_Y)\) should be. Since types are sets, obviously a function between two types should be a function \(f : X \to Y\) on the underlying sets. However, we have a weight function for each type, and so we should do something sensible with them. The first, most obvious idea is to say that maps should preserve the weight exactly -- i.e., that \(w_Y(f(x)) = w_X(x)\).

    This means that memory is never allocated, but it also means that memory is never deallocated. Since we probably do want to allow deallocation, it makes sense to weaken this condition to something more like \(w_Y(f(x)) \leq w_X(x)\).

  4. Now, we go off and prove things about this language. In short order we will discover that this category is monoidal closed, and so we can give a linearly-typed language for memory-controlled programs. (A small code sketch of this category follows the recipe.)
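
To see how little machinery is involved, here is a minimal Haskell sketch of this category of weighted sets. Everything in it (the names LenSpace, respectsWeight, boolSpace, pairSpace) is invented for illustration rather than taken from Hofmann's paper, and since Haskell's types cannot enforce the inequality \(w_Y(f(x)) \leq w_X(x)\), the sketch can only check it pointwise:

    -- A sketch of "weighted sets": a type together with a weight
    -- function saying how much memory each value uses.
    -- (All names here are invented for this illustration.)
    data LenSpace x = LenSpace { weight :: x -> Integer }

    -- A morphism (X, wX) -> (Y, wY) is a function f with
    -- wY (f x) <= wX x.  We can't enforce the inequality in the
    -- type, but we can test it pointwise on sample inputs:
    respectsWeight :: LenSpace x -> LenSpace y -> (x -> y) -> x -> Bool
    respectsWeight lx ly f x = weight ly (f x) <= weight lx x

    -- Booleans weighing one cell, and pairs weighing the sum of
    -- their components (the monoidal structure of the category):
    boolSpace :: LenSpace Bool
    boolSpace = LenSpace (const 1)

    pairSpace :: LenSpace a -> LenSpace b -> LenSpace (a, b)
    pairSpace la lb = LenSpace (\(a, b) -> weight la a + weight lb b)

    -- The projection fst discards its second component, so it only
    -- ever decreases weight -- a legitimate (deallocating) morphism:
    main :: IO ()
    main = print (respectsWeight (pairSpace boolSpace boolSpace)
                                 boolSpace fst (True, False))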

This is basically the idea in the late Martin Hofmann's 2003 paper, Linear types and non-size-increasing polynomial time computation. He called this category the category of "length spaces". This paper was a revelation to me, because it was the first time that I really understood how plain old ordinary mathematics could be used to model genuinely computational phenomena.

There are a lot of examples like this -- here are four more I like:

  • Domains and continuous functions

    The great-grandparent of using structured sets to model computations is, of course, domain theory. In this setting, a type is a set along with a complete partial order on it, and functions must be continuous with respect to this order. The intuition is that an always-nonterminating program is the least informative program, and that as programs loop on fewer and fewer inputs, they get more and more defined. Monotonicity means that giving a program a more-defined input will never make its output less defined, and continuity means that a program's output on an infinite input is determined by its outputs on the finite approximations of that input.

    Steve Vickers's book Topology via Logic was how I learned to think about domains in a way that brought the computational intuition to the forefront.

  • Nominal sets and equivariant maps

    In languages like ML, Haskell and Java, it is permissible to create pointers and test them for pointer equality, but it is not permissible to compare pointers for order (you can use == but you can't use <). This means that the order in which pointers are created is not observable, a fact which is very important for both the equational theory and optimization of programs.

    To model this, we can introduce a set of locations (aka names, or atoms). To model the fact that we can't observe the order, we can say that we want the meanings of programs to be independent of permutations of the names -- as long as we don't identify two locations, the program can't tell if we reorder the locations in the heap.

    So, we can abstractly model the idea of reordering locations by saying that a type is a pair \((X, a : Perm \times X \to X)\), where \(X\) is a set of values, and the "action" \(a\) tells you how to rename all the values in the set when you are given a permutation. Then, a morphism \(f : (X, a) \to (Y, b)\) will be a function such that \(b(\pi, f(x)) = f(a(\pi, x))\) -- that is, we get the same answer whether we rename the argument to \(f\) or apply the function and then rename. (A small code sketch of this appears after this list.)

    I learned this from Andy Pitts's work on nominal logic -- here is a nice introduction he wrote for SIGLOG News.

  • Partial orders and monotone maps

    My student Michael Arntzenius has been looking at higher-order database query languages. One of the ideas we have been working with (in the language Datafun) is inspired by database query languages and dataflow analysis, where recursive queries are formulated as fixed points of monotone functions on partial orders. Accordingly, a type is a partial order, and a function between types is a monotone map. (A tiny fixed-point sketch also appears after this list.)

  • Ultrametric spaces and non-expansive maps

    In joint work with Nick Benton, we used the category of (wait for it) complete 1-bounded bisected ultrametric spaces and non-expansive maps to model reactive computations.

    Our goal was to model reactive programs, which have the property of causality. That is, an interactive system can only produce an output at time \(n\) from the first \(n\) inputs -- it can't look into the future before it acts.

    We began with this standard definition of causality, which was defined on streams, and then -- after reading Alpern and Schneider's classic paper Defining Liveness -- we noticed that it corresponded to non-expansiveness when streams are given the Cantor metric (i.e., the distance between two streams is \(2^{-n}\) when \(n\) is the first position at which they differ). This led us to look at categories of metric spaces, and we were able to adapt some work of Lars Birkedal and his collaborators to devise a new language for reactive programming. (A sketch of causality-as-non-expansiveness appears below.)
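
To make a couple of these examples concrete in the same low-tech spirit, here is a small Haskell sketch of the nominal-sets idea. All the names in it (Nominal, act, equivariantAt) are invented for illustration, and it cheats by representing a permutation as an arbitrary function that we simply assume to be a bijection:

    -- A sketch of nominal sets (all names invented for this
    -- illustration).  Atoms model locations; a permutation is
    -- represented as a function assumed to be a bijection.
    type Atom = Int
    type Perm = Atom -> Atom

    -- A nominal set is a set with a permutation action.
    data Nominal x = Nominal { act :: Perm -> x -> x }

    -- Atoms themselves: the action applies the permutation.
    atomSet :: Nominal Atom
    atomSet = Nominal (\pi a -> pi a)

    -- Pairs: rename both components.
    pairSet :: Nominal a -> Nominal b -> Nominal (a, b)
    pairSet na nb = Nominal (\pi (a, b) -> (act na pi a, act nb pi b))

    -- Booleans: observable results carry the trivial action.
    boolSet :: Nominal Bool
    boolSet = Nominal (\_ b -> b)

    -- Equivariance of f : (X, a) -> (Y, b) says b(pi, f(x)) = f(a(pi, x));
    -- we can test it pointwise:
    equivariantAt :: Eq y
                  => Nominal x -> Nominal y -> (x -> y) -> Perm -> x -> Bool
    equivariantAt nx ny f pi x = act ny pi (f x) == f (act nx pi x)

    main :: IO ()
    main = do
      let swap01 a = case a of { 0 -> 1; 1 -> 0; _ -> a }
      -- Projection commutes with renaming, so it is equivariant: True.
      print (equivariantAt (pairSet atomSet atomSet) atomSet fst swap01 (0, 1))
      -- Comparing atoms for order observes the ordering of locations,
      -- so it is not equivariant: False.
      print (equivariantAt (pairSet atomSet atomSet) boolSet
                           (\(a, b) -> a < b) swap01 (0, 1))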
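
Here, likewise, is a tiny sketch of the partial-orders example: a Kleene-style iteration computing a transitive closure as the least fixed point of a monotone function on sets ordered by inclusion. This is my own illustration of the idea, not actual Datafun code:

    import qualified Data.Set as Set
    import Data.Set (Set)

    -- Iterate a monotone function from a starting point until the
    -- result stops growing, giving its least fixed point.
    lfp :: Eq a => (a -> a) -> a -> a
    lfp f x = let x' = f x in if x' == x then x else lfp f x'

    -- Transitive closure of an edge relation, as a recursive query:
    -- tc = edges ∪ { (a,c) | (a,b) ∈ tc, (b,c) ∈ edges }
    transClose :: Ord a => Set (a, a) -> Set (a, a)
    transClose edges = lfp step Set.empty
      where
        step tc = edges `Set.union`
                  Set.fromList [ (a, c) | (a, b)  <- Set.toList tc
                                        , (b', c) <- Set.toList edges
                                        , b == b' ]

    main :: IO ()
    main = print (transClose (Set.fromList [(1, 2), (2, 3), (3, 4)]))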
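
And finally, a sketch of the causality-as-non-expansiveness observation. Again all the names are invented, and the code can only measure the Cantor distance up to a finite horizon:

    -- Streams as functions from time to values.
    type Stream a = Int -> a

    -- The Cantor distance is 2^(-n), where n is the first position
    -- at which the streams differ (0 if they agree up to the horizon).
    cantorDist :: Eq a => Int -> Stream a -> Stream a -> Double
    cantorDist horizon s t =
      case [ n | n <- [0 .. horizon - 1], s n /= t n ] of
        []      -> 0
        (n : _) -> 2 ** negate (fromIntegral n)

    -- Non-expansiveness, d(f s, f t) <= d(s, t), says exactly that if
    -- two inputs agree up to time n, the outputs agree up to time n:
    -- the output at time n depends only on the first n inputs.
    nonexpansiveAt :: (Eq a, Eq b)
                   => Int -> (Stream a -> Stream b) -> Stream a -> Stream a -> Bool
    nonexpansiveAt h f s t = cantorDist h (f s) (f t) <= cantorDist h s t

    -- A running sum is causal ...
    runningSum :: Stream Int -> Stream Int
    runningSum s n = sum [ s i | i <- [0 .. n] ]

    -- ... but reading one step into the future is not:
    lookahead :: Stream Int -> Stream Int
    lookahead s n = s (n + 1)

    main :: IO ()
    main = do
      let s, t :: Stream Int
          s n = n
          t n = if n < 3 then n else 0  -- agrees with s before time 3
      print (nonexpansiveAt 10 runningSum s t)  -- True: causal
      print (nonexpansiveAt 10 lookahead s t)   -- False: not causal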

One point I want to make is that while this is not a mechanical process, it is still "ordinary mathematics". In the last two cases, we had to try very hard to understand the problems we were trying to solve, but we didn't need vast amounts of categorical machinery. We really only needed elementary definitions from category theory, and we needed them because they were the technology that let us recognise when we had actually found a nice algebraic solution to our problem.

2 comments:

  1. Hi Neel, very nice post.
    I don't really want to make a big deal out of this, but I want to make two points to clarify what I meant.

    1. First, I would say that what you're talking about here is *denotational semantics*: you are saying that category theory is useful for clarifying the basic notions (object, morphism, monoidal structure, etc.) and the interpretation of the syntax in the models. This is all great stuff, but my preferred use of category theory goes slightly further, into what I would call *categorical logic*. It is distinguished from denotational semantics in that while in denotational semantics we are interested in interpreting the syntax in a particular category and justifying program equivalences that way, in categorical logic we are interested in defining the class of all models and proving, in addition, a *completeness* theorem. And any time you have an interesting new judgment, you end up needing more "exotic" theory to describe it properly.

    2. If you look at my next tweet, you'll see that the first example I provided was Moggi talking about how the notion of *strong monad* requires moving beyond the 2-category of categories. I expect this would surprise some people, because strong monads don't seem like a particularly sophisticated notion to a PL person (probably because they're one of the first applications of category theory they learn). After all, a strong monad is just a monad with some extra natural transformation. However, I think that definition would be viewed with suspicion by a category theorist until you establish that a monad being strong is equivalent to it being enriched (and similarly for another definition using fibred categories). To me it is the latter kind of definition that motivates me to use category theory: discover the right "doctrine" and the constructions become obvious.

    Finally, the context for the tweet was my own experience working on the semantics of gradual typing, where my main conceptual leap thus far has been to use double categories/equipments. This is invariably met with extreme skepticism because it's so "exotic", and it gave me the impression that there was something wrong with my approach. But the more I read the literature, the more I realized that internal/enriched/fibred/multi-category theory comes up basically any time you move beyond simple types and cartesian closed categories -- and when it doesn't, it might actually improve the presentation to use it.

  2. Hi Max, of course I don't want to say that sophisticated categorical techniques are bad!

    Your own work is a nice example of this -- there's an immediate, obvious analogy between contracts and dynamic typing (my favorite early example is Nick Benton's pearl Undoing Dynamic Typing), but before your work it was an analogy, not a theorem. Sure, it would be nice if you could get the same results with less work, but if wishes were horses...!

    Instead, my point is that there's a lot of value even in "low-tech" applications of category theory. Sets with structure are as basic as it gets, but there's still a lot of mileage to be had. Eventually you may want a more sophisticated approach, but for exploratory purposes having a simple model in hand is a really good thing. (E.g., to handle the empty type in our FRP language, we eventually moved to using functor category semantics.)

    Strength is actually a nice example of this. Its usual formulation is indeed slightly odd, but in a closed category a functor F being strong is equivalent to having a family of maps σ : (A ⇒ B) → (FA ⇒ FB), where the double arrow A ⇒ B is the exponential and a single arrow X → Y is a morphism. This asserts a totally sensible relationship between the internal and external homs, which shouldn't frighten anyone. Indeed, it should guide you to looking at the enriched view (since closed categories are self-enriched), but IMO even the starting point is a nice one.
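
    (A small Haskell gloss of my own, not part of the original exchange: Haskell is closed and self-enriched, so the family σ is literally the type of fmap, and the usual "tensorial" strength is definable from it.)

        -- In Haskell (a closed, self-enriched category), the
        -- internal-hom family sigma : (A => B) -> (F A => F B) is
        -- the type of fmap:
        --   fmap :: Functor f => (a -> b) -> (f a -> f b)
        -- and the usual "tensorial" strength falls out of it:
        strength :: Functor f => (a, f b) -> f (a, b)
        strength (a, fb) = fmap (\b -> (a, b)) fb

        -- e.g. strength (1, Just "x") == Just (1, "x")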
