Friday, August 23, 2019

New Draft Paper: Survey on Bidirectional Typechecking

Along with J. Dunfield, I have written a new survey paper on bidirectional typechecking.

Bidirectional typing combines two modes of typing: type checking, which checks that a program satisfies a known type, and type synthesis, which determines a type from the program. Using checking enables bidirectional typing to break the decidability barrier of Damas-Milner approaches; using synthesis enables bidirectional typing to avoid the large annotation burden of explicitly typed languages. In addition, bidirectional typing improves error locality. We highlight the design principles that underlie bidirectional type systems, survey the development of bidirectional typing from the prehistoric period before Pierce and Turner's local type inference to the present day, and provide guidance for future investigations.
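
To make the two modes concrete, here is a minimal sketch of a bidirectional typechecker in OCaml. The toy language, type representation, and function names are my own illustration, not any of the systems discussed in the survey.

  (* A minimal bidirectional typechecker for a toy lambda calculus.
     The language and all names here are illustrative only. *)

  type ty = Int | Arrow of ty * ty

  type tm =
    | Var of string
    | Lit of int
    | Lam of string * tm     (* checked against an arrow type *)
    | App of tm * tm
    | Ann of tm * ty         (* annotation: lets a checked term sit in a synthesizing position *)

  type ctx = (string * ty) list

  (* Synthesis: the term determines its type. *)
  let rec synth (ctx : ctx) (t : tm) : ty option =
    match t with
    | Var x -> List.assoc_opt x ctx
    | Lit _ -> Some Int
    | Ann (e, a) -> if check ctx e a then Some a else None
    | App (f, arg) ->
      (match synth ctx f with
       | Some (Arrow (a, b)) when check ctx arg a -> Some b
       | _ -> None)
    | Lam _ -> None          (* an unannotated lambda does not synthesize *)

  (* Checking: the type is an input, and the term is checked against it. *)
  and check (ctx : ctx) (t : tm) (a : ty) : bool =
    match t, a with
    | Lam (x, body), Arrow (a1, a2) -> check ((x, a1) :: ctx) body a2
    | _ -> (match synth ctx t with Some b -> b = a | None -> false)

The annotation burden shows up only where the two modes meet: an unannotated lambda can be checked against an arrow type with no annotation at all, but to put one in a synthesizing position (like the function position of an application here) you must annotate it.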

This is the first survey paper I've tried to write, and an interesting thing about the process is that it forces you to organize your thinking, which can lead to you changing your mind. For example, before we started this paper I was convinced that bidirectional typechecking was obviously a manifestation of focalization. Attempting to make this argument clearly in the survey led me to realize that I didn't believe it! Now I think that the essence of bidirectional typechecking arises from paying close attention to the mode discipline (in the logic programming sense) of the typechecking relations.

Also, if you think we have overlooked some papers on the subject, please let us know. Bidirectional typechecking is a piece of implementation folklore that is perhaps over three decades old, which makes it very hard for us to feel any certainty about having found all the relevant literature.

Wednesday, August 21, 2019

On the Relationship Between Static Analysis and Type Theory

Some months ago, Ravi Mangal sent me an email with a very interesting question, which I reproduce in part below:

The question that has been bothering me is, "How are static analyses and type systems related?". Besides Patrick Cousot's Types as abstract interpretations, I have failed to find other material directly addressing this. Perhaps, the answer is obvious, or maybe it has just become such a part of the community's folklore that it hasn't been really written down. However, to me, it's not clear how one might even phrase this question in a precise mathematical manner.

The reason this question is so interesting is that some very beautiful answers to it have been worked out, but for some reason they have never made it into our community's folklore. Indeed, I didn't know the answers to this question until quite recently.

The short answer is that there are two views of what type systems are. In one view, a type system is just another static analysis; in the other, type systems form the substrate upon which static analyses can be built -- and both views are true at the same time.

Static Analysis (including Types) as Properties

One valid view of a type system is that it is just another species of static analysis. At a high level, the purpose of a static analysis is to answer the question, "Can I tell what properties this program satisfies without running it?"

Abstract interpretation, model checking, dataflow analysis and type inference are all different ways of answering this question, each with somewhat different emphases and priorities. IMO, to understand the relations between these styles of analysis, it's helpful to think about the two main styles of semantics: operational semantics and denotational semantics.

A denotational semantics turns a program into a mathematical object (sets, domains, metric spaces, etc). This has the benefit that it lets you turn the full power of mathematics towards reasoning about programs, but it has the cost that deducing properties of arbitrary mathematical gadgets can require arbitrary mathematical reasoning.

Operational semantics, on the other hand, turns a program into a state machine. This has the benefit that it is relatively easy to do, and close to how computers work. It has the cost that it only gives semantics to whole programs, while in reality people tend to write libraries and other partial programs. (Also, an infinite-state operational semantics is not obviously easier to work with than a denotational semantics.)

Operational semantics gives rise to model checking: if you have a program, you can view the sequence of states it takes on as a concrete model of temporal logic. Model checking then checks whether a temporal property overapproximates the concrete model -- equivalently, whether the model refines the property.

So basically all static analyses work by finding some efficiently representable approximation of the meaning of a program, and then using that approximation to define an algorithm to analyse the program, typically by some form of model checking. For type systems, the approximations are the types; in abstract interpretation, they are the abstract domains.

David A. Schmidt's paper Abstract Interpretation from a Denotational Semantics Perspective shows how you can see abstract domains as arising from domain theory, by noting that domains arise as the infinite limit of a series of finite approximations, and so you can get an abstract domain by picking a finite approximation rather than going all the way to the limit.

Thomas Jensen's PhD thesis, Abstract Interpretation in Logical Form, shows how an abstract interpretation of a higher-order functional language corresponds to a certain intersection type system for it. This helps illustrate the idea that abstract domains and types are both abstractions of a program. (FWIW, finding equivalences between abstract domains and type systems used to be a big topic back in the 90s, but it seems to have disappeared from view. I don't know why!)

Now, model checking an infinite-state system will obviously fail to terminate in general. However, it can then be made effective and algorithmic by using your abstractions (either from types or from abstract interpretation) to turn the model into a finitary one that can be exhaustively searched.
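
As a small illustration of that last step, here is an OCaml sketch of checking a safety property by exhaustively exploring the reachable states of a finitary abstract model. The representation (abstract states as integers, a successor function, a safety predicate) is a simplification of my own, not anyone's particular framework.

  (* Exhaustively explore the reachable states of a finitary (abstract)
     transition system, checking a safety property at each state. *)
  let check_safety ~(init : int) ~(step : int -> int list) ~(safe : int -> bool) : bool =
    let seen = Hashtbl.create 64 in
    let rec go = function
      | [] -> true
      | s :: rest ->
        if Hashtbl.mem seen s then go rest
        else if not (safe s) then false
        else (Hashtbl.add seen s (); go (step s @ rest))
    in
    go [init]

Termination here relies on the abstract state space being finite, which is exactly what the abstraction step buys you; checking richer temporal properties rather than a single safety predicate is where the model-checking literature takes over.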

For dataflow analysis, David A. Schmidt works out this idea in detail in his paper, Data Flow Analysis is Model Checking of Abstract Interpretations, where he shows how if you build an abstraction of program traces and model-check against mu-calculus formulas, you can get back most classical dataflow analyses. (If one lesson you draw is that you should read everything David Schmidt has written, well, you won't be wrong.)

It's less common to formulate type-based analyses in this form, but some people have looked at it.

  1. There is a line of work on "higher-order model checking", which takes higher-order programs and uses type inference for special type systems to generate abstractions of them that lie in special subclasses of pushdown systems, ones decidable for the properties of interest in program analysis (e.g., emptiness testing). Luke Ong wrote an overview of the area for LICS 2015, in his paper Higher-Order Model Checking: an Overview.

  2. Dave Clarke and Ilya Sergey have a nice IPL 2012 paper, A correspondence between type checking via reduction and type checking via evaluation, in which they show how you can view types as abstract values, and use this perspective to derive an abstract interpreter for a program which operates on these abstract values. They don't state it explicitly, since it was too obvious to mention, but you can check that a program e has the type int by asserting that e will eventually evaluate to the abstract value int. This is easy to check since every program in their abstract semantics terminates -- so you can just evaluate e and see if the result is int. (A small sketch of this idea appears just below.)
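
Here is what I mean, as a minimal OCaml sketch; the toy language is my own, not the calculus of Clarke and Sergey's paper. The "values" of the abstract semantics are types, and abstractly evaluating a program yields the type it inhabits.

  (* Types as abstract values: the abstract interpreter evaluates an
     expression to the type it inhabits. The language and names are
     illustrative only. *)

  type atype = AInt | ABool | AFun of atype * atype

  type expr =
    | Lit of int
    | Bool of bool
    | Var of string
    | Lam of string * atype * expr   (* argument type annotation *)
    | App of expr * expr
    | Add of expr * expr

  exception Type_error

  (* Abstract evaluation always terminates: it is structurally recursive,
     and a lambda's body is evaluated once, at its argument type, rather
     than once per call. *)
  let rec aeval (env : (string * atype) list) (e : expr) : atype =
    match e with
    | Lit _ -> AInt
    | Bool _ -> ABool
    | Var x -> List.assoc x env
    | Lam (x, a, body) -> AFun (a, aeval ((x, a) :: env) body)
    | App (f, arg) ->
      (match aeval env f with
       | AFun (a, b) when aeval env arg = a -> b
       | _ -> raise Type_error)
    | Add (e1, e2) ->
      if aeval env e1 = AInt && aeval env e2 = AInt then AInt
      else raise Type_error

  (* "e has the type int" becomes "e abstractly evaluates to AInt". *)
  let has_type_int e =
    try aeval [] e = AInt with Type_error | Not_found -> false

Of course, this particular sketch is just an ordinary typechecker written as an evaluator -- which is exactly the point of the correspondence.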

Types as Grammar

However, while types can be seen as a way of stating program properties, they have a second life as a way of defining what the valid programs are. A type system can also be seen as a way of statically ruling out certain programs from consideration. That is, an expression like 3("hi"), which applies the number 3 to the string "hi", is valid JavaScript or Python code, but is not a valid program in OCaml or Java, as it would be rejected at compile time by the typechecker.

As a result, type systems can also be used to simplify the semantics: if a term is statically ruled out, you don't have to give semantics to it. The benefit of doing this is that this can dramatically simplify your static analyses.

For example, if you have a typed functional language with integers and function types, then building an abstract interpretation to do a sign analysis is simplified by the fact that you don't have to worry about function values flowing into the integer domain. You can just take the obvious abstract domains for each type and then rely on the type system to ensure that only programs where everything fits together the way you want are ever allowed, freeing you from the need for a jumbo abstract domain that abstracts all values.
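
Here is a minimal sketch of what I mean, as an OCaml sign analysis in which the abstract values are separated by type; the language and domains are my own illustration. The branches marked impossible are exactly the ones the type system rules out, and they are what save us from a single jumbo domain.

  (* A sign analysis over a typed functional language: integers are
     abstracted by signs, functions by abstract transformers. The
     "impossible" branches are unreachable for well-typed programs. *)

  type sign = Bot | Neg | Zero | Pos | Top

  type avalue =
    | AInt of sign
    | AFun of (avalue -> avalue)

  let sign_of_int n = if n < 0 then Neg else if n = 0 then Zero else Pos

  let add_sign a b =
    match a, b with
    | Bot, _ | _, Bot -> Bot
    | Zero, s | s, Zero -> s
    | Pos, Pos -> Pos
    | Neg, Neg -> Neg
    | _ -> Top

  type expr =
    | Lit of int
    | Var of string
    | Add of expr * expr
    | Lam of string * expr
    | App of expr * expr

  let rec aeval (env : (string * avalue) list) (e : expr) : avalue =
    match e with
    | Lit n -> AInt (sign_of_int n)
    | Var x -> List.assoc x env
    | Add (e1, e2) ->
      (match aeval env e1, aeval env e2 with
       | AInt a, AInt b -> AInt (add_sign a b)
       | _ -> assert false (* impossible: typing rules out adding a function *))
    | Lam (x, body) -> AFun (fun v -> aeval ((x, v) :: env) body)
    | App (f, a) ->
      (match aeval env f with
       | AFun g -> g (aeval env a)
       | _ -> assert false (* impossible: typing rules out applying an integer *))

In an untyped setting, those two assert false branches would have to become real analysis cases, handled by one domain covering both integers and functions.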

For cultural reasons, this methodology is often used in type systems research, but relatively rarely in static analysis work. Basically, if you tell a type systems researcher that you are changing the language to make it easier to analyze, they will say "oh, okay", but if you tell a static analysis researcher the same thing, they may express a degree of skepticism about your ability to update all Java code everywhere. Naturally, neither reaction is right or wrong: it's contextual. After all, there are billions of lines of Java code that won't change, and also the Java language does change from version to version.

However, as researchers we don't just pick up context from our problems, but also from our peers, and so the culture of type systems research tends to privilege new languages and the culture of static analysis research tends to privilege applicability to existing codebases, even though there is no fundamental technical reason for this linkage. After all, all static analyses exploit the structure of the language whenever they can in order to improve precision and scalability.

For example, in a typical intraprocedural dataflow analysis, we initialize all the local variables to the bottom element of the dataflow lattice. This is a sound thing to do, because the language is constrained so that local variables are freshly allocated on each procedure invocation. If, on the other hand, we were doing dataflow analysis of assembly code, different subroutines would all simultaneously affect the dataflow values assigned to each register.
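
To make that concrete, here is a generic forward dataflow fixpoint in OCaml; the control-flow graph representation and names are mine. Every node's fact starts at bottom, and in particular the entry node (which has no predecessors) gets bottom as its input -- which is sound for locals precisely because they are fresh on every invocation.

  (* A generic forward dataflow fixpoint over a control-flow graph.
     All facts start at bottom; the entry node has no predecessors, so
     its input stays at bottom, which is sound for fresh locals. The
     CFG representation is illustrative. *)

  type 'fact cfg = {
    nodes    : int list;
    preds    : int -> int list;
    transfer : int -> 'fact -> 'fact;   (* per-node transfer function *)
  }

  let fixpoint ~(bot : 'fact) ~(join : 'fact -> 'fact -> 'fact)
               ~(equal : 'fact -> 'fact -> bool) (g : 'fact cfg) =
    let facts = Hashtbl.create 16 in
    List.iter (fun n -> Hashtbl.replace facts n bot) g.nodes;
    let changed = ref true in
    while !changed do
      changed := false;
      List.iter
        (fun n ->
           let input =
             List.fold_left (fun acc p -> join acc (Hashtbl.find facts p))
               bot (g.preds n)
           in
           let out = g.transfer n input in
           if not (equal out (Hashtbl.find facts n)) then begin
             Hashtbl.replace facts n out;
             changed := true
           end)
        g.nodes
    done;
    facts

In the assembly-code setting, the analogue of the entry node's input would have to account for whatever values the caller left in the registers, so bottom would no longer be a sound starting point.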

As a consequence, one thing I'm looking forward to is finding out what static analysis researchers will do when they start taking a serious look at Rust. Its type system enforces really strong invariants about aliasing and data races, which ought to open the door to some really interesting analyses.

The Relationship Between the Two Views

As you can see, grammaticality constraints and program analysis are not opposed, and can overlap in effect. Furthermore, this means people writing papers on type systems are often blurry about which perspective they are taking -- in one section of a paper they may talk about types-as-properties, and in another they may talk about types-as-grammar. This ambiguity is technically harmless, but can be confusing to a reader coming to type systems from other areas within formal methods.

The reason this ambiguity is mostly harmless is that, for well-designed languages, there will be a nice relationship between these two views of what types are. John C. Reynolds explained this in his lovely paper The Meaning of Types: From Intrinsic to Extrinsic Semantics. (In his language, "intrinsic types" are types-as-grammar, and "extrinsic types" are types-as-property.)

Basically, he noted that when you give a denotational semantics under a types-as-grammar view, you interpret typing derivations, and when you give a denotational semantics in the types-as-property view, you interpret program terms. For types-as-grammar to make sense, its semantics must be coherent: two typing derivations of the same program must have the same semantics. (Otherwise the semantics of a program can be ambiguous.) But once coherence holds, it becomes feasible to prove the equivalence between the intrinsic and extrinsic semantics, because there is a tight connection between the syntax of the language and the typing derivations.

Noam Zeilberger and Paul-André Melliès wrote a POPL 2015 paper, Functors are Type Refinement Systems, where they work out the general semantic structure underpinning Reynolds' work, in terms of the relationship between intersection types (or program logic) and a base language.

Compositionality

Another cultural effect is the importance type systems research places on compositionality. It is absolutely the case both that there is no a priori reason a type system has to be compositional, and that most type systems researchers -- including myself! -- will be very doubtful of any type system which is not.

To make this concrete, here's an example of a type system which is pragmatic, reasonable, and yet which someone like me would still find distasteful.

Consider a Java-like language whose type system explicitly marks nullability. We will say that a method argument x : C means that x is a possibly-null value of type C, and an argument x : C! means that x is definitely of class C, with no null value. Now suppose that in this language, we decide that null tests should refine the type of a variable:

                       // x : C outside the if-block
  if (x != null) { 
    doSomething(x);    // x : C! inside the if-block
  } 

Basically, we checked that x was not null, so we promoted its type from C to C!. This is a reasonable thing to do, which industrial languages like Kotlin support, and yet you will be hard-pressed to find many formal type theories supporting features like this.

The reason is that the culture of type theory says that type systems should satisfy a substitution principle: a variable stands in for a term, and so substituting a term of the same type as the variable should result in a typeable term. In this case, suppose that e is a very complicated expression of type C. Then if we do a substitution of e for x, we get:

  if (e != null) { 
    doSomething(e);    // e : C! -- but how?
  } 

Now we somehow have to ensure that e has the type C! inside the scope of the conditional. For variables this requires updating the context, but for arbitrary terms it will be much harder, if it is even possible at all!

The culture of static analysis says that since it's easy for variables, we should handle that case, whereas the culture of type theory says that anything you do for variables should work for general terms. Again, which view is right or wrong is contextual: it's genuinely useful for Kotlin programmers to have dataflow-based null-tracking, but at the same time it's much easier to reason about programs if substitution works.

The reason type theorists have this culture is that type theory has two parents. On one side are Scott and Strachey, who consciously and explicitly based denotational semantics upon the compositionality principle; on the other side is the structural proof theory originated by Gerhard Gentzen, which was invented precisely to prove cut-eliminability (which is basically just a substitution principle). As a result, the type-theoretic toolbox is full of tools which both assume and maintain compositionality. So giving that up (or moving past it, depending on how you want to load the wording emotionally) is something which requires very careful judgement.

For example, the Typed Racket language extends the dynamically-typed Racket language with types, and to support interoperation with idiomatic Racket code (which uses control-flow sensitive type tests), they do flow-sensitive typing. At ESOP 2011, Arjun Guha, Claudiu Saftoiu, and Shriram Krishnamurthi wrote a paper about a particularly elegant way of combining flow analysis and typechecking, Typing Local Control and State Using Flow Analysis.

Conclusion

Finally, one might wonder whether it is worth it to try developing a unified perspective on type theory and static analysis. It's a lot of work to learn any one of these things, and (for example) David Schmidt's papers, while deeply rewarding, are not for the faint of heart. So maybe we should all specialize and not worry about what members of cousin tribes are doing?

All I can say is that I think I benefited tremendously from the fact that (largely by accident) I ended up learning separation logic, denotational semantics and proof theory as part of my PhD. There actually is a wholeness or consilience to the variety of techniques that the semantics community has devised, and insights apply across domains in surprising ways.

My recent PLDI paper with Jeremy Yallop is a nice example of this. Very little in this paper is hard, but what makes it easy is that we felt free to switch perspectives as we needed.

Because I had learned to like automata theory from reading papers on model checking, I ended up reading about Brüggemann-Klein and Wood's characterization of the deterministic regular languages. Then, because I share the aesthetics of denotational semantics, I realized that they had invented a superior, compositional characterization of the FOLLOW sets used in LL(1) parsing. Then, because I knew about the importance of substitution from type theory, this led me to switch from the BNF presentation of grammars to mu-regular expressions. This let us offer the traditional parser combinator API, while under the hood having a type system which identified just the LL(1)-parseable grammars. Moreover, the type inference algorithm we use in the implementation is basically a dataflow analysis. And that set things up in a form to which multistage programming techniques were applicable, letting us derive parser combinators that run faster than yacc.
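
For readers who haven't met them, this is roughly the shape of a traditional parser combinator API, sketched as an OCaml signature of my own devising; it is not the interface of our library, just the general flavour, with a fix combinator standing in for the mu of mu-regular expressions.

  (* The rough shape of a traditional parser-combinator API, as a sketch.
     This is not the interface of the PLDI paper's library. *)
  module type PARSER = sig
    type 'a t
    val eps : 'a -> 'a t                    (* accept the empty string *)
    val chr : char -> char t                (* accept a single character *)
    val seq : 'a t -> 'b t -> ('a * 'b) t   (* sequencing *)
    val alt : 'a t -> 'a t -> 'a t          (* alternation *)
    val map : ('a -> 'b) -> 'a t -> 'b t
    val fix : ('a t -> 'a t) -> 'a t        (* recursion, in mu-regular-expression style *)
  end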

But beyond the direct applications like that, it is just good to be able to go to a conference and be interested by what other people are doing. Everyone attending has developed deep expertise in some very esoteric subjects, and learning enough to learn from them is valuable. Research communities are long-running conversations, and being able to hold up your end of the conversation makes the community work better.

Monday, August 12, 2019

Michael Arntzenius on the Future of Coding Podcast

My PhD student Michael Arntzenius is on the Future of Coding podcast, talking about the things he has learned while developing Datafun. In the course of a PhD, people tend to work on extremely refractory problems, which is a useful discipline for developing and sharpening your worldview. Since Michael is a particularly thoughtful person, he's developed a particularly thoughtful view of computing, and so the interview is well worth reading or listening to.

This episode explores the intersections between various flavors of math and programming, and the ways in which they can be mixed, matched, and combined. Michael Arntzenius, "rntz" for short, is a PhD student at the University of Birmingham building a programming language that combines some of the best features of logic, relational, and functional programming. The goal of the project is "to find a sweet spot of something that is more powerful than Datalog, but still constrained enough that we can apply existing optimizations to it and imitate what has been done in the database community and the Datalog community." The challenge is combining the key part of Datalog (simple relational computations, without worrying too much about underlying representations) with that of functional programming (being able to abstract out repeated patterns) in a way that is reasonably performant.

This is a wide-ranging conversation including: Lisp macros, FRP, Eve, miniKanren, decidability, computability, higher-order logics and their correspondence to higher-order types, lattices, partial orders, avoiding logical paradoxes by disallowing negation (or requiring monotonicity) in self-reference (or recursion), modal logic, CRDTs (which are semi-lattices), and the place for formalism in programming. This was a great opportunity for me to brush up on (or learn for the first time) some useful mathematical and type theory keywords. Hope you get a lot out of it as well -- enjoy!

I'd like to thank Steve Krouse for interviewing Michael. This interview series is a really nice idea -- a lot of the intuition or worldview underpinning research programs never makes it into a paper, but can be brought out nicely in a conversation.

Also, if you prefer reading to listening, a transcript of the podcast is available at the link.