\[\newcommand{\bnfalt}{\;\;|\;\;} \newcommand{\To}{\Rightarrow} \newcommand{\From}{\Leftarrow} \newcommand{\rule}[2]{\frac{\displaystyle \begin{matrix} #1 \end{matrix}}{\displaystyle #2}} \newcommand{\check}[3]{{#1} \vdash {#2} \From {#3}} \newcommand{\synth}[3]{{#1} \vdash {#2} \To {#3}} \newcommand{\lam}[1]{\lambda {#1}.\,} \newcommand{\inj}[2]{\mathsf{in}_{#1}({#2})} \newcommand{\case}[5]{\mathsf{case}({#1}, \inj{1}{#2} \to {#3}, \inj{2}{#4} \to {#5})} \newcommand{\match}[2]{\mathsf{match}\;{#1}\;\mathsf{of}\;[ #2 ]} \newcommand{\arm}[2]{{#1} \to {#2}} \newcommand{\downshift}[1]{\downarrow{#1}} \newcommand{\upshift}[1]{\uparrow{#1}} \newcommand{\thunk}[1]{\left\{{#1}\right\}} \newcommand{\return}[1]{\mathsf{return}\;#1} \newcommand{\fun}[1]{\lambda\;#1} \newcommand{\checkn}[3]{{#1} \rhd {#2} \From {#3}} \newcommand{\checkp}[2]{{#1} \leadsto {#2}} \newcommand{\spine}[4]{{#1} \vdash {#2} : {#3} \gg {#4}} \newcommand{\tensor}{\otimes} \newcommand{\lolli}{\multimap} \newcommand{\letv}[2]{\mathsf{let}\;{#1}={#2}\;\mathsf{in}\;} \newcommand{\unit}{\langle\rangle} \newcommand{\letunit}[1]{\letv{\unit}{#1}} \newcommand{\pair}[2]{\left\langle{#1}, {#2}\right\rangle} \newcommand{\letpair}[3]{\letv{\pair{#1}{#2}}{#3}}\]
In the traditional recipe for bidirectional typechecking, introduction forms are checked, and the principal subterms of elimination forms are inferred. However, a while back Noam Zeilberger remarked to me that in multiplicative linear logic, bidirectional typechecking works just as well if you do it backwards. It is worth spelling out the details of this remark, hence this blog post.
First, let's give the types and grammar of multiplicative linear logic.
\[ \begin{array}{llcl} \mbox{Types} & A & ::= & 1 \bnfalt A \tensor B \bnfalt A \lolli B \\ \mbox{Terms} & e & ::= & x \bnfalt \lam{x}{e} \bnfalt e\,e' \\ & & | & \unit \bnfalt \letunit{e}{e'} \\ & & | & \pair{e}{e'} \bnfalt \letpair{x}{y}{e}{e'} \\ \mbox{Contexts} & \Gamma & ::= & \cdot \bnfalt \Gamma, x \From A \\ \end{array} \]
Our types are the unit type \(1\), the tensor product \(A \tensor B\), and the linear function space \(A \lolli B\). The unit and the pair have the expected introduction forms \(\unit\) and \(\pair{e}{e'}\), along with "pattern matching" style elimination forms. Functions are introduced with lambdas \(\lam{x}{e}\) and eliminated with applications \(e\,e'\), and of course we have variable references \(x\). Contexts are a bit unusual -- they pair together variables and their types as usual, but instead of treating a variable as a placeholder for a synthesizing term, we treat variables as placeholders for checking terms.
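To make the grammar concrete, here is one plausible rendering as Haskell datatypes -- a sketch with names of my own choosing, reused in the implementation sketch later in the post:

```haskell
-- One plausible Haskell rendering of the grammar (names are my own).
data Ty = One              -- the unit type 1
        | Tensor Ty Ty     -- A ⊗ B
        | Lolli Ty Ty      -- A ⊸ B
        deriving (Eq, Show)

data Tm = Var String                   -- x
        | Lam String Tm                -- λx. e
        | App Tm Tm                    -- e e'
        | Unit                         -- ⟨⟩
        | LetUnit Tm Tm                -- let ⟨⟩ = e in e'
        | Pair Tm Tm                   -- ⟨e, e'⟩
        | LetPair String String Tm Tm  -- let ⟨x, y⟩ = e in e'
        deriving Show
```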
Now, let's go through the typing rules. First, we give the introduction and elimination rules for the unit type.
\[ \begin{array}{ll} \rule{ } { \synth{\cdot}{\unit}{1} } & \rule{ \synth{\Delta}{e'}{A} & \check{\Gamma}{e}{1} } { \synth{\Gamma, \Delta}{\letunit{e}{e'}}{A} } \\[1em] \end{array} \]
The introduction rule says that in an empty context, the unit value \(\unit\) synthesizes the type \(1\). The pattern-matching style elimination \(\letunit{e}{e'}\) typechecks as follows. First, we infer a type \(A\) for the body \(e'\), and then we check that the scrutinee \(e\) has the unit type \(1\).
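As a concrete instance (using the variable rule given later in the post), the term \(\letunit{x}{\unit}\) synthesizes \(1\) in the context \(x \From 1\):

\[ \rule{ \rule{ }{ \synth{\cdot}{\unit}{1} } & \rule{ }{ \check{x \From 1}{x}{1} } } { \synth{x \From 1}{\letunit{x}{\unit}}{1} } \]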
This order is backwards from traditional bidirectional systems -- we synthesize a type for the continuation first, before checking the type of the data we are eliminating. In the case of units, this is a mere curiosity, but it gets more interesting with the tensor product type \(A \tensor B\).
\[ \begin{array}{ll} \rule{ \synth{\Gamma}{e}{A} & \synth{\Delta}{e'}{B} } { \synth{\Gamma, \Delta}{\pair{e}{e'}}{A \tensor B} } & \rule{ \synth{\Gamma, x \From A, y \From B}{e'}{C} & \check{\Delta}{e}{A \tensor B} } { \synth{\Gamma, \Delta}{\letpair{x}{y}{e}{e'}}{C} } \end{array} \]
Now, the synthesis for pairs remains intuitive. For a pair \(\pair{e}{e'}\), we first infer a type \(A\) for \(e\), and a type \(B\) for \(e'\), and then conclude that the pair has the type \(A \tensor B\). However, the typing of the pair elimination \(\letpair{x}{y}{e}{e'}\) is much wilder.
In this rule, we first infer a type \(C\) for the continuation \(e'\), and from typechecking \(e'\) we learn that \(x\) and \(y\) were required to have types \(A\) and \(B\) respectively. This gives us the data we need to check that \(e\) has the type \(A \tensor B\).
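For example, if the continuation is \(\pair{\letunit{x}{\unit}}{\letunit{y}{\unit}}\), synthesizing its type yields \(1 \tensor 1\) while demanding \(x \From 1\) and \(y \From 1\), which in turn tells us to check the scrutinee against \(1 \tensor 1\):

\[ \rule{ \synth{x \From 1, y \From 1}{\pair{\letunit{x}{\unit}}{\letunit{y}{\unit}}}{1 \tensor 1} & \check{z \From 1 \tensor 1}{z}{1 \tensor 1} } { \synth{z \From 1 \tensor 1}{\letpair{x}{y}{z}{\pair{\letunit{x}{\unit}}{\letunit{y}{\unit}}}}{1 \tensor 1} } \]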
The linear function type \(A \lolli B\) has a similar character:
\[ \begin{array}{ll} \rule{ \synth{\Gamma, x \From A}{e}{B} } { \synth{\Gamma}{\lam{x}{e}}{A \lolli B} } & \rule{ \synth{\Gamma}{e'}{A} & \check{\Delta}{e}{A \lolli B} } { \check{\Gamma, \Delta}{e\,e'}{B} } \end{array} \]
Here, to infer a type for the introduction form \(\lam{x}{e}\), we infer a type \(B\) for the body \(e\), and then look up what type \(A\) the parameter \(x\) was required to have for the body to typecheck. To check that an application \(e\,e'\) has the type \(B\), we infer a type \(A\) for the argument \(e'\), and then check that the function \(e\) has the function type \(A \lolli B\).
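For instance, the body of \(\lam{x}{\letunit{x}{\unit}}\) synthesizes \(1\) while demanding \(x \From 1\), so the lambda synthesizes \(1 \lolli 1\):

\[ \rule{ \synth{x \From 1}{\letunit{x}{\unit}}{1} } { \synth{\cdot}{\lam{x}{\letunit{x}{\unit}}}{1 \lolli 1} } \]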
Again, the checking/synthesis modes of these rules are precisely reversed from usual bidirectional type systems. We can see how this reversal plays out for variables below:
\[ \begin{array}{ll} \rule{ } { \check{x \From A}{x}{A} } & \rule{ \synth{\Gamma}{e}{A} & A = B} { \check{\Gamma}{e}{B} } \end{array} \]
Here, when we check that the variable \(x\) has the type \(A\), the context must consist of exactly the demand that \(x\) have the type \(A\). (However, the switch between checking and synthesis works the same as ever.)
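Putting all the rules together, here is a complete derivation showing that an application checks: we synthesize \(1\) for the argument, and then the mode-switch rule lets us synthesize a type for the function and compare it against the demanded one:

\[ \rule{ \rule{ }{ \synth{\cdot}{\unit}{1} } & \rule{ \synth{\cdot}{\lam{x}{\letunit{x}{\unit}}}{1 \lolli 1} & 1 \lolli 1 = 1 \lolli 1 } { \check{\cdot}{\lam{x}{\letunit{x}{\unit}}}{1 \lolli 1} } } { \check{\cdot}{(\lam{x}{\letunit{x}{\unit}})\,\unit}{1} } \]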
If you are used to regular bidirectional systems, the information flow in the variable rule (as well as in pattern matching for pairs and lambda-abstraction for functions) is a bit unexpected. We are used to having a context tell us what type each variable has. However, in this case we are not doing that! Instead, we use the context to record the types that the rest of the program requires the variables to have.
This is still a "well-moded" program in the sense of logic programming. However, the moding is a bit more exotic now -- within a context, the variables are inputs, but their types are outputs. This is a bit fancier than the mode systems that usual logic programming languages have, but people have studied mode systems which can support this (such as Uday Reddy's A Typed Foundation for Directional Logic Programming).
As far as the metatheory of this system goes, I don't know very much about it. Substitution works fine -- you can easily prove a theorem of the form:
Theorem (Substitution) If \(\check{\Delta}{e}{A}\), then
- If \(\check{\Gamma, x \From A, \Theta}{e'}{C}\) then \(\check{\Gamma, \Delta, \Theta}{[e/x]e'}{C}\).
- If \(\synth{\Gamma, x \From A, \Theta}{e'}{C}\) then \(\synth{\Gamma, \Delta, \Theta}{[e/x]e'}{C}\).
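As a tiny instance: from \(\check{\cdot}{\unit}{1}\) (derivable via the mode-switch rule) and \(\synth{x \From 1}{\letunit{x}{\unit}}{1}\), the theorem yields \(\synth{\cdot}{\letunit{\unit}{\unit}}{1}\), which indeed holds directly.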
However, I don't presently know a good characterization of which terms are typable under this discipline. In the standard bidirectional presentation, for example, the annotation-free typable terms are precisely the \(\beta\)-normal terms; in the reverse bidirectional system, that is not the case.
Two papers that seem closely related to this system are:
- Adam Chlipala, Leaf Petersen, and Robert Harper's TLDI 2005 paper, Strict Bidirectional Type Checking, and
- Ningning Xie and Bruno C. d. S. Oliveira's ESOP 2018 paper, Let Arguments Go First.
Adam and company's paper includes the traditional synthesizing bidirectional hypotheses, as well as checking hypotheses very similar to the ones in this post, but inspired by relevance logic rather than linear logic. The basic idea is that if a hypothesis is relevant, then it is okay to let checking determine its type, since we are guaranteed that the variable will appear in some checking context (which will tell us what type it should have). The same idea applies here, since linearity necessarily implies relevance.
Ningning and Bruno's paper has an application rule that looks exactly like the one in this post -- argument types are synthesized, which permits inferring the type of a function head in a checking context. However, their system is focused on inferring polymorphic types, which makes the precise relationship a bit unclear to me.
The implementation of reverse bidirectionality is just as easy as traditional bidirectional systems, but I will leave that for my next post.
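In the meantime, here is a minimal sketch of one way such an implementation might go. The representation choices, names, and the use of Data.Map for output contexts are my own assumptions, not the implementation promised above; the key point is that the context of demanded variable types is an output of both judgments.

```haskell
-- A minimal sketch, assuming this representation of syntax (the Ty and
-- Tm declarations repeat the earlier sketch so this block stands alone).
import qualified Data.Map as M

data Ty = One | Tensor Ty Ty | Lolli Ty Ty deriving (Eq, Show)

data Tm = Var String | Lam String Tm | App Tm Tm
        | Unit | LetUnit Tm Tm
        | Pair Tm Tm | LetPair String String Tm Tm
        deriving Show

-- Following the moding above, the context is an *output*: it maps each
-- variable a term uses to the type the term demands for it.
type Ctx = M.Map String Ty

-- Disjoint union of demanded contexts; failure enforces linearity
-- (no variable may be used by both subterms).
merge :: Ctx -> Ctx -> Maybe Ctx
merge g d | M.null (M.intersection g d) = Just (M.union g d)
          | otherwise                   = Nothing

-- Synthesis: the term determines both its type and its demands.
synth :: Tm -> Maybe (Ctx, Ty)
synth Unit           = Just (M.empty, One)
synth (LetUnit e e') = do
  (d, a) <- synth e'                 -- infer the continuation first
  g      <- check e One              -- then check the scrutinee
  ctx    <- merge g d
  Just (ctx, a)
synth (Pair e e')    = do
  (g, a) <- synth e
  (d, b) <- synth e'
  ctx    <- merge g d
  Just (ctx, Tensor a b)
synth (LetPair x y e e') = do        -- assumes x and y are distinct
  (d, c) <- synth e'                 -- the continuation tells us...
  a      <- M.lookup x d             -- ...what it demands of x
  b      <- M.lookup y d             -- ...and of y
  g      <- check e (Tensor a b)
  ctx    <- merge g (M.delete x (M.delete y d))
  Just (ctx, c)
synth (Lam x e)      = do
  (g, b) <- synth e
  a      <- M.lookup x g             -- the body fixes the argument type
  Just (M.delete x g, Lolli a b)
synth _              = Nothing       -- variables and applications check

-- Checking: the type is an input, the demanded context an output.
check :: Tm -> Ty -> Maybe Ctx
check (Var x) a    = Just (M.singleton x a)  -- record the demand on x
check (App e e') b = do
  (g, a) <- synth e'                 -- infer the argument first
  d      <- check e (Lolli a b)      -- then check the function head
  merge g d
check e b          = do              -- mode switch: synthesize and compare
  (g, a) <- synth e
  if a == b then Just g else Nothing
```

For example, `check (App (Lam "x" (LetUnit (Var "x") Unit)) Unit) One` evaluates to `Just M.empty`, matching the derivation earlier in the post, while `synth (Lam "x" (Var "x"))` is `Nothing`, since variables do not synthesize.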
Hey. I've been trying to implement this system in Haskell, to gain a better intuition for how it works. My attempted implementation does not like the term (\x . x) () very much, and after scratching my head for a while, looking for where I went astray, I tried a derivation by hand. I'm starting to think the term (\x . x) () is not typeable in this system, at any type, including 1.
To check that (\x . x) () : 1, we infer () : 1, and then check (\x . x) : 1 -> 1. But the introduction rule for functions uses inference, not checking, mode, so we mode switch and infer a type for (\x . x).
To infer a type for (\x . x), the body x must be inferred. But it cannot be, since the rule for variables uses checking, not inference, mode, and there is no way to switch in that direction.
Any idea what I'm missing?
Hi Luke, you're not missing anything -- this term is not typable. I mentioned that I didn't have a good characterization of what kinds of terms are typable in this system, and this was apparently more of a true statement than I thought. :)
Anyway, I've modified the post to reflect this.
Hi Neel, nice to see you looking at this stuff! I'm not sure if this answers your question, but my original interest in dual bidirectional typing (developed a tiny bit more in section 5 of this paper) was a little bit different, as a way of computing the principal type scheme of a (linear) lambda term. By principal type scheme for a term t, I mean roughly a type involving free type variables such that any other type for t can be obtained as a substitution instance.
Under this view, there should be another rule taking you from checking to synthesis while introducing a type variable:
(chk-inf): t infers α if t checks against α, where α is a fresh type variable.
On the other hand, the rule that you gave for moving from synthesis to checking is interpreted as adding a constraint to the principal typing scheme:
(inf-chk): t checks against B, provided that it infers A and we impose the constraint "A = B" (or at least "A ≤ B").
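In the notation of the post, these two rules might be rendered roughly as follows (where the constraint \(A \leq B\) is recorded rather than immediately checked):

\[ \begin{array}{ll} \rule{ \check{\Gamma}{t}{\alpha} & \alpha\;\text{fresh} } { \synth{\Gamma}{t}{\alpha} } & \rule{ \synth{\Gamma}{t}{A} & A \leq B } { \check{\Gamma}{t}{B} } \end{array} \]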
If you interpret the rules this way and include both chk-inf and inf-chk, then any beta-normal linear term will directly infer its principal type scheme (e.g., λx.λy.λz.x(y z) will infer the type scheme (β ⊸ γ) ⊸ (α ⊸ β) ⊸ (α ⊸ γ)), while a linear term that is not beta-normal will infer a principal type scheme with some constraints (e.g., the term (λx.x)(λy.y) will infer type α with the constraint that there exist β,γ such that γ ⊸ γ = (β ⊸ β) ⊸ α).
(If you want to see a Haskell implementation, there is one among a big mess of code over here -- see the routines "check" and "infer".)
And actually, if you keep thinking along these lines, then you'll see that dual bidirectional typing can also work for non-linear terms! It's just that now you need something like intersection types in order to combine different typing assumptions synthesized about the same variables (e.g., in the application rule). I remember looking at Stephen Dolan's thesis and believing that he was doing something like this, where the intersections were realized by a lattice operation on simple types.