In the last few years, Ross Paterson and Conor McBride have begun advocating the use of what they call applicative functors (strong lax monoidal endofunctors) to structure programs. Applicatives are more general than monads, and consequently support a stronger equational theory. (See also the discussion by Edward Z. Yang and Tomas Petricek, which prompted me to write this note.)
Suppose that $C$ is our semantic category, and that it comes equipped with products $A \times B$. Then a lax monoidal endofunctor $(F, \phi, i)$ is an endofunctor $F : C \to C$, together with a pair of natural transformations $\phi_{A,B} : F(A) \times F(B) \to F(A \times B)$ and $i : 1 \to F(1)$.
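In Haskell terms, this interface looks roughly like the following sketch (the names Monoidal and ZipL are my own, for illustration; this is not McBride and Paterson's class verbatim):

    {-# LANGUAGE DeriveFunctor #-}

    -- 'unit' plays the role of i : 1 -> F(1), and 'pair' plays the
    -- role of phi_{A,B} : F(A) x F(B) -> F(A x B).
    class Functor f => Monoidal f where
      unit :: f ()
      pair :: f a -> f b -> f (a, b)

    -- For instance, lists carry this structure "zip-wise":
    newtype ZipL a = ZipL [a] deriving Functor

    instance Monoidal ZipL where
      unit                     = ZipL (repeat ())
      pair (ZipL xs) (ZipL ys) = ZipL (zip xs ys)

This presentation is interdefinable with the usual pure/&lt;*&gt; one, but it wears the monoidal structure on its sleeve.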
These maps satisfy some coherence conditions, which basically ensure that it doesn't matter if you re-associate a product before or after applying the monoidal functor. Below, I give the interesting one, for the product. (This is also an experiment to use SVG in my blog, so let me know if it blows up for you.)
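(In case the SVG does not render for you, the commuting square says, writing $\alpha$ for the associator $(A \times B) \times C \to A \times (B \times C)$ and an object's name for its identity map: $$(\phi_{A,B} \times F(C)); \phi_{A \times B, C}; F(\alpha) = \alpha; (F(A) \times \phi_{B,C}); \phi_{A, B \times C}$$ where both sides are maps $(F(A) \times F(B)) \times F(C) \to F(A \times (B \times C))$.)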
The basic way to read this diagram is that it's telling you that it doesn't matter how you do the computations to collect the values. You can either use $\phi_{A,B}$ followed by $\phi_{A \times B, C}$, or you can use $\phi_{B, C}$ followed by $\phi_{A, B \times C}$, but either way you get the same computation.
For a functor to be strong, we also need a strength $\sigma_{A,B} : F(A) \times B \to F(A \times B)$. This is the usual presentation of the condition, since it makes the semantics easier to do. However, for purposes of intuition, it's a little bit mysterious.
Strength gets less mysterious when presented as the (equivalent) condition that you have a family of maps $\mathrm{fmap} : (A \Rightarrow B) \to (F(A) \Rightarrow F(B))$, where $A \Rightarrow B$ is the exponential. In programming language terms, having a strength is equivalent to saying that the functor $F$ has a map operation definable in the language.
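Concretely, continuing the hypothetical Monoidal sketch from above: fmap together with unit and pair is enough to derive the familiar applicative operations.

    -- embed a value, using fmap (the strength) plus the unit
    pureM :: Monoidal f => a -> f a
    pureM a = fmap (const a) unit

    -- pair up the function and argument computations, then apply
    apM :: Monoidal f => f (a -> b) -> f a -> f b
    apM fg fa = fmap (\(g, a) -> g a) (pair fg fa)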
With these preliminaries aside, we can see the type theory for applicative functors below: $$ \begin{array}{ll} \rule{\judge{\cdot; \Delta, \Gamma}{e}{A}} {\judge{\Delta; \Gamma}{\val{e}}{F(A)}} & \rule{\judge{\Delta; \Gamma}{e}{F(A)} \qquad \judge{\Delta, u:A; \Gamma}{e'}{C}} {\judge{\Delta;\Gamma}{\letv{\val{u}}{e}{e'}}{C}} \\ \rule{\judge{\Delta; \Gamma, x:A}{e}{B}} {\judge{\Delta; \Gamma}{\fun{x}{e}}{A \to B}} & \rule{\judge{\Delta; \Gamma}{e}{A \to B} \qquad \judge{\Delta; \Gamma}{e'}{A}} {\judge{\Delta; \Gamma}{e\;e'}{B}} \\ \rule{x:A \in \Gamma} {\judge{\Delta; \Gamma}{x}{A}} & \rule{\judge{\Delta;\Gamma}{e}{A} \qquad \judge{\Delta; \Gamma, x:A}{e'}{C}} {\judge{\Delta; \Gamma}{\letv{x}{e}{e'}}{C}} \end{array} $$
As you can see, the main difference is that we introduce a new context $\Delta$ of applicative variables. Applicative terms have a let-style eliminator, which binds the result of an applicative computation to a variable in the applicative context $\Delta$. However, the ordinary variable rule cannot look at variables from $\Delta$ -- it can only look at variables from $\Gamma$. We only gain access to $\Delta$ when we introduce a term $\val{e}$ of type $F(A)$: in the premise of that rule, all the variables in $\Delta$ move into the ordinary variable zone.
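If you squint, this is close to Haskell's ApplicativeDo notation (a rough correspondence, not an exact claim): binding with &lt;- plays the role of let val, and the final pure plays the role of val.

    {-# LANGUAGE ApplicativeDo #-}

    -- let val x = e1 in let val y = e2 in val (f x y)
    example :: Applicative g => (a -> b -> c) -> g a -> g b -> g c
    example f e1 e2 = do
      x <- e1
      y <- e2
      pure (f x y)
    -- which GHC desugars to: f <$> e1 <*> e2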
The beta- and eta-rules for this calculus are pretty much what you'd expect: $$ \begin{array}{lcl} \letv{\val{x}}{\val{e}}{e'} & =_\beta & [e/x]e' \\ \letv{\val{x}}{e}{\val{x}} & =_\eta & e \\ \end{array} $$
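Read through the correspondence above, these are (I believe) the familiar applicative identity and homomorphism laws. Here they are as checkable Haskell properties, specialized to Maybe for concreteness:

    -- eta: let val x = e in val x  =  e
    identityLaw :: Eq a => Maybe a -> Bool
    identityLaw v = (pure id <*> v) == v

    -- beta: let val x = val e in e'  =  [e/x]e'
    homomorphismLaw :: Eq b => (a -> b) -> a -> Bool
    homomorphismLaw f x = (pure f <*> Just x) == Just (f x)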
I'm not totally sure what the commuting conversions are. I think the easiest thing would be to prove a normalization theorem to work those out. I might do that in a future post.
Now, we can also give a denotational interpretation. For readability, we suppress the maps re-associating and permuting variables, as well as anything to do with units, and we write an object's name for its identity map. Semicolon is diagrammatic composition, as usual. $$ \begin{array}{l} \interp{\judge{\Delta;\Gamma}{x}{A}} = \\ \qquad \pi_2; \pi_x \\ \interp{\judge{\Delta;\Gamma}{\letv{x}{e}{e'}}{C}} = \\ \qquad \left\langle F(\Delta) \times \Gamma, \interp{\judge{\Delta;\Gamma}{e}{A}}\right\rangle ; \interp{\judge{\Delta; \Gamma, x:A}{e'}{C}} \\[1em] \interp{\judge{\Delta; \Gamma}{\fun{x}{e}}{A \to B}} = \\ \qquad \lambda(\interp{\judge{\Delta;\Gamma,x:A}{e}{B}}) \\ \interp{\judge{\Delta; \Gamma}{e\;e'}{B}} = \\ \qquad \left\langle \interp{\judge{\Delta;\Gamma}{e}{A \to B}}, \interp{\judge{\Delta;\Gamma}{e'}{A}}\right\rangle ; \mathrm{eval} \\[1em] \interp{\judge{\Delta;\Gamma}{\val{e}}{F(A)}} = \\ \qquad \sigma_{\Delta,\Gamma}; F(\interp{\judge{\cdot; \Delta,\Gamma}{e}{A}}) \\ \interp{\judge{\Delta;\Gamma}{\letv{\val{x}}{e}{e'}}{C}} = \\ \qquad \left\langle \interp{\judge{\Delta;\Gamma}{e}{F(A)}}, F(\Delta) \times \Gamma\right\rangle ; (\phi_{A, \Delta} \times \Gamma); \interp{\judge{x:A, \Delta; \Gamma}{e'}{C}} \end{array} $$
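To make the two F-specific clauses concrete, here is a Haskell transliteration (a sketch, using the hypothetical Monoidal class from earlier, with contexts modeled as pairs and the strength implemented via fmap):

    -- [[ val e ]]  =  sigma_{Delta,Gamma} ; F([[e]])
    interpVal :: Monoidal f => ((d, g) -> a) -> (f d, g) -> f a
    interpVal e (fd, g) = fmap (\d -> e (d, g)) fd

    -- [[ let val x = e in e' ]]  =  <[[e]], id> ; (phi x Gamma) ; [[e']]
    interpLetVal :: Monoidal f
                 => ((f d, g) -> f a)      -- [[e]]
                 -> ((f (a, d), g) -> c)   -- [[e']]
                 -> ((f d, g) -> c)
    interpLetVal e e' (fd, g) = e' (pair (e (fd, g)) fd, g)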
This is obviously a sound interpretation (modulo typos in my clauses). To close the loop on this post, we would need to construct a cartesian closed category with a strong lax monoidal endofunctor from the term model of the syntax. Again -- future post!
Obviously, some examples would also be nice! The coolest one I know is the application to linearly-typed programming, where we can use applicatives to explain exactly why substructural types make reasoning about imperative programs a lot easier. But that's a post for another day!
Let me begin with a minor point: how dare you write "Delta; Gamma"? Delta is immediately next to Gamma in the Greek alphabet, and writing them in reverse order is just *perverse*.
Would it be possible to give a presentation centered not on a "let" binding, but on a lambda-inspired abstraction form? I'm thinking of introducing into your logic an applicative arrow (A -F-> B), along with elimination and introduction terms (a @F b) and (lamF x. a), such that (let val u = a in b) would desugar into ((lamF u. b) @F a).
I'm under the impression that the eta-reduction rule for such a system would be more natural -- the current "let val" form has this annoying likeness with sum types, in that the elimination rule must come up with an arbitrary result type C. But it may also very well not work at all (your system has the feeling of the modal logics that are usually formulated in let-style, so there must be a reason).
Finally, I miss a more detailed comparison with monads: "applicative functors are more general" is well-known, but could you explicitly point out the difference between a lax monoidal endofunctor and an endofunctor that is a monoid? How would you change your type theory to capture monads instead?
Your suggestion is likely to work type-theoretically, and is close to the syntax that McBride and Paterson proposed.
However, it's not clear to me that the two systems are equivalent, at least in the simply-typed case. I can see how to send A -F→ B to F(A) → B and then use the encoding you suggest. But I don't see how to show the other direction of the equivalence in the simply-typed case -- in particular, how to translate F(b), where b is a base type, into a type of the "applicative arrow" language. However, if you have System F-style polymorphism, then I think it's equivalently powerful -- F(A) can pretty clearly be encoded as ∀α. (A -F→ α) → α.
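In Haskell notation, that encoding looks like this (a sketch, under the assumption that A -F→ B is read as F(A) → B):

    {-# LANGUAGE RankNTypes #-}

    -- the applicative arrow A -F-> B, read as F(A) -> B
    newtype ArrF f a b = ArrF { runArrF :: f a -> b }

    -- F(A) encoded as forall alpha. (A -F-> alpha) -> alpha
    newtype EncF f a = EncF { runEncF :: forall r. ArrF f a r -> r }

    encode :: f a -> EncF f a
    encode fa = EncF (\(ArrF k) -> k fa)

    decode :: EncF f a -> f a
    decode (EncF g) = g (ArrF id)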
More on your other questions later...
Excellent post! Thanks.
Question: Do you happen to know of any generalization that does not require the functor to be an endofunctor?
Thanks