Wednesday, July 27, 2011

A new lambda calculus for bunched implications

I am going to put off syntactic extensions for a bit, and instead talk about an entirely different bit of proof theory: a new lambda calculus for the logic of bunched implications that I have recently been working on. $\newcommand{\lolli}{\multimap} \newcommand{\tensor}{\otimes} \newcommand{\bnfalt}{\;\;|\;\;} \newcommand{\fun}[2]{\lambda #1.\;#2}$

First, the logic of bunched implications (henceforth BI) is the substructural logic associated with things like separation logic. Now, linear logic basically works by replacing the single context of intuitionistic logic with two contexts, one for unrestricted hypotheses (which behaves the same as the intuitionistic one, and access to which is controlled by a modality $!A$), and one which is linear --- the rules of contraction and weakening are no longer allowed. The intuitionistic connectives are then encoded, so that $A \to B \equiv !A \lolli B$. BI also puts contraction and weakening under control, but does not do so by simply creating two zones. Instead, contexts now become trees, which lets BI simply add the substructural connectives $A \tensor B$ and $A \lolli B$ to intuitionistic logic. Here's what things look like in the implicational fragment:
$$
\begin{array}{lcl}
A & ::= & P \bnfalt A \lolli B \bnfalt A \to B \\
\Gamma & ::= & A \bnfalt \cdot_a \bnfalt \Gamma; \Gamma \bnfalt \cdot_m \bnfalt \Gamma, \Gamma \\
\end{array}
$$
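
Concretely, a bunch is just a tree of hypotheses. As a hypothetical OCaml rendering of the grammar above (the type and constructor names are my own):

type ty = P of string | Lolli of ty * ty | Arrow of ty * ty

(* Bunches: contexts are trees built from two concatenation
   operators, each with its own unit. *)
type bunch =
  | Hyp of ty                 (* a single hypothesis A *)
  | UnitA                     (* ·a, the unit of ";"   *)
  | Semi of bunch * bunch     (* Γ; Γ'                 *)
  | UnitM                     (* ·m, the unit of ","   *)
  | Comma of bunch * bunch    (* Γ, Γ'                 *)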

Note that contexts are trees, since there are two context concatenation operators $\Gamma;\Gamma'$ and $\Gamma,\Gamma'$ (with units $\cdot_a$ and $\cdot_m$) which can be freely nested. They don't distribute over each other in any way, but they do satisfy the commutative monoid properties. The natural deduction rules look like this (implicitly assuming the commutative monoid properties):

$$
\begin{array}{ll}
\frac{}{A \vdash A} &
\\
\frac{\Gamma; A \vdash B}
{\Gamma \vdash A \to B}
&
\frac{\Gamma \vdash A \to B \qquad \Gamma' \vdash A}
{\Gamma;\Gamma' \vdash B}
\\
\frac{\Gamma, A \vdash B}
{\Gamma \vdash A \lolli B}
&
\frac{\Gamma \vdash A \lolli B \qquad \Gamma' \vdash A}
{\Gamma,\Gamma' \vdash B}
\\
\frac{\Gamma(\Delta) \vdash A}
{\Gamma(\Delta;\Delta') \vdash A}
&
\frac{\Gamma(\Delta;\Delta) \vdash A}
{\Gamma(\Delta) \vdash A}
\end{array}
$$

Note that the substructural implication adds hypotheses with a comma, and the intuitionistic implication uses a semicolon. I've given weakening and contraction explicitly in the final two rules, so that we can reuse the hypothesis rule.
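
To see the difference the two implications make, observe that $A \to (B \to A)$ is derivable: starting from $A \vdash A$, weakening gives $A; B \vdash A$, and two uses of $\to$-introduction (silently using $\cdot_a; A \equiv A$) finish the derivation. The substructural analogue $A \lolli (B \lolli A)$ has no derivation, because weakening is only available for the semicolon.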

Adding lambda-terms to this calculus is pretty straightforward, too:
$$
\begin{array}{ll}
\frac{}{x:A \vdash x:A} &
\\
\frac{\Gamma; x:A \vdash e:B}
{\Gamma \vdash \fun{x}{e} : A \to B}
&
\frac{\Gamma \vdash e : A \to B \qquad \Gamma' \vdash e' : A}
{\Gamma;\Gamma' \vdash e\;e' : B}
\\
\frac{\Gamma, x:A \vdash e : B}
{\Gamma \vdash \hat{\lambda}x.\;e : A \lolli B}
&
\frac{\Gamma \vdash e : A \lolli B \qquad \Gamma' \vdash e' : A}
{\Gamma,\Gamma' \vdash e\;e' : B}
\\
\frac{\Gamma(\Delta) \vdash e : A}
{\Gamma(\Delta;\Delta') \vdash e : A}
&
\frac{\Gamma(\Delta;\Delta') \vdash e : A \qquad \Delta \equiv \rho(\Delta')}
{\Gamma(\Delta) \vdash \rho(e) : A}
\end{array}
$$

Unfortunately, typechecking these terms is a knotty little problem. The reason is basically that we want lambda terms to tell us what the derivation should be, without requiring us to do much search. But contraction can rename a host of variables at once, which means that there is a lot of search involved in typechecking these terms. (In fact, I personally don't know how to do it, though I expect that it's decidable, so somebody probably does.) What would be really nice is a calculus which is saturated with respect to weakening, so that the computational content of the weakening lemma is just the identity on derivation trees, and for which the computational content of contraction is an easy renaming of a single obvious variable.

Applying the usual PL methodology of redefining the problem to one that can be solved, we can give an alternate type theory for BI in the following way. First, we'll define nested contexts so that they make the alternation of spatial and intuitionistic parts explicit:
$$
\begin{array}{lcl}
\Gamma & ::= & \cdot \bnfalt \Gamma; x:A \bnfalt r[\Delta] \\
\Delta & ::= & \cdot \bnfalt \Delta, x:A \bnfalt r[\Gamma] \\
& & \\
e & ::= & x \bnfalt \fun{x}{e} \bnfalt \hat{\lambda}x.\;e \bnfalt e\;e' \bnfalt r[e] \bnfalt \rho r.\;e \\
\end{array}
$$
The key idea is to make the alternation of the spatial and intuitionistic parts syntactic, and then to name each level shift with a variable $r$. So the $x$ are ordinary variables, and the $r$ are variables naming nested contexts. Then we'll add syntax $r[e]$ and $\rho r.\;e$ to explicitly annotate the level shifts:
$$
\begin{array}{ll}
\frac{}{\Gamma; x:A \vdash x:A} &
\frac{}{\cdot,x:A \vdash x:A}
\\
\frac{\Gamma; x:A \vdash e:B}
{\Gamma \vdash \fun{x}{e} : A \to B}
&
\frac{\Gamma \vdash e : A \to B \qquad \Gamma \vdash e' : A}
{\Gamma \vdash e\;e' : B}
\\
\frac{\Delta, x:A \vdash e : B}
{\Delta \vdash \hat{\lambda}x.\;e : A \lolli B}
&
\frac{\Delta \vdash e : A \lolli B \qquad \Delta' \vdash e' : A}
{\Delta,\Delta' \vdash e\;e' : B}
\\
\frac{r[\Gamma] \vdash e : A}
{\Gamma \vdash \rho r.\;e : A}
&
\frac{r[\Delta] \vdash e : A}
{\Delta \vdash \rho r.\;e : A}
\\
\frac{\Gamma \vdash e : A}
{\cdot, r[\Gamma] \vdash r[e] : A}
&
\frac{\Delta \vdash e : A}
{\Gamma; r[\Delta] \vdash r[e] : A}
\end{array}
$$
It's fairly straightforward to prove that contraction and weakening are admissible. Doing a contraction is now pretty easy, since you just have to rename some occurrences of $r$ to two different variables, while otherwise leaving the branching contexts the same.
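
To make the new contexts concrete, here is a hypothetical OCaml rendering (again with names of my own invention); the mutual recursion is what enforces the strict alternation that free bunches lack:

type var = string      (* ordinary variables x *)
type rvar = string     (* region variables r naming level shifts *)
type ty = P of string | Lolli of ty * ty | Arrow of ty * ty

(* Additive contexts Γ and multiplicative contexts Δ, mutually
   recursive: each sort can only nest the other, under a named r[-]. *)
type gctx =
  | GNil                        (* ·      *)
  | GHyp of gctx * var * ty     (* Γ; x:A *)
  | GRegion of rvar * dctx      (* r[Δ]   *)
and dctx =
  | DNil                        (* ·      *)
  | DHyp of dctx * var * ty     (* Δ, x:A *)
  | DRegion of rvar * gctx      (* r[Γ]   *)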

It's obvious that you can take any derivation in this calculus and erase the $r$-variables to get a well-typed term of the $\alpha\lambda$-calculus. It's only a bit more work to go the other way: given an $\alpha\lambda$-derivation, you prove that for any context in the new style which erases to the $\alpha\lambda$-context, you can find a derivation in the new calculus proving the same thing. Then there's an obvious proof that you can indeed find a new-style context which erases properly.

One interesting feature of this calculus is that the $r$-variables resemble regions in region calculi. The connection is not entirely clear to me, since my variables don't show up in the types. This reminds me a little of the contextual modal type theory of Nanevski, Pfenning and Pientka. It reminds me even more of Bob Atkey's PhD thesis on generalizations of BI to allow arbitrary graph-structured sharing. But all of these connections are still speculative.

There is one remaining feature of these rules which is still non-algorithmic: as in linear logic, the context splitting in the spatial rules with multiple premises (for example, the spatial application rule) just assumes that you know how to divide the context into two pieces. I think the standard state-passing trick should still work, and it may even be nicer than the additives of linear logic. But I haven't worked out the details yet.
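
For reference, here is what that trick looks like for an ordinary linear lambda calculus. This is a hypothetical sketch, not BI, and it assumes variable names are unique: the checker takes the whole context in and hands back the unused hypotheses, so no splitting is ever guessed. Whether this discipline adapts cleanly to bunches is exactly the detail left open above.

type ty = Base of string | LolliT of ty * ty
type exp = Var of string | Lam of string * ty * exp | App of exp * exp
type ctx = (string * ty) list

exception TypeError of string

(* infer ctx e = (A, leftover): the hypotheses of ctx that e did not
   consume are threaded back out, instead of guessing a context split. *)
let rec infer (ctx : ctx) (e : exp) : ty * ctx =
  match e with
  | Var x ->
      (match List.assoc_opt x ctx with
       | Some a -> (a, List.remove_assoc x ctx)  (* consume x *)
       | None -> raise (TypeError ("unbound or already used: " ^ x)))
  | Lam (x, a, body) ->
      let (b, rest) = infer ((x, a) :: ctx) body in
      if List.mem_assoc x rest
      then raise (TypeError ("unused linear variable: " ^ x))
      else (LolliT (a, b), rest)
  | App (e1, e2) ->
      (match infer ctx e1 with
       | (LolliT (a, b), rest1) ->
           let (a', rest2) = infer rest1 e2 in  (* thread the leftovers *)
           if a' = a then (b, rest2)
           else raise (TypeError "argument type mismatch")
       | _ -> raise (TypeError "applying a non-function"))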

Tuesday, July 26, 2011

Functional Programming as a Particular Use of Modules

Bob Harper sometimes gets grumbly when people say that ML is an impure language, even though he knows exactly what they mean (and, indeed, agrees with it), because this way of phrasing things does not pay data abstraction its full due.

Data abstraction lets us write verifiably pure programs in ML. By "verifiably pure", I mean that we can use the type system to guarantee that our functions are pure and total, even though ML's native function space contains all sorts of crazy effects involving higher-order state, control, IO, and concurrency. (Indeed, ML functions can even spend your money to rent servers: now that's a side effect!) Given that the ML function space contains these astonishing prodigies and freaks of nature, how can we control them? The answer is data abstraction: we can define new types of functions which only contain well-behaved functions, and ensure through type abstraction that the only ways to form elements of these new types preserve well-behavedness.

Indeed, we will not just define a type of pure functions, but give an interface containing all the type constructors of the guarded recursion calculus I have described in the last few posts. The basic idea is to give an ML module signature containing:

  • One ML type constructor for each type constructor of the guarded recursion calculus.
  • A type constructor for the hom-sets of the categorical interpretation of this calculus.
  • One function in the interface for each categorical combinator, such as identity, composition, and each of the natural transformations corresponding to the universal properties of the functors interpreting the type constructors.

That's a mouthful, but it is much easier to understand by looking at the following (slightly pretty-printed) OCaml module signature:

module type GUARDED =
sig
type one
type α × β
type α ⇒ β
type α stream
type num
type •α

type (α, β) hom

val id : (α, α) hom
val compose : (α, β) hom -> (β, γ) hom -> (α, γ) hom

val one : (α, one) hom

val fst : (α × β, α) hom
val snd : (α × β, β) hom
val pair : (α, β) hom -> (α, γ) hom -> (α, β × γ) hom

val curry : (α × β, γ) hom -> (α, β ⇒ γ) hom
val eval : ((α ⇒ β) × α, β) hom

val head : (α stream, α) hom
val tail : (α stream, •(α stream)) hom
val cons : (α × •(α stream), α stream) hom

val zero : (one,num) hom
val succ : (num, num) hom
val plus : (num × num, num) hom
val prod : (num × num, num) hom

val delay : (α, •α) hom
val next : (α,β) hom -> (•α, •β) hom
val zip : (•α × •β, •(α × β)) hom
val unzip : (•(α × β), •α × •β) hom

val fix : (•α ⇒ α, α) hom

val run : (one, num stream) hom -> (unit -> int)
end

As you can see, we introduce abstract types corresponding to each of our calculus's type constructors. So α × β and α ⇒ β are not ML pairs and functions, but rather the products and functions of our calculus. This is really the key idea -- since ML functions have too much stuff in them, we'll define a new type of pure functions. I replaced the "natural" numbers of the previous posts with a stream type, corresponding to our LICS 2011 paper, since they are really a kind of lazy conatural, and not the true inductive type of natural numbers. The calculus guarantees definitions are productive, but it's kind of weird in ML to see something called nat which isn't inductive. So I used streams instead, which are supposed to yield an unbounded number of elements. (For true naturals, you'll have to wait for my post on Mendler iteration, which is a delightful application of parametricity.)

The run function takes a num stream, and gives you back an imperative function that successively enumerates the elements of the stream. This is the basic trick for making streams fit nicely into an event loop a la FRP.

However, we can implement these functions and types using traditional ML types:

module Guarded : GUARDED =
struct
type one = unit
type α × β = α * β
type α ⇒ β = α -> β
type •α = unit -> α
type α stream = Stream of α * •(α stream)
type num = int

type (α, β) hom = α -> β

let id x = x
let compose f g a = g (f a)

let one a = ()

let fst (a,b) = a
let snd (a,b) = b
let pair f g a = (f a, g a)

let curry f a = fun b -> f(a,b)
let eval (f,a) = f a

let head (Stream(a, as')) = a
let tail (Stream(a, as')) = as'
let cons (a,as') = Stream(a,as')

let zero () = 0
let succ n = n + 1
let plus (n,m) = n + m
let prod (n,m) = n * m

let delay v = fun () -> v
let next f a' = fun () -> f(a'())
let zip (a',b') = fun () -> (a'(), b'())
let unzip ab' = (fun () -> fst(ab'())), (fun () -> snd(ab'()))

let rec fix f = f (fun () -> fix f)

let run h =
  let r = ref (h()) in
  fun () ->
    let Stream(x, xs') = !r in
    let () = r := xs'() in
    x
end

Here, we're basically just implementing the operations of the guarded recursion calculus using the facilities offered by ML. So our guarded functions are just plain old ML functions, which happen to live at a type in which they cannot be used to misbehave!

This is the sense in which data abstraction lets us have our cake (effects!) and eat it too (purity!).

Note that when we want to turn a guarded lambda-term into an ML term, we can basically follow the categorical semantics to tell us what to write. Even though typechecking will catch all misuses of this DSL, actually using it is honestly not that much fun (unless you're a Forth programmer), since even small terms turn into horrendous combinator expressions -- but in another post I'll show how we can write a CamlP4 macro/mini-compiler to embed this language into OCaml. That macro will turn out to involve some nice proof theory, just as this ML implementation shows off how to use the denotational semantics in our ML programming.
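
To give a taste of what those combinator expressions look like, here is a hypothetical usage sketch in the same pretty-printed notation (the names zeros and next_elt are mine): the constant stream of zeros, built from the combinators above and then run.

open Guarded

(* zeros : (one, num stream) hom
   The inner composite has type (one × •(num stream), num stream):
   pair the head (zero) with the delayed tail (snd), cons them, and
   then curry and fix tie the guarded knot. *)
let zeros =
  compose (curry (compose (pair (compose fst zero) snd) cons)) fix

(* next_elt : unit -> int enumerates the stream: 0, 0, 0, ... *)
let next_elt = run zeros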

Wednesday, July 20, 2011

Termination of guarded recursion

I will now give a termination proof for the guarded recursion calculus I sketched in the last two posts. This post got delayed because I tried to oversimplify the proof, and that didn't work -- I actually had to go back and look at Nakano's original proof to figure out where I was going wrong. It turns out the proof is still quite simple, but there's one really devious subtlety in it.

First, we recall the types, syntax and values.

A ::= N | A → B | •A
e ::= z | s(e) | case(e, z → e₀, s(x) → e₁) | λx.e | e e
| •e | let •x = e in e | μx.e | x
v ::= z | s(e) | λx.e | •e

The typing rules are in an earlier post, and I give some big-step evaluation rules at the end of this post. Now, the question is, given · ⊢ e : A[n], can we show that e ↝ v?

To do this, we'll turn to our old friend, Mrs. step-indexed logical relation. This is a Kripke logical relation in which the Kripke worlds are given by the natural numbers and the accessibility relation is given by ≤. So, we define a family of predicates on closed values indexed by type, and by a Kripke world (i.e., a natural number n).

V(•A)ⁿ = {•e | ∀j<n. e ∈ E(A)ʲ}
V(A → B)ⁿ = {λx.e | ∀j≤n, v ∈ V(A)ʲ. [v/x]e ∈ E(B)ʲ}
V(N)ⁿ = {z} ∪ { s(e) | ∀j<n. e ∈ E(N)ʲ}

E(A)ʲ = {e | ∃v. e ↝ v and v ∈ V(A)ʲ}

This follows the usual pattern of logical relations, where we give a relation defining values mutually recursively with a relation defining well-behaved expressions (i.e., expressions are ok if they terminate and reduce to a value in the relation at that type).

Note that as we expect, j ≤ n implies V(A)ⁿ ⊆ V(A)ʲ. (The apparent antitonicity comes from the fact that if v is in the n-relation, it's also in the j relation.) One critical feature of this definition is that at n = 0, the condition on V(•A)⁰ always holds, because of the strict less-than in the definition.

The fun happens in the interpretation of contexts:

Ctxⁿ(· :: j) = {()}
Ctxⁿ(Γ, x:A[i] :: j) = {(γ,[v/x]) | γ ∈ Ctxⁿ(Γ :: j) and v ∈ V(A)ⁿ}
    when i ≤ j
Ctxⁿ(Γ, x:A[j+l] :: j) = {(γ,[e/x]) | γ ∈ Ctxⁿ(Γ :: j) and •ˡe ∈ V(•ˡA)ⁿ}
    when l > 0

The context interpretation has a strange dual nature. At times less than or equal to j, it is a familiar context of values. But at future times, it is a context of expressions. This is because the evaluation rules substitute values for variables at the current time, and expressions for variables at future times. We abuse the bullet value relation in the third clause, to more closely follow Nakano's proof.

On the one hand, the fixed point operator is μx.e at any type A, and unfolding this fixed point has to substitute an expression (the mu-term itself) for the variable x. So the fixed point rule tells us that there is something necessarily lazy going on.

On the other hand, the focusing behavior of this connective is quite bizarre. It is neither apparently positive nor apparently negative, since it distributes neither through all positives (e.g., •(A + B) ≄ •A + •B) nor through all negatives (e.g., •(A → B) ≄ •A → •B). (See Noam Zeilberger, The Logical Basis of Evaluation Order and Pattern Matching.)

I take this to mean that •A should probably be decomposed further. I have no present idea of how to do it, though.

Anyway, this is enough to let you prove the fundamental property of logical relations:

Theorem (Fundamental Property). If Γ ⊢ e : A[j], then for all n and γ ∈ Ctxⁿ(Γ :: j), we have that γ(e) ∈ E(A)ⁿ.
The proof of this is a straightforward induction on typing derivations, with one nested induction at the fixed point rule. I'll sketch that case of the proof here, assuming an empty context Γ just to reduce the notation:

Case: · ⊢ μx.e : A[j]
  By inversion: x:A[j+1] ⊢ e : A[j]
  By the outer induction, for all n, e': if •e' ∈ V(•A)ⁿ then [e'/x]e ∈ E(A)ⁿ
  By a nested induction on n, we show that [μx.e/x]e ∈ E(A)ⁿ
  Subcase n = 0:
    We know if •μx.e ∈ V(•A)⁰ then [μx.e/x]e ∈ E(A)⁰
    We know •μx.e ∈ V(•A)⁰ holds vacuously, since there is no j < 0
    Hence [μx.e/x]e ∈ E(A)⁰
    Hence μx.e ∈ E(A)⁰
  Subcase n = k+1:
    We know if •μx.e ∈ V(•A)ᵏ⁺¹ then [μx.e/x]e ∈ E(A)ᵏ⁺¹
    By the nested induction hypothesis, [μx.e/x]e ∈ E(A)ᵏ
    Hence μx.e ∈ E(A)ᵏ
    Hence •μx.e ∈ V(•A)ᵏ⁺¹ (using downward closure to cover all j ≤ k)
    So [μx.e/x]e ∈ E(A)ᵏ⁺¹
    Hence μx.e ∈ E(A)ᵏ⁺¹

Once we have the fundamental property of logical relations, the normalization theorem follows immediately.

Corollary (Termination). If · ⊢ e : A[n], then ∃v. e ↝ v.

Evaluation rules:

—————
v ↝ v

e₁ ↝ λx.e    e₂ ↝ v    [v/x]e ↝ v'
——————————————————————————————————
e₁ e₂ ↝ v'

e ↝ z    e₀ ↝ v
——————————————————————————————
case(e, z → e₀, s(x) → e₁) ↝ v

e ↝ s(e')    [e'/x]e₁ ↝ v
——————————————————————————————
case(e, z → e₀, s(x) → e₁) ↝ v

e₁ ↝ •e    [e/x]e₂ ↝ v
—————————————————————
let •x = e₁ in e₂ ↝ v

[μx.e/x]e ↝ v
—————————————
μx.e ↝ v
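
As a cross-check, here is a hypothetical OCaml transcription of these rules as a big-step evaluator; it is substitution-based, and subst is an assumed capture-avoiding substitution whose implementation I elide:

type exp =
  | Z
  | S of exp
  | Case of exp * exp * string * exp   (* case(e, z -> e0, s(x) -> e1) *)
  | Lam of string * exp
  | App of exp * exp
  | Delay of exp                       (* •e *)
  | LetDelay of string * exp * exp     (* let •x = e1 in e2 *)
  | Mu of string * exp                 (* μx.e *)
  | Var of string

(* Assumed helper: capture-avoiding substitution [e/x]body, elided here. *)
let subst (_e : exp) (_x : string) (_body : exp) : exp =
  failwith "capture-avoiding substitution elided"

(* Values are z, s(e), λx.e, and •e; note that the arguments of S and
   Delay are NOT evaluated, matching the lazy successor and delay. *)
let rec eval (e : exp) : exp =
  match e with
  | Z | S _ | Lam _ | Delay _ -> e
  | App (e1, e2) ->
      (match eval e1 with
       | Lam (x, body) -> eval (subst (eval e2) x body)
       | _ -> failwith "stuck")
  | Case (scrut, e0, x, e1) ->
      (match eval scrut with
       | Z -> eval e0
       | S e' -> eval (subst e' x e1)   (* e' is an unevaluated expression *)
       | _ -> failwith "stuck")
  | LetDelay (x, e1, e2) ->
      (match eval e1 with
       | Delay e' -> eval (subst e' x e2)
       | _ -> failwith "stuck")
  | Mu (x, body) -> eval (subst (Mu (x, body)) x body)
  | Var _ -> failwith "open term"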

Friday, July 15, 2011

Semantics of a weak delay modality

In my previous post, I sketched some typing rules for a guarded recursion calculus. Now I'll give its categorical semantics. So, suppose we have a Cartesian closed category with a delay functor •, equipped with the following functorial action and natural transformations:

•(f : A → B) : •A → •B
δ : A → •A
ι : •A × •B → •(A × B)
ι⁻¹ : •(A × B) → •A × •B
fix : (•A ⇒ A) → A

I intend that the next modality is a Cartesian functor (i.e., it distributes through products), and that we furthermore have a delay operator δ. We also have a fixed point operator for the language. However, I don't assume that the delay distributes through the exponential. Now, we can follow the usual pattern of categorical logic, and interpret contexts and types as objects, and terms as morphisms. So types are interpreted as follows:

〚A → B〛 = 〚A〛 ⇒ 〚B〛
〚•A〛 = •〚A〛

Note that we haven't done anything with time indices yet. They will start to appear with the interpretation of contexts, which is relativized by time:

〚·〛ⁿ = 1
〚Γ, x:A[j]〛ⁿ = 〚Γ〛ⁿ × •⁽ʲ⁻ⁿ⁾〚A〛 if j > n
〚Γ, x:A[j]〛ⁿ = 〚Γ〛ⁿ × 〚A〛 if j ≤ n

The idea is that we interpret a context at time n, and so all the indices are interpreted relative to that. If the index j is bigger than n, then we delay the hypothesis, and otherwise we don't. Then we can interpret morphisms at time n as 〚Γ ⊢ e : A[n]〛 ∈ 〚Γ〛ⁿ → 〚A〛, which we give below:

〚Γ ⊢ e : A[n]〛 ∈ 〚Γ〛ⁿ → 〚A〛

〚Γ ⊢ μx.e : A[n]〛 =
fix ○ λ(〚Γ, x:A[n+1] ⊢ e : A[n]〛)

〚Γ ⊢ x : A[n]〛 =
π(x) (where x:A[j] ∈ Γ)

〚Γ ⊢ λx.e : A → B[n]〛 =
λ(〚Γ, x:A[n] ⊢ e : B[n]〛)

〚Γ ⊢ e e' : B[n]〛 =
eval ○ ⟨〚Γ ⊢ e : A → B[n]〛, 〚Γ ⊢ e' : A[n]〛⟩

〚Γ ⊢ •e : •A[n]〛 =
•〚Γ ⊢ e : A[n+1]〛 ○ Nextⁿ(Γ)

〚Γ ⊢ let •x = e in e' : B[n]〛 =
〚Γ, x:A[n+1] ⊢ e' : B[n]〛 ○ ⟨id(Γ), 〚Γ ⊢ e : •A[n]〛⟩

Most of these rules are standard, with the exception of the introduction rule for delays. We interpret the body e at time n+1, and then use the functorial action to get an element of type •A. This means we need a map Nextⁿ(Γ) which takes a context at time n and produces a delayed one interpreted at time n+1.

Nextⁿ(Γ) ∈ 〚Γ〛ⁿ → •〚Γ〛ⁿ⁺¹
Nextⁿ(·) = δ₁
Nextⁿ(Γ, x:A[j]) = ι ○ (Nextⁿ(Γ) × id) if j > n
Nextⁿ(Γ, x:A[j]) = ι ○ (Nextⁿ(Γ) × δ) if j ≤ n
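
For instance, here is a worked example of the definition (my own example): take Γ = x:A[3], y:B[1] at time n = 1. Then 〚Γ〛¹ = (1 × •²〚A〛) × 〚B〛 and 〚Γ〛² = (1 × •〚A〛) × 〚B〛, and

Next¹(Γ) = ι ○ ((ι ○ (δ₁ × id)) × δ) : 〚Γ〛¹ → •〚Γ〛²

The x component needs no delay, since •²〚A〛 is already of the form •(•〚A〛); the y component gets delayed by δ; and ι pushes the delays outward.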


I think this is a pretty slick way of interpreting hybrid annotations, and a trick worth remembering for other type constructors that don't necessarily play nicely with implications.

Next up, if I find a proof small enough to blog, is a cut-elimination/normalization proof for this calculus.

Wednesday, July 13, 2011

Guarded recursion with a weaker-than-Nakano guard modality

We have a new draft paper up, on controlling the memory usage of FRP. I have to say that I really enjoy this line of work: there's a very strong interplay between theory and practice. For example, this paper --- which is chock full of type theory and denotational semantics --- is strongly motivated by questions that arose from thinking about how to make our implementation efficient.

In this post, I'm going to start spinning out some consequences of one minor point of our current draft, which we do not draw much attention to. (It's not really the point of the paper, and isn't really enough to be a paper on its own -- which makes it perfect for research blogging.) Namely, we have a new delay modality, which differs substantially from the one Nakano originally proposed in his LICS 2000 paper.

Recall that the delay modality $\bullet A$ is a type constructor for guarded recursion. I'll start by giving a small type theory for guarded recursion below.

$$
\begin{array}{lcl}
A & ::= & A \to B \;\;|\;\; \bullet A \;\;|\;\; \mathbb{N} \\
\Gamma & ::= & \cdot \;\;|\;\; \Gamma, x:A[i]
\end{array}
$$

As can be seen above, the types are delay types, functions, and natural numbers, and hypotheses in the context carry time indices $i$, as does the typing judgement $\Gamma \vdash e : A[i]$.

$$
\begin{array}{ll}
\frac{x:A[i] \in \Gamma \qquad i \leq j}
{\Gamma \vdash x : A[j]}
&
\frac{\begin{array}{ll}
\Gamma \vdash e:A[i] &
\Gamma,x:A[i] \vdash e' : B[i]
\end{array}}
{\Gamma \vdash \mathsf{let}\; x = e \;\mathsf{in}\; e' : B[i]}
\\ & \\
\frac{\Gamma, x:A[i] \vdash e : B[i]}
{\Gamma \vdash \lambda x.\;e : A \to B[i]}
&
\frac{\begin{array}{ll}
\Gamma \vdash e : A \to B[i] &
\Gamma \vdash e' : A[i]
\end{array}}
{\Gamma \vdash e \; e' : B[i]}
\\ & \\

\frac{\Gamma \vdash e : A[i+1]}
{\Gamma \vdash \bullet e : \bullet A[i]}
&
\frac{\begin{array}{ll}
\Gamma \vdash e : \bullet A[i] &
\Gamma, x:A[i+1] \vdash e' : B[i]
\end{array}}
{\Gamma \vdash \mathsf{let}\; \bullet x = e \;\mathsf{in}\; e' : B[i]}

\\ & \\

\frac{}
{\Gamma \vdash \mathsf{z} : \mathbb{N}[i]}

&
\frac{\Gamma \vdash e : \mathbb{N}[i+1]}
{\Gamma \vdash \mathsf{s}(e) : \mathbb{N}[i]}
\\ & \\

\frac{\Gamma, x:A[i+1] \vdash e : A[i]}
{\Gamma \vdash \mu x.\;e : A[i]}
&
\frac{\begin{array}{l}
\Gamma \vdash e : \mathbb{N}[i] \\
\Gamma \vdash e_1 : A[i] \\
\Gamma, x:\mathbb{N}[i+1] \vdash e_2 : A[i]
\end{array}}
{\Gamma \vdash \mathsf{case}(e, \mathsf{z} \to e_1, \mathsf{s}(x) \to e_2) : A[i]}
\end{array}
$$

The $i+1$ in the successor rule for natural numbers pairs with the rule for case statements, and this is what allows the fixed point rule to do its magic. Fixed points are only well-typed if the recursion variable occurs at a later time, and the case statement for numbers gives the variable one step later. So by typing we guarantee well-founded recursion!
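
For example (my example), the infinite lazy numeral typechecks: with $x:\mathbb{N}[i+1]$ in the context, the variable rule gives $x:\mathbb{N}[i+1] \vdash x : \mathbb{N}[i+1]$, the successor rule then gives $x:\mathbb{N}[i+1] \vdash \mathsf{s}(x) : \mathbb{N}[i]$, and the fixed point rule concludes $\cdot \vdash \mu x.\;\mathsf{s}(x) : \mathbb{N}[i]$. In contrast, the unguarded $\mu x.\;x$ is rejected, since the variable rule would require $i+1 \leq i$.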

The intro and elim rules for delays internalize increasing the time index, so that an intro for $\bullet A$ at time $i$ takes an expression of type $A$ at time $i+1$. We have a let-binding elimination form for delays, unlike our earlier LICS paper, which had a direct-style elimination. The new elimination is weaker than the old one, in that it cannot prove the isomorphism $\bullet(A \to B) \simeq \bullet A \to \bullet B$.
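
One direction does survive (my example): $\lambda f.\;\lambda a.\;\mathsf{let}\;\bullet g = f\;\mathsf{in}\;\mathsf{let}\;\bullet x = a\;\mathsf{in}\;\bullet(g\;x)$ inhabits $\bullet(A \to B) \to \bullet A \to \bullet B$, since both $g$ and $x$ are bound at time $i+1$, which is exactly the time at which the application under the $\bullet$ is typed. It is the converse that is no longer derivable.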

This is really quite great, since that isomorphism was really hard to implement! (It was maybe half the complexity of the logical relation.) The only question is whether we can still give a categorical semantics to this language. In fact, we can, and I'll describe it in my next post.