I'm attending ICFP 2016 in Nara, Japan (near Osaka) this year. In previous years, I've really enjoyed Edward Yang's conference talk summaries, and decided that this year, I would take notes about the talks I attended and put them online myself.

Today, I attended the HOPE 2016 workshop on higher-order programming with effects, and took the following notes. These are all rough notes, taken in haste and based on a highly-jetlagged listener's poor listening comprehension. So assume all errors and misunderstandings are my fault!

## Keynote (Leo White)

Due to jetlag, I overslept and missed it! Alas.

## Effects

*Effects via capabilities*, Fengyin Liu (speaker)

The idea behind this work is to introduce explicit capability values that enable or disable effects. So you might have a capability `c : IO`, which must be passed to all IO operations. If you don't have a capability to pass to an IO operation, you can't invoke it.

```
print : IO -> string -> unit
```

Naively, though, there is an issue arising from the ability of closures to capture effect capabilities.

So two function spaces are introduced: `A => B`, which is the usual ML-style function space, and `A -> B`, a function space whose inhabitants cannot capture capabilities or ML-style functions.
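
The capability idea can be sketched in Rust (my own illustration, not the paper's Scala/Dotty system; all names are mine). The capability is a zero-sized token that IO primitives demand, so a function that is never handed one is pure by construction:

```rust
// Illustrative sketch of capability-passing IO (my own Rust rendering,
// not the paper's system). `Io` is a zero-sized token that cannot be
// forged outside this module, because its field is private.
mod cap {
    pub struct Io(());

    // Entry point that grants the capability.
    pub fn with_io<R>(f: impl FnOnce(&Io) -> R) -> R {
        f(&Io(()))
    }

    // Every effectful primitive demands the token.
    pub fn print(_io: &Io, s: &str) {
        print!("{s}");
    }
}

// Needs the capability, so its type advertises that it may do IO.
fn greet(io: &cap::Io) {
    cap::print(io, "hello\n");
}

// Takes no capability, so it cannot perform IO.
fn pure_add(x: i32, y: i32) -> i32 {
    x + y
}

fn main() {
    cap::with_io(|io| greet(io));
    assert_eq!(pure_add(1, 2), 3);
}
```

Of course an ordinary Rust closure can still capture `io` from its environment, which is exactly the leak that the paper's second function space `A -> B` is designed to rule out.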

Interestingly, the full substitution lemma doesn't hold: you have to prove a value substitution theorem instead. This suggests to me that there might be a modal variant of this system, Davies-Pfenning style.

Liu suggested some type equivalences like `A -> (B => C) == A -> B -> C`, because the outer `->` prevents the capture of capabilities. They can't prove the soundness of this by induction, of course, so I suspect they'll eventually want a logical relation to establish soundness. (A logical relation for Dotty would be exciting -- maybe chat with Amal Ahmed & Nada Amin about this?)

Polymorphism introduces some subtleties (ie, you might not know whether a type variable will be instantiated to a pure type), and they also worry about verbosity in practice (which might hinder adoption by Scala programmers).

*A Logical Account of a Type-and-Effect System*, Kasper Svendsen (speaker)

They have a parallel ML-style calculus with reads, writes and CAS. Locations are approximated in the type system with regions.

The region context is split into a public and private part, to control which locations you can get interference on.

Function types are annotated with the public and private regions they use, and the read/write effects they have.

New regions can be introduced with dynamic extent. Private regions can be shared temporarily during parallel composition.

The type system makes region variables implicit -- this info is only in the typing derivation. So all terms look like ML terms.

The goal is to prove the correctness of program transformations which rely on the semantics of effects. For example, a sequential composition can be equated to a parallel composition

```
e; e' == e || e'
```

when there is no interference between these two terms.

He explains the intuition with a binary logical relation, formulated within the Iris program logic (paper at POPL 2015). Here, the logical relation is a per-type family of simulation relations, encoded as Hoare triples in a *unary* program logic.

Here, they use a really neat feature of Iris. Since Iris can be instantiated with any monoidal resource, they use simulation relations themselves as resources! Then they express parallel composition via a separating conjunction of simulations, one for e and one for e'. The fact that e and e' don't interfere means that e''s steps are stuttering steps for e, and vice-versa. They even let simulation relations share state, when the state is read-only.

*Simple Dependent Polymorphic IO Effects*, Amin Timany (speaker)

They want a logical relation for a type-and-effect system for IO. The IO effects are like regexps, but have effect variables, channels, and quantification over what is written -- e.g., cat would have an effect indicating that it writes to b what it reads from a. They want the relation to show (say) that a buffered implementation of cat refines the unbuffered version, and also to validate the parallel composition transformation for non-interfering operations.

The language of effects is *very* rich, so I don't have a good feel for what it can and can't express. E.g., the reasoning for the buffering optimization was *entirely* type-based, based on the effect subtyping order!

## Verification

*Concurrent Data Structures Linked in Time*, German Delbianco (speaker)

This talk was about Hoare-style verification of highly concurrent data structures, such as "concurrent snapshots".

E.g., you have a shared array `a`, a function `write(i,x)` which makes an atomic update to the array, and a function `scan()` which returns a snapshot of the state of the array at a particular instant.
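
To fix the interface the talk discussed, here is a purely sequential Rust sketch (names are mine). The whole difficulty of Jayanti's algorithm is making `scan` behave like this atomic copy under concurrent `write`s; none of that is shown here:

```rust
// Sequential sketch of the write/scan interface only. In the talk's
// setting, write and scan run concurrently from many threads.
struct Snapshot {
    a: Vec<i64>,
}

impl Snapshot {
    fn new(n: usize) -> Self {
        Snapshot { a: vec![0; n] }
    }

    // Atomic update of one slot.
    fn write(&mut self, i: usize, x: i64) {
        self.a[i] = x;
    }

    // Returns the state of the whole array at one instant.
    fn scan(&self) -> Vec<i64> {
        self.a.clone()
    }
}

fn main() {
    let mut s = Snapshot::new(3);
    s.write(1, 7);
    assert_eq!(s.scan(), vec![0, 7, 0]);
}
```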

He then presented Jayanti's algorithm for this, and gave a nice intuition for why the right notion of correctness for the snapshots is linearizability. What makes the algorithm tricky is that its linearization points are not fixed statically -- they can vary with the runtime inputs.

Then he introduced the notion of events in the ghost state, which are organized for each thread into subject and environment events, and which are each connected to an instant of time. The variation in where linearization points occur could then arise via modifications of when each event happened -- which is an update on the "linked list" of time, which can be modelled as ghost state!

This has all been verified in FCSL, a Coq-based program logic that Ilya Sergey and Aleks Nanevski are some of the main forces behind.

Ralf Jung asked if they had verified any clients, and German said they had used it for a few small clients, but nothing big. Ilya said that people just assumed snapshots were "obviously" good, but no one had any really good/complex examples of them. He said that they hoped to figure some out now that they had a declarative spec of them.

*Growing a Proof Assistant*, William Bowman (speaker)

WB began with an example of some notations for an informal type soundness proof. He pointed out that some of the notations (like BNF) are formal, and others (like defining let-binding in terms of lambdas) are informal.

Then he pointed out that there are tools like Ott which compile BNF to Coq/Agda/LaTeX, and that most proof assistants (like Coq) have APIs to define notations, tactics, and plugins. Then he said that this wasn't enough, and that he wanted to build a proof assistant focused on extensibility.

The core type theory was TT (basically MLTT with a countable universe hierarchy), and the extensibility framework was Racket (to reuse as much of the macro machinery as possible).

Then he began explaining how cur could be used to implement pattern matching. Things like recursive functions required compile-time state to communicate information from outside the local syntax tree.

Then he moved on to defining tactics, and gave a small language for this. He needed to support fully elaborating terms (so that user tactics could be defined solely on kernel syntax), and also a way to run cur code at expansion-time to support Mtac style tactics.

This made me wonder about phase separation, but he immediately set this aside, along with several other issues (such as how to preserve properties like parametricity while supporting reflection on syntax).

Someone asked about how to combine extensions. WB replied that macros support this because of the architecture of Racket macros, which essentially support open recursion to permit multiple macros to run at once. He also remarked that this creates classical problems from OO like diamond inheritance.

## Compilation

*Algebraic Effects for Functional Programming*, Daan Leijen (speaker)

Daan Leijen gave a talk about adding algebraic effects to Koka. Koka had effect types based on monads, but (a) he wanted a better story for composition, and (b) he wanted to be able to handle exotic control operators (like yield, exceptions, and async/await) in a uniform way.

He has a full compiler for effect handlers in Koka, with row-polymorphic effect typing. The compilation scheme uses a type-directed CPS translation to implement effect handlers targeting JS, .NET, JVM (which are rather direct-style). Then he talked a bit about the semantics and syntax of Koka to set the stage.

Next, he talked about examples, starting with exceptions. He mentioned that row types in Koka permit duplicated labels. I wonder how Koka picks which one?

I also wondered about using algebraic effects for the "mock object" pattern from OO testing.

Then he showed iteration, and remarked that handlers permit the handler to control whether iteration stops or not. Then he showed asynchronous callbacks (Node-style), and showed that you could pass the handler's resume continuation as an argument.
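
The point that the handler decides whether iteration continues can be mimicked in Rust (my own rendering, not Koka; the "handler" here is just a closure returning `ControlFlow`, with none of Koka's typed-effect machinery):

```rust
use std::ops::ControlFlow;

// The iterator performs a "yield" by calling the handler; the handler
// decides whether to resume (Continue) or stop the iteration (Break).
fn iterate(xs: &[i32], mut handler: impl FnMut(i32) -> ControlFlow<()>) {
    for &x in xs {
        if let ControlFlow::Break(()) = handler(x) {
            return; // handler chose not to resume
        }
    }
}

// A handler that sums elements until it sees a negative one.
fn sum_until_negative(xs: &[i32]) -> i32 {
    let mut total = 0;
    iterate(xs, |x| {
        if x < 0 {
            ControlFlow::Break(())
        } else {
            total += x;
            ControlFlow::Continue(())
        }
    });
    total
}

fn main() {
    assert_eq!(sum_until_negative(&[1, 2, -1, 10]), 3);
}
```

A real effect handler also gets a first-class resume continuation (as in the async example above), which this closure encoding cannot express.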

Then he talked about the compilation to managed code, where first-class continuations don't exist. This uses a type-directed style to avoid CPS when it isn't needed. The main issue is effect polymorphism, which he resolves by compiling polymorphic functions both ways, and then using a worker-wrapper transformation to pick the right representation.

SPJ asked if he depended upon a defined order of evaluation. Daan said this was about effects for Haskell, and handed off to Sam Lindley, who said that the situation in Haskell was the same as in Koka -- you have to say what the evaluation order is (?).

*Administrative Normal Form, Continued*, Luke Maurer (speaker)

CPS is good for optimization, but due to the costs, not everyone uses it. Flanagan pointed out that some CPS optimizations work in direct style, but not all of them do.

E.g., case-of-case often creates values that flow to known call sites, which shouldn't be allocated. This work extends ANF to support these, with significant effect on heap allocation rates.

Case-of-case can be solved by pushing in the context, and while there are engineering issues of code duplication, it can often be helpful. Many compilers do this.
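
The transformation can be sketched in Rust terms (my own illustration; the talk's setting is GHC Core). The outer match scrutinizes a value the inner expression just built, so pushing the outer context into each branch lets the constructors meet known match arms and cancel:

```rust
// Before: an Option is constructed and then immediately scrutinized.
fn before(p: bool, x: i64, y: i64) -> i64 {
    let opt = if p { Some(x) } else { None };
    match opt {
        Some(v) => v,
        None => y,
    }
}

// After pushing the outer match into each branch of the `if`, the
// Some/None constructors disappear: no intermediate value remains.
fn after(p: bool, x: i64, y: i64) -> i64 {
    if p { x } else { y }
}

fn main() {
    assert_eq!(before(true, 1, 2), after(true, 1, 2));
    assert_eq!(before(false, 1, 2), after(false, 1, 2));
}
```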

However, it doesn't work for recursive functions. In CPS this transformation happens "almost for free". That is, you can do the static argument transformation, and then inline the static continuation to get the beta-reductions to fire. This is "contification".

So how does contification work in ANF? He notes that the static argument transformation only worked because the program was a tail call. So they tag functions which are always tail called (they are called "labels"). Then they add a new rewrite which moves outer cases into a label.

He gave the operational semantics for join points, which are pretty sensible.

Then he pointed out that it helps with list fusion. An unfold uses a step function of type `state -> option (a x state)`, which lets you eliminate intermediate lists, but introduces useless Just/Nothing cruft. The new label/case-of-case machinery handles this and reduces many of them, and on some benchmarks kills *all* of them. This is in a new branch of GHC.
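
The step-function interface looks like this in Rust (my names; the talk's setting is GHC/Haskell, so this only mirrors the types, not the fusion itself):

```rust
// An unfold driven by a step function of type S -> Option<(A, S)>,
// the shape mentioned in the talk. Each step either produces an
// element plus a new state, or signals the end of the list.
fn unfold<S, A>(mut step: impl FnMut(S) -> Option<(A, S)>, mut s: S) -> Vec<A> {
    let mut out = Vec::new();
    while let Some((x, s2)) = step(s) {
        out.push(x);
        s = s2;
    }
    out
}

// E.g., counting up to n: the state is the current index.
fn upto(n: i32) -> Vec<i32> {
    unfold(|i| if i < n { Some((i, i + 1)) } else { None }, 0)
}

fn main() {
    assert_eq!(upto(3), vec![0, 1, 2]);
}
```

The Some/None values built by `step` and immediately consumed by the loop are exactly the "Just/Nothing cruft" that the label/case-of-case machinery eliminates in GHC.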

I asked how they would know if they had all the new constructs they needed. SPJ said he didn't know any CPS optimizations that could not be handled now, but that he didn't have a proof. Someone asked about one benchmark which allocated less but ran more slowly. I didn't understand the answer, though. :( Sam Lindley asked about the connection to SSA and phi-functions. Luke said that labels are much like phi-functions in SSA.

## Semantics

*Functional models of full ground, and general, reference cells*, Ohad Kammar (speaker)

How can we understand reference cells? They have multiple axes:

- Locality -- freshness of newly allocated cells (i.e., `ref 0 == ref 0` is always false)
- Ground vs non-ground -- can we point to computations which can modify the heap (`ref (int -> int)`)?
- Full storage -- semantics can depend on the shape of the heap (`ref (ref bool)`)
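
For intuition, the three axes can be exhibited in Rust clothing (my analogy, not the talk's ML types; `Rc<RefCell<_>>` stands in for an ML `ref`):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Locality: two freshly allocated cells are never the same cell.
    let a = Rc::new(RefCell::new(0));
    let b = Rc::new(RefCell::new(0));
    assert!(!Rc::ptr_eq(&a, &b));

    // Non-ground storage: a cell holding a function value.
    let f: RefCell<fn(i32) -> i32> = RefCell::new(|x| x + 1);
    assert_eq!((*f.borrow())(1), 2);

    // Full storage: a cell that points to another cell, so the
    // meaning of the outer cell depends on the shape of the heap.
    let inner = Rc::new(RefCell::new(true));
    let outer = Rc::new(RefCell::new(Rc::clone(&inner)));
    *outer.borrow().borrow_mut() = false;
    assert_eq!(*inner.borrow(), false);
}
```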

This talk is about full ground state. The approach is to give a denotational semantics for full ground state, yielding an equational theory for programs. He wants to use it to analyze the value restriction, and relax it if possible. He also wants to validate runST.

He found a bug in his semantics last night, so he will discuss the general techniques used rather than a particular model.

Levy introduced a trick to handle circular ground state, using *names* for reference types (à la class names in OO).

Kripke worlds are a finite set of locations, with a name for each location. A world morphism is a name-preserving injection. Worlds have monoidal structure. The language lives in the functor category over worlds, and so can interpret CCCs, and references at a world w are interpreted as the set of locations of that type.

To model computations, we need a monad.

For the equational theory, we have the usual dereference/assignment/allocation equations. We don't have a complete axiomatization of state, so we have to just pick some; they picked a nontrivial one, the equation for runST.

Then he gave the wrong monad, which used ends to model stores, but I don't have a good feel for them. It validated masking, but not dereference. Then he gave a second monad which had too little structure, in that it modelled deref but not masking.

Then he gave a small type system for runST. It interpreted each region label in runST as a world (as above).

Lars Birkedal asked which equations he was interested in, and Ohad named some of them. Daan Leijen said he had done an operational proof of runST and didn't like it, and encouraged Ohad to keep going.

## Comments

I had wanted to propose a similar scheme to "Effects via capabilities" for Rust at some point before 1.0.

(I'm not sure how familiar you are with Rust's type system?)

The basic idea was the same: an `IO` capability (zero-sized type) that needs to be passed into any function that wants to do IO. The next question was likewise "what about closures capturing it", here the idea was to exploit the different closure types Rust already has (or had, by that point): `IO` would be affine (non-`Copy`), so `&Fn` closures (with a shared, immutable environment) could not move it out from their environment, so all `&Fn` and `fn` (functions without an environment) could be known to be pure except if they took an `IO` capability explicitly as an argument. Then one option would be to just require threading the `IO` capability through linearly everywhere (like the IO monad, or Clean), but another idea I had was to allow cloning/copying the `IO` capability through (unique) `&mut` references (but not `&`) -- so if an IO-doing function wants to call another IO-doing function, it can make a copy of the IO cap to call it with, instead of having to expect getting it back in the return value, while still preserving the purity of `fn`s and `&Fn`s. In fact you'd probably want this to happen implicitly, at least in the "obvious" cases, to not regress (too much) on ergonomics.
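
A minimal Rust sketch of that design, as I understand it (all names hypothetical, and the implicit-copying refinement is not shown):

```rust
// `Io` is an affine (non-Copy) zero-sized capability. A function that
// does I/O must receive it; one with no `Io` parameter anywhere in its
// signature is pure by construction.
struct Io(());

// Taking `&mut Io` gives the callee unique access to the capability
// without the caller having to thread it back through return values.
fn log(_io: &mut Io, msg: &str) {
    println!("{msg}");
}

// No `Io` parameter: cannot do I/O.
fn pure_len(s: &str) -> usize {
    s.len()
}

fn main() {
    let mut io = Io(()); // in the real design, granted at program entry
    log(&mut io, "hello");
    assert_eq!(pure_len("abc"), 3);
}
```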

This also would've allowed neat things like, for instance, the `File` type could store an `IO` capability internally (but not allow retrieving it externally) and only use it for accessing that particular file, so if an otherwise-pure function has an `&mut File` parameter, you can infer that the function may do I/O on that file, but not any other I/O. So the `File` itself would be a kind of capability, in effect.

(Another interesting thought that's occurred to me since then. My original idea was that the `Cell` and `RefCell` types, which allow mutation through `&` references (that is, shared mutability -- analogous to `IORef` in Haskell) would simply have to require an `IO` capability for their methods (much like how `IORef`'s methods are in the `IO` monad), again to preserve the purity of `fn` and `&Fn`. But this is slightly unsatisfying: if a function creates a `Cell` or `RefCell` locally, on the stack, it and its callees can mutate it all they want, and the caller will be none the wiser (this is like the `ST` monad). So requiring the global `IO` capability to mutate a local `Cell` seems too coarse-grained. Maybe the `IO` capability could have a lifetime parameter? And then `IO<'static>` would correspond to the earlier "plain" `IO` type, being the capability for *globally* observable shared mutable state (which includes any "external" I/O like files or whatever), and you could call the methods of a local `Cell` with any `IO<'a>` where `'a` encompasses its lifetime. But I haven't really thought through the details here.)

Oh it seems I forgot to mention the other closure types at all. `FnMut` and `FnOnce` closures (with unique but temporary access to their environment, and ownership of their environment, respectively) would be able to copy/clone (and in the case of `FnOnce`, move) the `IO` capability, and therefore call functions which do I/O.
