Via Michael Arntzenius (@arntzenius) on Twitter, I saw that Hillel Wayne posted the following remark:
Lots of people say "FP is easier to analyze than imperative code because of purity" but whenever I ask for evidence people look at me like I'm crazy. So I'd like to make a challenge: I'll provide three imperative functions, and your job is to convert them into pure functions.
If you follow the link, you'll see his three programs. Each of them is basically a small imperative program, of the kind you might see a Python programmer write. I'm not going to post any correctness proofs of them, since my interest is rather to comment upon the difficulty of verification. Since I am too bad at concision to fit thoughts into 280 characters, I'm doing this via a blog post. :)
The high-order bit is that the three programs he suggested are equally easy/hard to verify in imperative and functional styles. This is because none of the programs make any significant use of procedural abstraction.
The difficulty of imperative programming arises from the combination of state, aliasing, and procedure calls. Any two of these features can be handled without that much difficulty, but the combination of all three effectively makes reasoning (nearly) as difficult as reasoning about correct concurrent programs.
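To make the aliasing part concrete, here is a minimal Haskell sketch (`incrBoth` is a made-up name for illustration). A natural Hoare-style reading of its specification -- "each of the two cells grows by one" -- quietly assumes the two references are distinct:

```haskell
import Data.IORef

-- A natural spec: "the contents of r grow by 1, and the contents
-- of s grow by 1." Read naively, it says nothing about aliasing.
incrBoth :: IORef Int -> IORef Int -> IO ()
incrBoth r s = do
  modifyIORef r (+ 1)
  modifyIORef s (+ 1)

main :: IO ()
main = do
  r <- newIORef 0
  incrBoth r r           -- call with both arguments aliased
  readIORef r >>= print  -- prints 2: the single cell grew by 2, not 1
```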
At a high level, reasoning is easy (more honestly, feasible) when each construct in your language has a compositional specification -- when you can reason about the behaviour of a subprogram just in terms of its specification, rather than looking at its full source code each time.
Hoare logic gives you a compositional reasoning technique for reasoning about imperative updates. When data can be aliased, a specification must also say something about the aliasing expected by your program. The current best technique for these specifications is a generalization of Hoare logic called separation logic. (I should say something about how it works, but this post is already too long!)
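To give just the flavour, here is the standard textbook swap example (nothing specific to this post). The separating conjunction $\ast$ asserts that its two conjuncts hold of disjoint parts of the heap, so the precondition itself rules out $x$ and $y$ being aliases:

$$\{x \mapsto a \ast y \mapsto b\}\ \mathtt{swap}(x, y)\ \{x \mapsto b \ast y \mapsto a\}$$

The frame rule then lets a caller carry any assertion about disjoint state across the call unchanged: from $\{P\}\ C\ \{Q\}$, infer $\{P \ast R\}\ C\ \{Q \ast R\}$.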
The addition of full procedures makes the problem much more challenging. This is because the specification of a procedure call must state its effect without saying anything about the specific program points from which it will be called. Otherwise you might as well have just inlined the function and proved the correctness of a fully procedure-free program. Indeed, Leslie Lamport advocates doing just this, in his paper Composition: A Way to Make Proofs Harder. This is actually a reasonable position if you are interested in verifying algorithms (which exist in isolation) rather than verifying programs (which always exist as parts of larger programs).
Intuitively, the difficulty is a lot like reasoning about cooperative concurrency -- when you call an arbitrary procedure, it's a bit like yielding to the scheduler and having it execute some other piece of code for a while before returning control to you. For this logical/conceptual concurrency to actually accomplish anything, the unknown code hidden in the procedure call has to do something to your data, but at the same time can't break any of your invariants. So you need to reason about data invariants, object ownership, and temporal protocols, all at the same time.
Matters get yet more complicated if you want to specify programs which pass around genuinely unknown functions (for instance, storing pointers to functions in the heap, a device which many programmers call "objects"). In this case, you can't inline the functions, because you don't know what they will be!
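For concreteness, here is a tiny Haskell sketch of this "functions in the heap" style (the `Counter` type and its field names are invented for illustration): an "object" is just a record of functions closing over a hidden mutable reference, and a caller of `tick` has no way to inline the code it triggers.

```haskell
import Data.IORef

-- An "object": a record of functions closing over hidden state.
data Counter = Counter
  { tick  :: IO ()
  , total :: IO Int
  }

newCounter :: IO Counter
newCounter = do
  r <- newIORef 0
  pure Counter
    { tick  = modifyIORef r (+ 1)
    , total = readIORef r
    }

main :: IO ()
main = do
  c <- newCounter
  tick c
  tick c
  total c >>= print  -- prints 2
```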
But simply rejecting OO (or higher-order state) doesn't work, because plenty of important programs rely upon it. For example, if you want to prove that your scheduler implementation implements a fair strategy, you have to come up with a protocol between the scheduler and the code that compiler writers generate. Similar difficulties arise if you want to prove that if your compiler generates a bunch of provably correct .o files, the whole program actually works when they get linked together. (For example, when will a C program work, if you link it to a new malloc/free implementation?)
Removing any one of the three difficulties makes things easier, but still leads to interesting systems:
- a) State + procedure calls ==> Idealized Algol. Idealized Algol is a language with higher-order functions and mutable variables, but lacking pointers. John Reynolds invented a beautiful generalization of Hoare logic (called specification logic) to reason about it in the early 80s. It's not well-known any more, but I really like it.
- b) State + aliasing ==> basically, pointer programs without procedures. John Reynolds also invented the state-of-the-art technique for reasoning about these programs (called separation logic) in the early 2000s.
- c) Aliasing + procedures ==> purely functional programming. Basically, we don't have to care who else can see our data if they can never change it. In what you might see as a running theme, John Reynolds also devised many of the techniques we use for reasoning about purely functional programs (such as parametricity) in the late 1970s and early 1980s.
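As a tiny illustration of point c), both consumers below may share the very same list in memory, but since nothing can mutate it, each call can be understood from its type and definition alone:

```haskell
-- Both 'sum' and 'maximum' traverse the same shared 'xs'.
-- Nothing can change it, so the aliasing is harmless.
report :: [Int] -> (Int, Int)
report xs = (sum xs, maximum xs)

main :: IO ()
main = print (report [3, 1, 4, 1, 5])  -- (14, 5)
```

Compare the imperative analogue, where one callee might, say, sort the list in place and silently change what the other sees.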
Some related observations:
- This is, incidentally, why reasoning about the performance of Haskell code is much harder than reasoning about its functional correctness. The value of a Haskell program never depends on any other parts of the program, which gives you a simple compositional model for functional correctness. But whether forcing a thunk takes time or not depends on whether other parts of your program have forced it already, so reasoning about the performance of Haskell code requires complex reasoning about aliasing (a small sketch of this follows the list). This is one reason why newer purely functional languages, such as Idris, PureScript, and Koka, are all call-by-value rather than call-by-need -- it makes their performance model compositional.
- Note that for building a real program, procedural decomposition is not optional: you just can't do without being able to break your program into pieces. Since this feature is basically non-negotiable, aliasable state becomes very dangerous. This is why people say that "imperative programming is harder than functional programming" -- the unstated assumption is that you have functions, and that state is aliasable, pointer-like state. However, by making the treatment of state a little less lax, you can retain the ease of reasoning while still permitting the controlled use of state.
- The safe fragment of Rust is a mix of a) and c) -- it permits you to create aliases of pointers to data only when you don't use them to mutate. (Even though I doubt this was a direct inspiration, John Reynolds also pioneered this approach with his 1978 paper Syntactic Control of Interference. (ACM link, sorry.) Peter O'Hearn wrote a paper Syntactic Control of Interference, Revisited with a modernized approach to the semantics and typing rules.)
If you add unsafe, then you need very fancy modern variants of separation logic to reason about the result. See RustBelt: Securing the Foundations of the Rust Programming Language, by Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, and Derek Dreyer. (Don't feel obliged to look at this unless you are very interested -- the work is excellent but unavoidably very technical.)
Indeed, semantically Haskell is very similar to Rust+unsafe in this regard -- the runST operator lets you create computations that use highly aliasable state, as long as it doesn't escape the scope of the computation (a small runST sketch follows the list). And the same fancy separation logic that works for proving the safety of Rust is needed to show the safety of Haskell! See A Logical Relation for Monadic Encapsulation of State: Proving contextual equivalences in the presence of runST by Amin Timany, Leo Stefanesco, Morten Krogh-Jespersen, and Lars Birkedal. (Both of these papers appeared at the same conference!)
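Here is the sketch promised in the Haskell-performance bullet above: the price of evaluating big is paid by whichever consumer forces the shared thunk first, so the cost of any single use cannot be stated without knowing what the rest of the program has already done.

```haskell
main :: IO ()
main = do
  let big = sum [1 .. 10000000] :: Int  -- an unevaluated, shared thunk
  print big        -- this force pays the whole cost of the sum
  print (big + 1)  -- already forced: this access is a cheap memo lookup
```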
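And here is the promised runST sketch: sumST (my name for it; runST, newSTRef, and friends are standard library functions) uses mutable state internally, yet its type is pure, because the rank-2 type of runST prevents any STRef from escaping the computation.

```haskell
import Control.Monad.ST
import Data.STRef

-- Internally imperative, externally pure: no reference can leak out.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 100])  -- 5050
```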
Hi Neel. I wrote a response here: https://www.reddit.com/r/tlaplus/comments/8f4j6j/are_functional_programs_easier_to_verify_than/dy0gp64/
Hi Neel! Is the "specification logic" paper from the early 80s called "The essence of ALGOL"? Thanks.
Hi Anton, "The Essence of Algol" describes the programming language, and there's a paper "Idealized Algol and its Specification Logic" describing the program logic. I've added a link to this paper in the blog post (it's a Postscript file, which you'll likely need to convert to PDF).
Thanks a lot for all the links! I really enjoyed this post.
ReplyDelete"Aliasing + procedures ==> purely functional programming"
ReplyDeleteIsn't absence of aliasing one of the advantages of (pure) functional programming. Can you give an example or refer to an elaboration on this remark?
It may be of interest to know about a "pointer programs without procedures" approach to writing software out in the wild: http://number-none.com/blow/john_carmack_on_inlined_code.html (I sure found it fascinating!)