Joshua and I thought our new typechecking algorithm for higher-rank polymorphism was easy to implement, but I have to admit that we weren't 100% sure about this, since we've been immersed in this work for months.
However, Olle Fredriksson wrote to us within one day of the draft going online with a link to his Haskell implementation. This was immensely reassuring, since (a) he implemented it so quickly, and (b) he didn't need to ask us how to implement it -- which means we didn't leave anything important out of the description!
In fact, Olle is technically the first person to implement the algorithm exactly as it appears in the paper, since Joshua and I have each only implemented variants of it for different languages. I'll try to write a small ML implementation to go along with Olle's Haskell code in the next few weeks. If you want to beat me to it, though, I have no objections!
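In the meantime, here is a rough sketch (mine, not Olle's, with purely illustrative names) of the kind of declarations such an implementation might start from, assuming the paper's grammar of types A ::= 1 | α | α̂ | ∀α. A | A → B and of ordered contexts whose entries are universal variables, term typings, unsolved and solved existential variables, and scope markers:

    -- Hypothetical starting point, not Olle's code: the types and
    -- algorithmic contexts of the paper as plain Haskell datatypes.
    type Var = String            -- illustrative choice for variable names

    data Type
      = TUnit                    -- 1
      | TVar Var                 -- α       (universal type variable)
      | TExists Var              -- α̂       (existential type variable)
      | TForall Var Type         -- ∀α. A
      | TArrow Type Type         -- A → B
      deriving (Eq, Show)

    data ContextEntry
      = CVar Var                 -- α
      | CAnn Var Type            -- x : A
      | CExists Var              -- α̂       (unsolved existential)
      | CSolved Var Type         -- α̂ = τ   (solved existential)
      | CMarker Var              -- ▶α̂      (scope marker)
      deriving (Eq, Show)

    -- An algorithmic context is an ordered list of entries.
    type Context = [ContextEntry]

The core of the algorithm is then a handful of mutually recursive functions over these datatypes (subtyping, instantiation, checking, synthesis), each taking a context as input and returning an updated one.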
A possibly stupid question: in figure 6, why don't UvarWF and EvarWF have the same form? Am I missing something subtle, missing something blatantly obvious, or is this just two different ways of saying the same thing?
It's just two ways of saying the same thing. I'm not sure why we wrote it two different ways, and we'll change it to be consistent in the final version (if it gets in, or in the resubmit elsewhere if it doesn't).
As I recall, I thought "α̂ ∈ Γ" would be unclear: does it mean only that Γ has the form …, α̂, …, or does it also cover Γ having the form …, α̂ = τ, …? (This is why I insisted on "α̂ ∈ unsolved(Γ)" instead of "α̂ ∈ Γ" elsewhere.) With UvarWF there's no such ambiguity. But yeah, they should probably be consistent.
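To make the distinction concrete, here is a small illustration in terms of the hypothetical ContextEntry type sketched in the post above (illustrative names, not the paper's notation): unsolved(Γ) picks out only the entries of the form α̂, whereas a naive reading of "α̂ ∈ Γ" might also accept solved entries α̂ = τ.

    -- unsolved(Γ): only the existential variables with no solution yet.
    unsolved :: Context -> [Var]
    unsolved gamma = [a | CExists a <- gamma]

    -- The ambiguous reading of "α̂ ∈ Γ": membership that also accepts
    -- solved entries α̂ = τ.  UvarWF has no such ambiguity, since a
    -- universal variable α only ever occurs as a bare CVar entry.
    containsEvar :: Var -> Context -> Bool
    containsEvar a = any match
      where
        match (CExists b)   = a == b
        match (CSolved b _) = a == b
        match _             = False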
A different pair of stupid questions: I'm not used to seeing such proofs run to 71 pages. Does that hint at the complexity of the proofs, or at the level of detail you wanted to work at?
Out of curiosity, are proofs in such detail still faster to write than mechanized proofs?
It's mostly a function of two things. First, our initial version was incorrect, and so we did everything in great detail just to make sure that we weren't fooling ourselves. Second, we gave a complete formalization of the algorithm, together with a completeness proof for it, which turns out to require an unusually high level of detail.
It is probably faster to write them this way than to mechanize them, since we are still somewhat informal about alpha-equivalence issues. However, I expect the "expansion factor" from paper to Coq to be smaller than average in this case. :)
Regarding formalized proofs versus the length of proofs "on paper", I would suggest that you have a look at http://dedukti.gforge.inria.fr/
Though not a proof assistant proper, it likely gives much more readable proofs than Coq at little or no extra cost, especially if you have already done most of the work "on paper".