Wednesday, July 9, 2014

Two Drafts on Dependent Types

I have two new draft papers to publicize. The first is a paper with Pierre Pradic and Nick Benton:
  • Integrating Linear and Dependent Types, Neelakantan R. Krishnaswami, Pierre Pradic, Nick Benton. The technical report with proofs is also available.

    In this paper, we show how to integrate linear types with type dependency, by extending the linear/non-linear calculus of Benton to support type dependency.

    Next, we give an application of this calculus by giving a proof-theoretic account of imperative programming, which requires extending the calculus with computationally irrelevant quantification, proof irrelevance, and a monad of computations. We show the soundness of our theory by giving a realizability model in the style of Nuprl, which permits us to validate not only the β-laws for each type, but also the η-laws.

    These extensions permit us to decompose Hoare triples into a collection of simpler type-theoretic connectives, yielding a rich equational theory for dependently-typed higher-order imperative programs. Furthermore, both the type theory and its model are relatively simple, even when all of the extensions are considered.

Sometimes it seems like every problem in programming languages research can be solved by either linear types or dependent types. So why not combine them and see what happens?

  • Mtac: A Monad for Typed Tactic Programming in Coq, Beta Ziliani, Derek Dreyer, Neelakantan R. Krishnaswami, Aleksandar Nanevski, Viktor Vafeiadis.

    A website with Coq source and tutorial is available.

    Effective support for custom proof automation is essential for large-scale interactive proof development. However, existing languages for automation via tactics either (a) provide no way to specify the behavior of tactics within the base logic of the accompanying theorem prover, or (b) rely on advanced type-theoretic machinery that is not easily integrated into established theorem provers.

    We present Mtac, a lightweight but powerful extension to Coq that supports dependently-typed tactic programming. Mtac tactics have access to all the features of ordinary Coq programming, as well as a new set of typed tactical primitives. We avoid the need to touch the trusted kernel typechecker of Coq by encapsulating uses of these new tactical primitives in a monad, and instrumenting Coq so that it executes monadic tactics during type inference.

Since I'm not the main author of this paper, I feel free to say that it's really good! Mtac strikes an amazing balance of simplicity, cleanliness, and power. It's the first tactic language that I actually want to implement (rather than grudgingly accepting the necessity of implementing one).

Tuesday, July 1, 2014

PhD opportunities at the University of Birmingham

My university, the University of Birmingham, is looking for applicants to the CS PhD program. I'm putting our advertisement on my blog, in case you (or your students, if you're a professor) are looking for a graduate program -- well, we're looking for students! We have an imminent funding deadline -- please contact us immediately if you are interested!


We invite applications for PhD study at the University of Birmingham.

We are a group of (mostly) theoretical computer scientists who explore fundamental concepts in computation and programming language semantics. This often involves profound and surprising connections between different areas of computer science and mathematics. From category theory to lambda-calculus and computational effects, from topology to constructive mathematics, from game semantics to program compilation, this is a diverse field of research that continues to provide new insight and underlying structure.

  • See our webpage, with links to individual researchers, here:

    http://www.cs.bham.ac.uk/research/groupings/theory/

  • Information about PhD applications may be found here:

    http://www.cs.bham.ac.uk/admissions/postgraduate-research/

  • If you are considering applying, please contact any of us. We will be very happy to discuss the opportunities available.
    • Martin Escardo (topology, computation with infinite objects, constructive mathematics, intuitionistic type theory)
    • Dan Ghica (game semantics, heterogeneous computing, model checking)
    • Achim Jung (mathematical structures in the foundations of computing: logic, topology, order)
    • Neel Krishnaswami (type theory, verification, substructural logic, interactive computation)
    • Paul Levy (denotational semantics, lambda-calculus with effects, nondeterminism, category theory, game semantics)
    • Uday Reddy (semantics of state, separation logic)
    • Eike Ritter (security protocol verification)
    • Hayo Thielecke (abstract machines, concurrent and functional programming, software security)
    • Steve Vickers (constructive mathematics and topology, category theory and toposes)

Thursday, June 19, 2014

Gödel's Ontological Argument

In his paper Jokes and their Relation to the Cognitive Unconscious, Marvin Minsky argued that a sense of humor would be an essential component of a rational agent. His argument went a bit like this: an AI would need to use logic to deduce consequences from its data about the world. However, any sufficiently large database of facts would inevitably contain inconsistencies, and a blind application of ex falso quodlibet would lead to disaster.

The bus timetable, which you believe to be correct, says the bus arrives at 3:35, but it has actually arrived at 3:37. This is a contradiction, and so by ex falso, you think it's a good idea to give me all your money!

Clearly, the correct response to such an argument is to laugh at it and move on. Whence the sense of humor, according to Minsky.

When I first read Anselm's ontological argument for the existence of God, I had the correct Minskyan reaction — I laughed at it and moved on. However, one of the curiosities of mathematical logic is that Kurt Gödel did not laugh at it. He found all of the gaps in Anselm's reasoning, and then — and here we see what being one of the greatest logicians of all time gets you — he proceeded to repair all the holes in the proof.

That is: Gödel has given a proof of the existence of God. So let's look at the proof!

We begin by axiomatizing a predicate

\[ \Good : (i \to \prop) \to \prop \]

To do this, we first give the following auxiliary definitions.

  • $\Godlike : i \to \prop$
    $\Godlike(x) = \forall \phi:i \to \prop.\; \Good(\phi) \Rightarrow \phi(x)$

  • $\EssenceOf : i \to (i \to \prop) \to \prop$
    $\EssenceOf(x, \phi) = \phi(x) \wedge \forall \psi:i \to \prop.\; \psi(x) \Rightarrow \Box(\forall x.\; \phi(x) \Rightarrow \psi(x))$

  • $\Essential : i \to \prop$
    $\Essential(x) = \forall \phi:i \to \prop.\; \EssenceOf(x,\phi) \Rightarrow \Box(\exists y.\phi(y))$

So something is godlike if it has all good properties. A property $\phi$ is the essence of something $x$ if $x$ has property $\phi$, and furthermore every property $x$ has is necessarily implied by $\phi$. Something $x$ is essential if, for any essence $\phi$ of $x$, something with property $\phi$ necessarily exists. (This sounds like medieval theology already, doesn't it?)

Now, we can give the axioms describing the $\Good$ predicate.

  1. $\forall \phi:i \to \prop.\; \Good(\lambda x.\lnot \phi(x)) \iff \lnot \Good(\phi)$
  2. $\forall \phi, \psi:i \to \prop.\; (\forall x:i.\; (\phi(x) \Rightarrow \psi(x))) \Rightarrow \Good(\phi) \Rightarrow \Good(\psi)$
  3. $\Good(\Godlike)$
  4. $\forall \phi:i \to \prop.\; \Good(\phi) \Rightarrow \Box(\Good(\phi))$
  5. $\Good(\Essential)$

The first axiom says that a property's negation is good exactly when the property itself is not: every property is either good or bad, but not both. The second says that if a property is a logical consequence of a good property, it is also a good property. The third says that being godlike is good. The fourth says that every good property is necessarily good. Finally, the fifth says that being essential (as opposed to accidental) is good. I feel like I could argue with any of these axioms (except maybe 2), but honestly I'm more interested in the proof itself.

Below, I give Goedel's result, rearranged a bit and put into a natural deduction style. Loosely speaking, I'm using the natural deduction format from Davies and Pfenning's A Judgmental Reconstruction of Modal Logic. In my proof, I use the phrase “We have necessarily $P$” to mean that $P$ goes into the valid context, and I use “Necessarily:” with an indented block following it to indicate that I'm proving a box'd proposition. The rule is that within the scope of a “Necessarily:”, I can only use “necessarily $P$” results from outside of it.

This style is occasionally inconvenient when proving lemmas, so I'll also use the “necessitation rule” from traditional modal logic, which says that if you have a closed proof of $P$, you can conclude $\Box(P)$. (This rule is implied by the Davies and Pfenning rules, but it's handy in informal proofs.)
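
Concretely, in Davies and Pfenning's two-context judgement $\Delta; \Gamma \vdash P$ (where $\Delta$ holds the valid, i.e. “necessarily”, hypotheses and $\Gamma$ the ordinary ones), the two rules I am leaning on look roughly like this:

\[
\frac{\Delta;\; \cdot \vdash P}{\Delta;\; \Gamma \vdash \Box P}\;\Box\mathrm{I}
\qquad\qquad
\frac{\Delta;\; \Gamma \vdash \Box P \qquad \Delta, P;\; \Gamma \vdash C}{\Delta;\; \Gamma \vdash C}\;\Box\mathrm{E}
\]

A “Necessarily:” block is a use of $\Box\mathrm{I}$ (only the valid hypotheses in $\Delta$ survive into it), and “we have necessarily $P$” records a hypothesis in $\Delta$ via $\Box\mathrm{E}$. Necessitation is just $\Box\mathrm{I}$ in the special case where both contexts are empty, followed by weakening.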

Now, their proof system is basically for S4, and the ontological argument uses S5. So I'll also make use (as an axiom) of the fact that $\Diamond(\Box(P))$ implies $\Box(P)$ — that is, that possibly necessary things are actually necessary. This saves having to mention world variables in the proof.
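
For the record, here is one standard axiomatic derivation of that fact, using the B principle $P \Rightarrow \Box\Diamond P$ (itself derivable from T and axiom 5), together with necessitation and K:

  1. Instantiating B at $\lnot P$ gives $\lnot P \Rightarrow \Box\Diamond\lnot P$, i.e. $\lnot P \Rightarrow \lnot\Diamond\Box P$; contraposing, $\Diamond\Box P \Rightarrow P$.
  2. By necessitation, $\Box(\Diamond\Box P \Rightarrow P)$, and so by K, $\Box\Diamond\Box P \Rightarrow \Box P$.
  3. Axiom 5 at $\Box P$ gives $\Diamond\Box P \Rightarrow \Box\Diamond\Box P$.
  4. Chaining 3 with 2, $\Diamond\Box P \Rightarrow \Box P$.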

Overall, the proof system needed for the proof is a fully impredicative second-order classical S5 — a constructivist may find this a harder lift than the existence of God! (Can God create a consistent axiomatic system so powerful She cannot believe it?) Jokes aside, it's an interesting proof nonetheless.

We begin by showing that any Godlike entity only has necessarily good properties.

Lemma 1. (God is good): $\forall x:i.\; \Godlike(x) \Rightarrow \forall \phi:i \to \prop.\; \phi(x) \Rightarrow \Box(\Good(\phi))$

Proof.

  • Assume $x$ and $\Godlike(x)$, $\phi$ and $\phi(x)$.
  • For a contradiction, suppose $\lnot \Good(\phi)$.
    • Then by axiom 1, $\Good(\lambda x.\lnot \phi(x))$.
    • Unfolding $\Godlike(x)$ and instantiating with $\lambda x.\;\lnot \phi(x)$, we get $\Good(\lambda x.\lnot \phi(x)) \Rightarrow \lnot \phi(x)$.
    • We know $\Good(\lambda x.\lnot \phi(x))$.
    • Hence $\lnot \phi(x)$
    • This contradicts $\phi(x)$.
  • Therefore $\Good(\phi)$.
  • By Axiom 4, $\Box(\Good(\phi))$.

Then, we show that all of a Godlike entity's properties are entailed by being Godlike. I was initially tempted to dub this the “I am that I am” lemma, but decided that “God has no hair” was funnier. The name comes from the theorem in physics that “black holes have no hair” — they are completely characterized by their mass, charge and angular momentum. Similarly, here being Godlike completely characterizes Godlike entities.

If you accept Leibniz's principle, this implies monotheism, as well. Hindus and Buddhists will disagree, because they often deny that things are characterized by their properties. (For differing reasons, Jean-Paul Sartre might say the same, too, as would Per Martin-Löf!)

Lemma 2. (God has no hair): $\forall x:i.\; \Godlike(x) \Rightarrow \EssenceOf(x, \Godlike)$

Proof.

  • Assume $x$ and $\Godlike(x)$.
  • By definition, $\EssenceOf(x, \Godlike) = \Godlike(x) \wedge \forall \psi:i \to \prop.\; \psi(x) \Rightarrow \Box(\forall x.\; \Godlike(x) \Rightarrow \psi(x))$.
    1. $\Godlike(x)$ holds by assumption.
    2. Assume $\psi:i \to \prop$ and $\psi(x)$.
      • By Lemma (God is good), $\Box(\Good(\psi))$.
      • So necessarily $\Good(\psi)$.
      • Necessarily:
        • Assume $x$ and $\Godlike(x)$.
          • By definition of $\Godlike$, $\Good(\psi) \Rightarrow \psi(x)$.
          • But necessarily $\Good(\psi)$.
          • Hence $\psi(x)$
        • Therefore $\forall x.\; \Godlike(x) \Rightarrow \psi(x)$
      • Therefore $\Box(\forall x.\; \Godlike(x) \Rightarrow \psi(x))$
  • Therefore $\forall \psi:i \to \prop.\; \psi(x) \Rightarrow \Box(\forall x.\; \Godlike(x) \Rightarrow \psi(x))$
  • Therefore $\EssenceOf(x, \Godlike)$
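
As an aside, this lemma is what makes the monotheism remark above go through, at least if we allow equality predicates such as $\lambda z.\; z = x$ in the range of the second-order quantifiers (a quick sketch, not part of Gödel's argument):

  • Suppose $\Godlike(x)$ and $\Godlike(y)$.
  • By this lemma, $\EssenceOf(x, \Godlike)$. Instantiating its second conjunct with $\psi = \lambda z.\; z = x$, together with the trivial fact $x = x$, gives $\Box(\forall z.\; \Godlike(z) \Rightarrow z = x)$.
  • By the T axiom ($\Box P \Rightarrow P$), instantiating at $y$, we get $y = x$.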

Next, we show that if a Godlike object could exist, then a Godlike entity necessarily exists.

Lemma 3. (Necessary Existence): $(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))$

Proof.

  • Assume there is an $x$ such that $\Godlike(x)$.
  • By axiom 5, $\Good(\Essential)$.
  • By the definition of $\Godlike$, we have $\Essential(x)$.
  • Unfolding $\Essential$, we get $\forall \phi:i \to \prop. \EssenceOf(x,\phi) \Rightarrow \Box(\exists y.\phi(y))$
  • By Lemma (God has no hair), We know $\EssenceOf(x, \Godlike)$.
  • Hence $\Box(\exists y:i.\; \Godlike(y))$

We can now show that the above implication is itself necessary. We could have stuck all of the above theorems inside a “Necessarily:” block, but that would have been an annoying amount of indentation. So I used the necessitation principle, which is admissible in Davies/Pfenning.

Lemma 4. (Necessary Necessary Existence): $\Box[(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))]$

Proof. By inlining the proofs of all the earlier lemmas, we can give a closed proof of the necessary existence lemma (i.e., using nothing but the axioms and definitions).

So by necessitation, we get $\Box[(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))]$.

Now, we'll show that it's possible that God exists. I would have called it the “no atheists in foxholes” lemma, except that the US military atheists’ association maintains a list of atheists in foxholes.

Lemma 5. (God is possible): $\Diamond (\exists x.\Godlike(x))$

Proof.

  • For a contradiction, suppose $\Box(\forall y:i.\lnot \Godlike(y))$.
    • Assume $y$ and $\Godlike(y)$.
      • Instantiating the contradiction hypothesis, $\lnot \Godlike(y)$.
    • Therefore $\forall y:i.\; \Godlike(y) \Rightarrow \lnot \Godlike(y)$.
    • By Axiom 2 (together with $\Good(\Godlike)$ from Axiom 3), $\Good(\lambda y.\lnot \Godlike(y))$.
    • By Axiom 1, $\lnot \Good(\Godlike)$.
    • By Axiom 3, $\Good(\Godlike)$.
    • This is a contradiction.
  • Therefore $\lnot \Box(\forall y.\lnot \Godlike(y))$.
  • By quantifier twiddling, $\lnot \Box(\lnot \exists y.\Godlike(y))$
  • By the classical definition of possibility, $\Diamond (\exists y.\Godlike(y))$.
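
In detail, those last two steps are just the classical equivalences

\[ \lnot \Box(\forall y.\; \lnot \Godlike(y)) \iff \lnot \Box(\lnot \exists y.\; \Godlike(y)) \iff \Diamond(\exists y.\; \Godlike(y)) \]

using $\forall y.\; \lnot \phi(y) \iff \lnot \exists y.\; \phi(y)$ under the box, and the classical definition $\Diamond P = \lnot \Box \lnot P$.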

Now we can finish off Gödel's argument!

Theorem 1. (God necessarily exists): $\Box(\exists y:i.\; \Godlike(y))$

Proof.

  • By Lemma (Necessary Necessary Existence), $\Box[(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))]$
  • Necessarily $(\exists x:i.\; \Godlike(x)) \Rightarrow \Box(\exists y:i.\; \Godlike(y))$.
  • By Lemma (God is Possible), $\Diamond ((\exists x:i.\; \Godlike(x)))$.
  • Stipulating $\exists x:i.\; \Godlike(x)$:
    • By using the implication, $\Box(\exists y:i.\; \Godlike(y))$.
  • Therefore $\Diamond \Box(\exists y:i.\; \Godlike(y))$.
  • In S5, $\Diamond \Box P$ implies $\Box P$ for any $P$.
  • Therefore $\Box(\exists y:i.\; \Godlike(y))$

When I set out to understand this proof, I wanted to see where the proof essentially relied on classical reasoning — I initially thought it would be funny to constructivize his argument and announce the existence of a computer program which was a realizer for God. It turns out that this is impossible, since this is a deeply classical argument. However, there might still be some computational content in this proof, since Jean-Louis Krivine has looked at realizability interpretations of very powerful classical systems such as ZF set theory. AFAICT, the tricky part will be finding a specific predicate satisfying the axioms of $\Good$.

However, as far as understanding the proof goes, the use of excluded middle actually turns out not to bother me overmuch. After all, proving that it is not the case that God doesn't exist sort of makes a good case for apophatic theology, which is pretty congenial to me as a cultural (albeit firmly atheist) Hindu!

Instead, the part I find most difficult to swallow is the proof of the possible existence of God, because it runs afoul of relevance. Specifically, the proof that $\Godlike(y) \Rightarrow \lnot \Godlike(y)$ does not use its hypothesis. As a result, I find myself quite dubious that being $\Godlike$ is a $\Good$ property (Axiom 3).

Wednesday, June 11, 2014

From DPLL(T) to Sequent Calculus

While I was at MPI-SWS, I overlapped there with Ruzica Piskac, who has done a lot of work on decision procedures and other kinds of model-theoretic approaches to theorem proving.

She once joked to me that she found this style of logic more comfortable than proof-theoretic approaches to logic, because type theory struck her as a bit like religion: it wasn't just enough to solve a problem, you had to solve it in an orthodox way. I told her that I agreed, and this was actually the reason I was more comfortable with type theory!

The verification literature is full of an enormous number of tricks, and I've always been a bit reluctant to look deeply into it because I've had the impression that an attack that works for one problem can fail utterly on a problem which is only slightly different. In contrast, with proof theory once a problem is solved, you can be confident that your techniques are robust to small changes. The ritual requirements of proof theory — that each rule mentions only one connective, that introduction and elimination forms satisfy logical harmony, and that cut-elimination holds — work to ensure that the resulting systems are very modular and well-structured.

More recently, though, I've had the chance to talk with Vijay d'Silva, and read some of the papers he and his collaborators have been writing about the foundations of static analysis (for example, d'Silva, Haller, and Kroening's recent Abstract Satisfaction). Their papers revealed to me some of the architecture of the area (otherwise hidden from me behind intricate combinatorial arguments) by explaining it in terms of categorical logic.

Since then, I've had the thought that it would be worthwhile to see if the techniques of SAT solving and verification could be cast into proof-theoretic terms, partly just for the intellectual interest, but also because I think that could help us understand how to integrate decision procedures into type inference systems in a systematic and principled way. These days, it's relatively well-understood how to bolt an SMT solver onto the side of a typechecker, thanks to the work on Dependent ML and its successors, but the seams start to show particularly when you need to generate error messages.

All this is to preface the fact that I have just begun reading Mahfuza Farooque's PhD thesis, Automating Reasoning Techniques as Proof Search in Sequent Calculus. This thesis reformulates DPLL(T), the fundamental algorithm behind SMT solvers, in terms of classical sequent calculus. In particular, she shows how proof search in a variant of LK corresponds to (i.e., is in a bisimulation with) the abstract DPLL(T) algorithm.
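
As a reminder of what is being reconstructed: at its core, DPLL is nothing more than unit propagation plus case splitting on literals. Here is a minimal propositional sketch in Haskell (my own toy code, with none of the theory reasoning, backjumping, or clause learning that an actual DPLL(T) engine has):

    -- A minimal propositional DPLL solver: unit propagation plus splitting.
    -- (A toy sketch for orientation only; real DPLL(T) layers a theory solver,
    -- backjumping, and clause learning on top of this skeleton.)
    import Data.Maybe (mapMaybe)

    type Lit    = Int            -- variable n appears as literal n or -n
    type Clause = [Lit]
    type CNF    = [Clause]

    -- Assert a literal: drop the clauses it satisfies, shrink the rest.
    assign :: Lit -> CNF -> CNF
    assign l = mapMaybe simplify
      where
        simplify c
          | l `elem` c = Nothing                       -- clause satisfied
          | otherwise  = Just (filter (/= negate l) c) -- remove the negation

    -- Returns a (partial) satisfying assignment, or Nothing if unsatisfiable.
    dpll :: CNF -> Maybe [Lit]
    dpll cnf
      | [] `elem` cnf = Nothing                        -- empty clause: conflict
      | null cnf      = Just []                        -- no clauses: satisfied
      | otherwise     =
          case [l | [l] <- cnf] of
            (l:_) -> (l :) <$> dpll (assign l cnf)     -- unit propagation
            []    ->                                   -- split on a literal
              let l = head (head cnf)
              in case (l :) <$> dpll (assign l cnf) of
                   Just m  -> Just m
                   Nothing -> (negate l :) <$> dpll (assign (negate l) cnf)

    -- For example:
    -- >>> dpll [[1, 2], [-1], [-2, 3]]
    -- Just [-1,2,3]

Everything interesting in DPLL(T) (the theory solver, backjumping, learning) is layered on top of this basic loop.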

This is fascinating enough to begin with, but the specific sequent calculus Farooque gives is really astonishing. These days, we have more or less gotten used to the idea that any problem in proof theory becomes more tractable if you look at it through the lens of focusing.

In particular, we also know (e.g., see Chaudhuri's thesis) that by choosing the polarization of atomic formulas appropriately, you can control whether you do goal-directed or forward search for a proof. So in some sense, it's not surprising that Farooque uses a polarized variant of classical linear logic as her sequent calculus.

However: Farooque's calculus chooses polarities on the fly! In other words, you don't have to fix a proof search strategy up front. Instead, her system shows how you can decide what kind of proof search strategy you want to use based on the search you've done so far. Seen in this light, it's almost unsurprising that DPLL-based provers work well, because they have the freedom to tailor their proof search strategy to the specific instance they are given.

This is a really remarkable result!

Monday, March 24, 2014

Papers I'm reading

Here is an incomplete selection of some good papers I'm currently reading:
  • Constructing Type Systems over an Operational Semantics, Robert Harper.
  • A Realizability Interpretation of Martin-Löf Type Theory, Catarina Coquand.

    These two papers give a very nice introduction to developing the metatheory of dependent type systems. Harper's paper is about extensional type theory, and Coquand's is about intensional type theory, but IMO they both demonstrate the benefits of using realizability to formulate the semantics of type theory -- they are both impressively simple, given the complexity of the type theories involved.

  • Categorical Glueing and Logical Predicates for Models of Linear Logic, Masahito Hasegawa.

    This is another paper on logical relations, this time for linear logic. This is one of the papers that has made me seriously interested in classical linear logic for the first time. Basically, the "double glueing" construction used to formulate models of CLL looks a bit like the notion of "biorthogonality" or "TT-closure" sometimes used to interpret higher-order programming languages, but with some very intriguing differences.

    Pitts's Parametric Polymorphism and Operational Equivalence is a good introduction to biorthogonality, though the actual origins of the idea are a bit mysterious to me. At any rate, the idea is that if you have a program value, then its "orthogonal" is the set of continuations which won't crash when supplied the value as an argument. Similarly, if you have a continuation, then its orthogonal is the set of values that won't crash the continuation. So if you take a set of values, and then take their orthogonal, and then take their orthogonal again, you have a new, potentially bigger, set of values. A "biorthogonal" is a set of values which remains the same when you take its double orthogonal. This means you can equivalently think of a set of values as the set of continuations which accept it, which is often quite convenient for defining the meaning of a type.

    With double glueing, you again think of a type as a set of values which make it up, and a set of continuations it can supply. But unlike with biorthogonality, there is no procedure for constructing the continuations from the values or vice-versa --- instead, you get to define the values and the continuations as anything you like, as long as everything plays nicely together. This extra degree of freedom is really intriguing, and I'm curious what can be done with it. (A toy rendering of both ideas is sketched below.)
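
Here is a toy Haskell rendering of both ideas, just to fix intuitions. The "language" has only Boolean values, a continuation is a test that either accepts (True) or crashes (False), and sets are predicates; none of this is specific to Pitts's or Hasegawa's actual constructions, it is only meant to show the shape of orthogonality versus glueing.

    -- Values and continuations of a deliberately tiny "language".
    type Value = Bool
    type Cont  = Value -> Bool        -- True = "accepts", False = "crashes"

    valUniverse :: [Value]
    valUniverse = [False, True]

    contUniverse :: [Cont]
    contUniverse =
      [ (\v -> if v then onTrue else onFalse)
      | onTrue <- [False, True], onFalse <- [False, True] ]

    -- Sets represented as predicates.
    type ValSet  = Value -> Bool
    type ContSet = Cont  -> Bool

    -- The orthogonal of a set of values: the tests accepting every value in it.
    orthV :: ValSet -> ContSet
    orthV vs k = and [ k v | v <- valUniverse, vs v ]

    -- The orthogonal of a set of tests: the values accepted by every test in it.
    orthC :: ContSet -> ValSet
    orthC ks v = and [ k v | k <- contUniverse, ks k ]

    -- Biorthogonal closure: a "type" is a set of values fixed by this operator.
    biorth :: ValSet -> ValSet
    biorth = orthC . orthV

    -- >>> filter (biorth (== True)) valUniverse
    -- [True]

    -- In the double-glueing picture, a type instead carries both components,
    -- chosen independently, subject only to a compatibility condition.
    data Glued = Glued { values :: ValSet, tests :: ContSet }

    compatible :: Glued -> Bool
    compatible g =
      and [ k v | v <- valUniverse, values g v, k <- contUniverse, tests g k ]

    -- Any set of values yields a glued object via its orthogonal, but a
    -- compatible Glued object need not arise this way.
    fromValues :: ValSet -> Glued
    fromValues vs = Glued { values = biorth vs, tests = orthV vs }

The contrast is in the last few lines: with biorthogonality the tests are determined by the values (and vice versa), while a Glued object lets you choose the two sides independently, subject only to compatibility; that is exactly the extra degree of freedom mentioned above.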

Friday, March 7, 2014

2014 Midlands Graduate School

The 2014 Midlands Graduate School is coming up. Please attend, or send your students!


University of Nottingham, 22-26 April 2014

The Midlands Graduate School (MGS) in the Foundations of Computing Science is a collaboration between researchers at the Universities of Birmingham, Leicester, Nottingham and Sheffield. It was established in 1999. The MGS has two main goals: to provide PhD students with a sound basis for research in the mathematical and practical foundations of computing and to give PhD students the opportunity to make contact with established researchers in the field and their peers who are at a similar stage in their research careers.

This year, the MGS is at the University of Nottingham. It will start on 22 April and finish on 26 April.

Program

MGS2014 consists of ten courses, each with five hours of lectures plus exercise sessions. One third of the programme offers introductory (or core) courses, which all participants attend. The remainder provides advanced (or specialised) courses, from which each participant selects a subset depending upon his or her interests, together with one course given by an invited lecturer (Conor McBride).

Core Courses

Course Title | Acronym | Lecturer | Affiliation
Category Theory | CAT | Roy Crole | Leicester
Denotational Semantics | DEN | Achim Jung | Birmingham
Typed Lambda Calculus | LAM | Paul Blain Levy | Birmingham

Advanced Courses

Course Title | Acronym | Lecturer | Affiliation
Concurrency, Causality, Reversibility | CCR | Irek Ulidowski | Leicester
Theory of Randomised Search Heuristics | HEU | Dirk Sudholt, Per Kristian Lehre, Pietro S. Oliveto, Christine Zarges | Birmingham, Nottingham, Sheffield
Homotopy Type Theory | HOT | Thorsten Altenkirch | Nottingham
Infinite Data Structures | INF | Venanzio Capretta | Nottingham
Logical relations and parametricity | PAR | Uday Reddy | Birmingham
Higher-Order Functional Reactive Programming | REA | Neelakantan Krishnaswami | Birmingham

Invited Lecturer

In addition to the standard programme, Conor McBride from the University of Strathclyde will give an invited course on Dependently Typed Programming (DTP).

Registration

The deadline for registration for MGS 2014 is Friday, 21 March. The registration fee is £440.

This includes 5 nights of accommodation (from the evening of Monday 21 April to the morning of Saturday 26 April), catering for lunches and coffee breaks, and the conference dinner. Accommodation is at the University's Newark Hall of Residence and includes breakfast.

To register for MGS 2014, you need to book via the University of Nottingham online store. The site will ask you to register if this is the first time you have used it. You can pay using a credit or debit card.

We recommend registering as early as possible, because places will be allocated on a first-come, first-served basis, and it is quite possible that we will run out of space before 21 March.

Contact

For any additional information, contact the local organiser, Thorsten Altenkirch, at the address txa@cs.nott.ac.uk.

Monday, February 3, 2014

Retiring a joke

I just saw that Andy Pitts has a new draft out, An Equivalent Presentation of the Bezem-Coquand-Huber Category of Cubical Sets, with the following abstract:

Staton has shown that there is an equivalence between the category of presheaves on (the opposite of) finite sets and partial bijections and the category of nominal restriction sets: see [2, Exercise 9.7]. The aim here is to see that this extends to an equivalence between the category of cubical sets introduced in [1] and a category of nominal sets equipped with a "01-substitution" operation. It seems to me that presenting the topos in question equivalently as 01-substitution sets rather than cubical sets will make it easier (and more elegant) to carry out the constructions and calculations needed to build the intended univalent model of intensional constructive type theory.

Nominal methods have been an active area of research in PL theory for a while now, and have achieved a rather considerable level of sophistication. I have occasionally joked that it seems like we had to aim an awful lot of technical machinery at the problem of alpha-equivalence -- so what happens when we realize that there are other equivalence relations in mathematics?

But with this note, it looks like the last laugh is on me!

Pitts shows that you can use nominal techniques to give a simpler presentation of the cubical sets model of univalent type theory. Since HoTT gives us a clean constructive account of quotient types and higher inductive types, all the technical machinery invented to handle alpha-equivalence scales smoothly up to handle any equivalence relation you like.