While I was at MPI-SWS, I overlapped there with Ruzica Piskac, who has done a lot of work on decision procedures and other kinds of model-theoretic approaches to theorem proving.

She once joked to me that she found this style of logic more comfortable than proof-theoretic approaches, because type theory struck her as a bit like religion: it wasn't enough just to solve a problem, you had to solve it in an orthodox way. I told her that I agreed, and that this was actually the reason I was more comfortable with type theory!

The verification literature is full of an enormous number of tricks, and I've always been a bit reluctant to look deeply into it, because I've had the impression that an attack that works for one problem can fail utterly on a problem which is only slightly different. In contrast, with proof theory, once a problem is solved, you can be confident that your techniques are robust to small changes. The ritual requirements of proof theory — that each rule mentions only one connective, that introduction and elimination forms satisfy logical harmony, and that cut-elimination holds — work to ensure that the resulting systems are very modular and well-structured.

More recently, though, I've had the chance to talk with Vijay
d'Silva, and read some of the papers he and his collaborators have
been writing about the foundations of static analysis (for example,
d'Silva, Haller, and Kroening's recent *Abstract
Satisfaction*). Their papers revealed to me some of the
architecture of the area (otherwise hidden from me behind intricate
combinatorial arguments), by explaining it in the terms of categorical
logic.

Since then, I've had the thought that it would be worthwhile to see if the techniques of SAT solving and verification could be cast into proof-theoretic terms, partly just for the intellectual interest, but also because I think that could help us understand how to integrate decision procedures into type inference systems in a systematic and principled way. These days, it's relatively well-understood how to bolt an SMT solver onto the side of a typechecker, thanks to the work on Dependent ML and its successors, but the seams start to show particularly when you need to generate error messages.

All this is to preface the fact that I have just begun reading Mahfuza
Farooque's PhD thesis, *Automating Reasoning Techniques as Proof
Search in Sequent Calculus*. This thesis reformulates DPLL(T), the
fundamental algorithm behind SMT solvers, in terms of classical
sequent calculus. In particular, she shows how proof search in a
variant of LK corresponds to (i.e., is in a bisimulation with) the
abstract DPLL(T) algorithm.
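For readers who haven't seen it, the propositional core of DPLL (without the theory part) is short enough to sketch directly. This is just an illustrative sketch of the classic algorithm — unit propagation plus splitting on an unassigned variable — and not Farooque's sequent-calculus formulation; the clause representation and function names are my own choices.

```python
# A minimal DPLL sketch for propositional SAT (no theory solver).
# A formula is a list of clauses; each clause is a set of integer
# literals, where -v denotes the negation of variable v.

def unit_propagate(clauses, assignment):
    """Repeatedly assign the sole surviving literal of any unit clause.
    Returns the extended assignment, or None on a falsified clause."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue  # clause already satisfied
            live = [l for l in clause if -l not in assignment]
            if not live:
                return None  # every literal falsified: conflict
            if len(live) == 1:
                assignment = assignment | {live[0]}  # unit rule
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying assignment (a frozenset of literals) or None."""
    assignment = unit_propagate(clauses, assignment)
    if assignment is None:
        return None  # backtrack
    unassigned = ({abs(l) for c in clauses for l in c}
                  - {abs(l) for l in assignment})
    if not unassigned:
        return assignment  # every variable decided, no clause falsified
    v = min(unassigned)
    # the "split" rule: decide v true, and on failure try v false
    return dpll(clauses, assignment | {v}) or dpll(clauses, assignment | {-v})
```

The point of the proof-theoretic reading is that each of these steps (unit propagation, splitting, backtracking on conflict) corresponds to a move in proof search in the sequent calculus.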

This is fascinating enough to begin with, but the specific sequent calculus Farooque gives is really astonishing. These days, we have more or less gotten used to the idea that any problem in proof theory becomes more tractable if you look at it through the lens of focusing.

In particular, we also know (e.g., see Chaudhuri's thesis) that by choosing the polarization of atomic formulas appropriately, you can control whether you do goal-directed or forward search for a proof. So in some sense, it's not surprising that Farooque uses a polarized variant of classical linear logic as her sequent calculus.

However: Farooque's calculus chooses polarities *on the fly*! In other
words, you don't have to fix a proof search strategy up
front. Instead, her system shows how you can decide what kind of proof
search strategy you want to use based on the search you've done so
far. Seen in this light, it's almost unsurprising that DPLL-based
provers work well, because they have the freedom to tailor their proof
search strategy to the specific instance they are given.

This is a really remarkable result!

d'Silva's CADE 2015 paper ("Abstract Interpretation as Automated Deduction") is also quite interesting. I think it's similar in spirit to Abramsky's duality for logic and programs.