Dr Byron Cook: Geek of the Week

On moving to Cambridge after developing the SLAM model checker used by Microsoft's Static Driver Verifier, Dr Byron Cook's new computer locked up with what turned out to be a faulty driver. The result was TERMINATOR, the first practical tool for automatically proving termination of real-world programs.

For seventy years, few mathematicians sought to tackle the conundrum posed by Alan Turing, the father of modern computer science, who proved that there can be no general procedure to decide whether a self-contained computer program will eventually halt. As recently as three years ago, Scientific American magazine said the problem was unsolvable.

That was before Dr Byron Cook, a principal researcher at Microsoft’s research laboratory in Cambridge, and his colleagues did the unthinkable – they hacked a fix – a hugely significant breakthrough which won him the British Computer Society’s Roger Needham Award.

As a result of Byron’s work, termination bugs in mainstream software are set to become a thing of the past; and from his research has come the first practical tool for automatically proving termination of real-world programs, called TERMINATOR.

Prior to this, Byron was one of the developers of the SLAM software model checker which is used in a Windows product called Static Driver Verifier, which automatically finds bugs in Windows OS device drivers.

Before joining Microsoft, Byron worked at Prover Technology, where he investigated new algorithms for use in SAT solvers and symbolic model checking tools. These tools (Prover SL, for example) are used in a variety of applications, including the verification of microprocessors, aircraft software, and embedded systems.

Byron’s PhD is from the Oregon Health and Science University (OGI) where he developed a programming language for describing microprocessors and invented a technique to decompose proofs of microprocessor correctness.

As well as his research work at Cambridge, Byron is also Professor of Computer Science at Queen Mary, University of London.

Did you wake up in the middle of the night and suddenly decide that the Halting Problem had to be fixed, or was it something that had been burning away at the back of your mind for some time?
It was sudden, just after I moved to Cambridge in 2004. Before moving I had been working on an automatic program verification tool called SLAM. Our primary application area for SLAM was the problem of proving the absence of bugs in Windows device drivers. SLAM eventually became the basis of the Windows product “Static Driver Verifier”. SLAM implemented a set of heuristics that specifically would not work for the question of program termination or liveness – an example of a liveness property is “Whenever Acquire() is called, Release() will eventually be called later”.

I began working on termination for two reasons:

  1. When I moved to Cambridge my new computer kept hanging. Not crashing, but hanging. It was incredibly frustrating. After looking into the problem for more than a week I discovered that the problem was that, in rare instances, a networking device driver’s dispatch routine on my machine was failing to terminate when responding to an event. I found it ironic that I’d been working on tools for device driver correctness and then I’d spent a week diagnosing a problem that I’d previously decided was not important when working on SLAM. I knew then that automating the search for program termination proofs was the problem that I should look into.
  2. When I moved to Europe several German researchers (Andreas Podelski and Andrey Rybalchenko) who were interested in the topic asked me to join them. This is because I had experience with SLAM and had some initial ideas on how to make practical SLAM-based implementations out of some theoretical ideas based on Ramsey’s theorem.

Note that the problem of program termination has a special connection to Cambridge. Cambridge’s Alan Turing not only proved the program termination problem undecidable, but was also the first to formally define the mathematical connection between program termination and the ordinal numbers. Also, “Ramsey’s theorem” is named after Cambridge’s Frank Ramsey.

I read that when you tried to describe your termination research you found it awkward to explain with existing mathematical symbols, so you decided to invent new ones. What was that process like?
Much of my programming involves finding satisfying assignments to propositions expressed in mathematical logic. For example, when attempting to prove termination of a program, my tools are searching for a “finite set of disjunctively well-founded relations whose union is larger than the transitive closure of the program’s transition relation”. If we can find such a set, we’ve proved termination. If no such set exists, the program does not terminate.
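
The kind of search Byron describes can be made concrete on a toy example. The Python sketch below is my own illustration, not Terminator's algorithm or API: for a tiny nondeterministic loop over bounded integers, it enumerates the transitive closure of the transition relation and checks that every pair in it is covered by one of two well-founded relations.

```python
# Illustrative only: a brute-force check of a "disjunctively
# well-founded" termination argument on a small, finite example.
from itertools import product

def step(state):
    """One loop iteration: while x > 0 and y > 0, nondeterministically
    decrement x or y. Yields all possible successor states."""
    x, y = state
    if x > 0 and y > 0:
        yield (x - 1, y)
        yield (x, y - 1)

def transitive_closure(states):
    """All pairs (s, t) with t reachable from s in one or more steps."""
    pairs = set()
    frontier = {(s, t) for s in states for t in step(s)}
    while frontier:
        pairs |= frontier
        frontier = {(s, u) for (s, t) in pairs for u in step(t)} - pairs
    return pairs

# Two candidate well-founded relations: "x strictly decreases and was
# non-negative" and the same for y. Neither alone covers every pair,
# but their union does -- which is exactly the proof obligation.
def r1(s, t): return t[0] < s[0] and s[0] >= 0
def r2(s, t): return t[1] < s[1] and s[1] >= 0

states = list(product(range(4), repeat=2))
covered = all(r1(s, t) or r2(s, t) for (s, t) in transitive_closure(states))
print(covered)  # True: every closure pair falls in r1 or r2, so the loop terminates
```

Of course, the real difficulty is doing this symbolically for unbounded state spaces, which is where the program-analysis machinery comes in.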

Terminator is designed to automatically search for such a finite set. The truth is that I spend as much time tinkering with the formulae as I do programming solutions. Often when typesetting mathematics, or writing them on the whiteboard, we are very lazy.

We choose, say, calligraphic “M” to represent a well-founded relation. Why? Uh, I don’t know… perhaps because M is the first letter in the word “measure”? Perhaps “M” helps convey the intuition, maybe it doesn’t. Or perhaps we choose a Greek letter. There are not many Greek letters, so perhaps we choose gamma?

But, wait; gamma is often used to represent other things. There is a lot of re-use of symbols to mean different things, even in the same area of research, and these symbols are not typically chosen for a good reason other than the fact that we all know them.

My friend Tauba Auerbach is an artist who works in, among other things, typography. Over coffee one morning I was telling her about the state of affairs in my area relating to symbols, and she suggested that we try to invent a few new symbols that might help convey the intuition, and simplify the formulae. In the end the problem turned out to be much harder than we’d thought it would be. After months of trying and failing to come up with things that worked, we came up with some rules that the symbols needed to meet:

  1. 4 strokes or less on the whiteboard
  2. Not too curvy or ornamental
  3. Not previously used
  4. Yet, should appear similar to a known symbol if trying to convey something similar to a known concept
  5. Everyone should like it: me, Tauba, and at least 5 randomly chosen researchers from my research area

Just finding symbols that meet constraints 1 through 3 is very difficult. Try it. Most of the “reasonable” four-strokes-or-fewer symbols are already taken (and, or, entailment, plus, subtraction, etc). In the end we came up with about 5 symbols that make my life much easier. Several of the symbols are “parametric”, in the sense that they take a symbol and modify it slightly to create something that’s related to the parameter. For example, our lifting symbol takes a relation R and a function f and builds the set of pairs of elements (s,t) such that the pair (f(s),f(t)) is related by R. Another set of symbols is related to restriction. In mathematics we often use | to restrict a large set or relation by an element or smaller set. I needed 3 or 4 of these symbols, so we ended up using | with additional elements added to it.
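
The two symbol families have direct set-comprehension readings. Here is a small Python sketch (the names `lift` and `restrict` are mine, purely illustrative; the whole point of the project was to replace such verbose encodings with single symbols):

```python
# Plain-function renderings of the lifting and restriction operations
# described above. Names are illustrative, not Cook and Auerbach's notation.

def lift(R, f, domain):
    """Lifting: from a relation R and a function f, build the pairs
    (s, t) whose images (f(s), f(t)) are related by R."""
    return {(s, t) for s in domain for t in domain if (f(s), f(t)) in R}

def restrict(R, subset):
    """Restriction (the | of standard notation): keep only the pairs
    of R whose elements both lie in the smaller set."""
    return {(s, t) for (s, t) in R if s in subset and t in subset}

# Example: R is "strictly greater than" on 0..3; f projects out a
# state's first component, so the lifted relation reads "the first
# component strictly decreases" -- the shape of a termination argument.
gt = {(a, b) for a in range(4) for b in range(4) if a > b}
states = [(x, y) for x in range(4) for y in range(4)]
decreasing_first = lift(gt, lambda p: p[0], states)

print(((3, 0), (1, 2)) in decreasing_first)  # True: 3 > 1
print(restrict(gt, {0, 1}))                  # {(1, 0)}
```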

Can you tell me a little about Terminator? What language is it written in?
Terminator is an automatic tool that can be used to prove termination and other “liveness properties” of C programs. Terminator reduces the problem of program termination and liveness to an iterative search for a candidate termination argument, which is checked and then refined on demand. I could discuss how Terminator works for hours and hours, which probably isn’t appropriate for this interview. Instead, I’ll just tease the reader: to prove termination you need to find good finite representations of infinite executions of systems with potentially unbounded state spaces. There are a number of papers on my website if the reader is interested in hearing more.

Terminator is written in the functional language F#. It’s typical for tools like Terminator to be written in a functional language such as Caml or F#, because F#’s syntax for rich pattern matching and recursive definitions facilitates the development of the kinds of algorithms one typically implements.

Will your research find itself in production by the Windows kernel team? Is it already in production?
We have a few challenges to solve before I’d be willing to make a release that the Windows organization would want. The biggest is the question of data structures: often, to automatically prove termination, you must figure out the shapes of the data structures that specific pointers point to (such as cyclic doubly-linked lists). This is called “shape analysis”. Without this capability, in practice Terminator will fail a lot in the field. Together with the other members of the SLAyer team, we are developing a tool that does precisely this. Things are going very well in this direction, but the tools are not ready for primetime yet. You’ll notice if you look at my technical papers that more than half of them are on topics related to shape analysis.
Why do you think so few people had tackled the problem seriously? Would you blame that on Turing – who famously showed it was impossible to devise an algorithm to prove that any given program will always run to completion?
When I first started working on the problem I was often discouraged by others because “solving the halting problem is impossible, Turing already proved it undecidable”. I would imagine that others in the past listened to this criticism. I think that it’s a matter of defining “impossible” and re-stating Turing’s undecidability result in less technical terms. I usually explain Turing’s result in the following way:

“Turing proved that, no matter what automatic program termination proving method you come up with, it will sometimes fail. However, so long as the answer ‘Fail’ is included in the possible set of answers from the method, then there is no inconsistency. Turing’s technical result relies on the termination proving method having no way to represent failure”.

The real crux of the matter is in how we interpret the word “impossible”. We all use the internet even though we know it will sometimes fail, but that doesn’t make networking “impossible”. A mathematician, in contrast, might say that networking is impossible for this reason. Another example: I know that occasionally the train from London to Cambridge will fail, but that doesn’t make getting to Cambridge “impossible”. In the same way, proving program termination is not impossible, even though it is technically undecidable.
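
This observation can be captured in a few lines of code. The sketch below is my own framing, not Terminator's interface: because the prover's answer type explicitly includes "Fail", it can return an answer on every input without ever contradicting undecidability.

```python
# Illustrative only: a sound-but-incomplete "termination prover" whose
# answer type includes Fail -- exactly the trade Turing's result allows.
from enum import Enum

class Answer(Enum):
    TERMINATES = 1      # a termination proof was found
    NONTERMINATING = 2  # a counterexample execution was found
    FAIL = 3            # the method ran out of ideas: no claim either way

def prove_termination(bound, program, state):
    """Toy prover: run the program for at most `bound` steps. This toy
    never finds counterexamples, so it only answers TERMINATES or FAIL."""
    for _ in range(bound):
        state = program(state)
        if state is None:          # program halted
            return Answer.TERMINATES
    return Answer.FAIL             # budget exhausted; claim nothing

countdown = lambda n: None if n <= 0 else n - 1
print(prove_termination(100, countdown, 5))   # Answer.TERMINATES
print(prove_termination(3, countdown, 1000))  # Answer.FAIL
```

A real prover replaces the step budget with a search for proof arguments, but the shape of the answer type is the point: "Fail" is an honest output, not a contradiction.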

Researchers are coming up with interesting ideas about how to improve programming but are the best of those ideas from research labs and universities percolating into practice fast enough?
I can speak more generally about computer science research, and less about “improving programming”. I think that things are much better than they were ten or fifteen years ago. In some sense we have returned to how things were 50 or 60 years ago before there was a difference between the computer industry and academia.

One thing I love about companies like Microsoft, Google, and Apple is that developers are hired for their brilliance and their ability to adapt. You hear stories from the old days of Xerox PARC, Bell Labs, IBM Research, etc. of product groups not being willing to consider new ideas from the corporate research labs. I have never experienced this. I attribute this to the fact that high-tech firms formed using the hiring and leadership strategies inspired by Robert Noyce often manage to maintain open channels of collaboration between the various points on the spectrum between product development and research. For example, I spent two years away from my pure research endeavour to take a position as a developer in the Windows base operating systems group. It was a fantastic experience that benefited me as well as both the Microsoft Windows and Microsoft Research organizations.

Some pure academics are horrified to hear me advocating against the separation of research and development. But if you read about the history of major scientific and technological advances you’ll see that it was the blurring of the boundary between applied and theoretical work that was key to success. For example, the fact that Turing was at both Bletchley Park and the Cambridge Mathematical Laboratory was vital to the development of the modern computer.

What can you imagine happening to operating systems in the future?
There’s an inherent tension between performance and correctness. Environments like operating systems, language runtimes, web servers, and web browsers are all in the same quandary: “do we run the application/plug-in/etc quickly or safely?” Advances in automatic program analysis and verification have a lot to contribute here. You could imagine a future where automatic analysis/verification tools are integrated with the existing paradigm of compilation, runtimes, and virtualization. For example, an operating system could quickly run programs that have been verified to meet certain properties, whereas programs that have not been statically proved to meet certain guarantees could be run more slowly in a safe sand-boxed environment.
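
That last idea can be sketched as a simple dispatch policy. Everything below is hypothetical (no real OS interface is implied): run natively when static verification already guarantees the required properties, otherwise fall back to a sandbox.

```python
# Speculative sketch of the verified-fast-path idea; all names and
# property strings are invented for illustration.

def run(program, verified_properties, required):
    """Run natively if static verification already guarantees every
    required property; otherwise pay for a slower sandbox at runtime."""
    if required <= verified_properties:    # set inclusion: all proofs present
        return f"native: {program}"
    return f"sandboxed: {program}"

required = {"memory-safe", "terminates-dispatch"}
print(run("driver.sys", {"memory-safe", "terminates-dispatch"}, required))
print(run("plugin.dll", {"memory-safe"}, required))
```

The first call has all required guarantees proved and takes the fast path; the second is missing a proof and is sandboxed.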
I’d like to ask you a couple of general questions about programming. When you’re writing code (with a tricky mathematical invariant) do you always formally prove your code correct? Is it a habit you think all programmers should get into?
It depends. I’m fortunate to be surrounded by a group of very clever, careful, and patient colleagues; both within Microsoft Research and across Europe. Examples include Josh Berdine, Alexey Gotsman, Andrey Rybalchenko, Peter O’Hearn, and Matthew Parkinson. When I’m developing a tricky algorithm I inevitably end up explaining it to someone like Josh Berdine on his whiteboard at 3am. Explaining the correctness of an algorithm to someone like Josh amounts to a full mathematical proof. Do I write the proof down? Sometimes: When I’m writing a paper about the algorithm I usually do, otherwise I usually have some notes and diagrams in my journal but it’s not readable by anyone else beyond myself.
Since you began programming, what’s changed about the way you think about it?
I was very fortunate to see the future of programming long before it was available to others. My first exposure to the development of automatic formal verification tools was when I was at Intel Research around 1996, just after the FDIV bug. We used something called FL, developed by Carl Seger, which was a lazy functional language in which the “Boolean” type was much more symbolic than in traditional languages: binary decision diagrams were used to implement Booleans, rather than the concrete values 0 and 1. Many of the procedures needed for the development of verification tools were built in to the language, and integrated well with the language runtime’s garbage collection, etc. Network-based batch job control (much like today’s “cloud computing”) was well supported at Intel. With those tools you could set up jobs and they’d be run using idle cycles of computers all around the world at various Intel sites. When I left Intel I felt as though I had been dropped into the dark ages.

Nowadays this sort of power is available to everyone in the world. What’s more: with F#, powerful libraries such as Z3, and the various cloud computing services that are now available, developing just seems so easy now!

Do you think F# will take over the industry in much the same way as C++ has and is there a danger of F# becoming too ‘expert friendly’ in the same way that C++ has?
I don’t think that the F# team is worried about taking over the world. They are very focused on making a set of coherent technical decisions that make for an amazing development experience for those programmers for whom it is a good fit. For my purposes F# is the best choice by perhaps 15 orders of magnitude. If I were programming bionic devices I might choose a much lower level language where I could control the memory management.

As long as Don Syme is the lead architect of F# I think there’s 0% risk it will go the way of C++’s complexity. Avoiding complexity has been one of the key themes throughout his career. I have known Don Syme for about 15 years, since we were at Intel Research together. We drove to work together every day. At the time I was doing research on programming languages, and he was doing formal verification (funny, we’ve now traded research areas). During our drives to and from Intel he would tirelessly complain about the complexities created by me and my friends in the area of programming language research (monads, type classes, module systems, etc). His views were perhaps a little too extreme, and perhaps they’ve moderated some, but he still comes from that essential world view.

In my view F# is basically C# where, as much as possible, “let” is the only language construct that you need to know. This is very much by design. Don’s office is next to mine at Microsoft Research and the themes of our discussions remain the same today as they were 15 years ago: making the difficult easy, removing unnecessary complexity. To paraphrase Einstein: F# makes things as simple as possible, but no simpler. F# is like waterfowl: it looks very graceful and calm from above, but below the water its legs are furiously working to protect you from the reality below.

How do you see the future for procedural programming languages? Do you think languages are converging in the direction of mixed functional/OO programming?
Yes. Mixed. It’s the same thing with typed vs. untyped. Right now we’re forced to choose, but this will not always be true. The problem in the past was that we did not have powerful enough program analysis and verification tools, and thus we relied on very draconian language categories and very inflexible type systems to achieve our goals. With modern tools I think we’ll see more use of analysis to make untyped languages run nearly as fast as typed languages, and to use optimization techniques on pure programs written in impure languages.
What’s your desert island list of books for programmers?
  • Bugs in Writing – Lyn Dupré
  • How to Solve It – George Pólya
  • Handbook of Practical Logic and Automated Reasoning – John Harrison
  • A big stack of New Yorker magazines.
What about the Art of Computer Programming? Several people I’ve interviewed have read it from cover to cover. Some have left it on the shelf and use it as reference. And one used it as a monitor stand. What camp are you in?
Honestly, I’ve never read that book. I should. There are a lot of books that you’d probably be shocked that I’ve never read. I tend to read more books and articles that are not about computers and mathematics at all.

On this topic: I’m much more of a believer in the oral tradition in science, instead of technical papers and books. Of course it doesn’t scale well, but I’m fortunate to know many fantastically smart people (e.g. Peter O’Hearn, Andreas Podelski, Mooly Sagiv). If you’d asked me what I would take with me to a desert island, I would not say books. Instead I’d bring one of my many encyclopaedic friends to facilitate a long and entertaining Socratic discussion.