Thursday, 31 May 2018

My Awesome Experience at the Summer School on Formal Techniques (SSFT'18)

I spent the past week at the Summer School on Formal Techniques, and it was an absolutely amazing experience. Chronicled below are the great talks and labs that were part of this week-long program, with my thoughts interspersed. Overall, this was an unforgettable week, where I learnt a lot, made new friends, and had some nice discussions about a lot of very interesting topics. I also found out about things I had no idea about before, so I decided to write down at least a short summary of each thing that happened, in the hope that it might be useful to someone (or, more likely, to future me) as a reference for things to read up on.

Group Photo
Group Photo at SSFT'18

I initially heard about SSFT via a message on our Project Everest Slack. After all, two of our team (Nikhil Swamy and Jonathan Protzenko) were going to give talks and conduct tutorials at this summer school. Since I had only recently (approximately a year ago) started my journey into Formal Methods/Techniques, this seemed like the perfect opportunity to widen my view of the field, one that seems able to touch almost every topic in Computer Science. As a person who started from a more "systems-y" background, I (rightly) thought that this would give me the requisite theoretical background to dive deeper into the field, in addition to practical, hands-on experience with other commonly used tools and techniques. I'd been using F* for almost a year by now, but hadn't yet had a chance to mess around with the other tools that are commonly used (except for Dafny, which is a beautiful language to start off with in this field, btw). The fact that approximately half the time was dedicated to the labs surely was a bonus!

The Summer School was held at Menlo College in Atherton, California, from Sat May 19 to Fri May 25, 2018. I must thank SRI for organizing it, and NSF for funding our travel and stay. The organization and management of the entire summer school were absolutely impeccable. Oh, and a special shout-out goes to the chefs at Menlo College, who made absolutely marvelous food that kept us powered up through this week-long intense mental workout!

As for the actual summer school, it started off with an optional background course, titled "Speaking Logic", which ran over the first two days (i.e., Sat/Sun). Despite it being optional, it seemed like most of us at the summer school attended, and it definitely was worth it. N. Shankar took us on a beautiful journey through the fundamentals and principles of logic and formal methods. Along the way, we made stops at Naive Set Theory, Propositional Logic, Cook's Theorem, Reductions to SAT, Proof Systems, Minimal Logic, and Meta-Theorems. We also stopped off at Interpolation, Resolution, First Order Logic, SMT Solving, the Tarski-Knaster Theorem, and Bounded Model Checking. Alongside these (already amazing) topics were the awesome detours that Shankar took, with his own anecdotes and stories. A bunch of these even led to some intense discussions during and after the lectures! We also proved a bunch of things in PVS, which definitely seems like a great tool to get people started with some serious formal proofs :)
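
To give a quick taste of the reduction-to-SAT idea from the course: a propositional formula is valid exactly when its negation is unsatisfiable. Here's a tiny sketch of my own (using the Z3 Python bindings rather than PVS, purely for illustration):

    # Prove Peirce's law ((p -> q) -> p) -> p by showing that its
    # negation has no satisfying assignment (the reduction-to-SAT idea).
    # Assumes the z3-solver package: pip install z3-solver
    from z3 import Bool, Implies, Not, Solver, unsat

    p, q = Bool('p'), Bool('q')
    peirce = Implies(Implies(Implies(p, q), p), p)

    s = Solver()
    s.add(Not(peirce))           # look for a falsifying assignment
    assert s.check() == unsat    # none exists, so Peirce's law is valid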

After this deep dive into Logic, Stéphane Graham-Lengrand took us on an adventure through Type Theory, where the Proofs-as-Programs paradigm reigns supreme. Type Theory is the foundation for a wide variety of interactive theorem provers, such as Coq, Agda, Lean, Matita, etc. During this talk, we went over the Lambda Calculus and Simple Types, Intuitionistic Logic (and Constructivism), computing with intuitionistic proofs, HOL with proof terms, dependent types, and more, all building up to the final bomb that blew our minds: Homotopy Type Theory. Stéphane used an extremely well-thought-out analogy, involving a vacuum cleaner power cord, to explain the basics and motivation of Homotopy Type Theory (HoTT). I'd always wondered why people got excited when talking about equalities, but with just this one example, Stéphane has ensured that I am definitely going to dive much deeper into this topic sometime soon! BTW, this entire talk was punctuated with proofs done in Coq, and the exercises can be found here.
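
To make "Proofs-as-Programs" concrete: a proposition is a type, and a program of that type is a proof of it. The talk's proofs were in Coq; the following is just my own minimal sketch of the same idea in Lean:

    -- A term inhabiting a type is a proof of the corresponding
    -- proposition: the function returning its first argument proves
    -- A -> (B -> A).
    def const_proof (A B : Prop) : A → B → A :=
      fun a _ => a

    -- A proof of A ∧ B is a pair of proofs; swapping the pair proves
    -- B ∧ A.
    def and_swap (A B : Prop) : A ∧ B → B ∧ A :=
      fun ⟨a, b⟩ => ⟨b, a⟩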

As day 2 came to an end, some of us realized that we'd covered a lot of ground and needed some time to digest it, so we went back over our notes and the slides, trying to make sure that we'd actually got the details straight. BTW, the Speaking Logic slides are not only available online but also quite detailed, and I would recommend that interested readers take a look at them for an overview of what logic, proofs, and formal methods are about. However, the slides do not include all of the amazing anecdotes from the actual talks, so maybe you should consider joining the summer school when it happens next? ;)

Starting from Day 3 (i.e., Monday), we had a change of pace, with the schedule switching to a series of talks until mid-afternoon, followed by a couple of lab sessions.

Monday's talks started with a wonderful talk by Emina Torlak about Rosette. The talk, titled "Solver-Aided Programming", was about a programming model that integrates solvers into the language, providing constructs for program verification, synthesis, and more. A strong emphasis was placed on the paradigm of "Verify, Debug, Solve, Synthesize", implemented in Rosette as powerful, user-friendly constructs. Since Rosette is built on top of Racket, we get all the benefits of Racket (and thus all the LISP goodness), which leads to a very elegant way to program using solvers. The four parts of the paradigm can be thought of as the following queries that a user might want to ask a language: (1) Verify: "find me an input on which the program fails, or prove that it cannot fail"; (2) Debug: "localize the bad parts of the program"; (3) Solve: "find values that repair this failing run"; and (4) Synthesize: "find code that repairs the program".
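
Rosette expresses these queries directly over Racket programs; as a rough, Rosette-free illustration of what a "verify" query boils down to, here's a toy sketch of my own using the Z3 Python bindings (the buggy_abs example and its names are mine):

    # "Verify": find an input on which the program fails, or prove that
    # none exists. Program under test: absolute value on 32-bit integers.
    # Assumes the z3-solver package.
    from z3 import BitVec, If, Solver, sat

    x = BitVec('x', 32)              # a symbolic 32-bit input
    buggy_abs = If(x >= 0, x, -x)    # -x overflows when x is INT_MIN

    s = Solver()
    s.add(buggy_abs < 0)             # search for a violating input
    if s.check() == sat:
        print("counterexample:", s.model()[x])   # the INT_MIN bit pattern
    else:
        print("verified: abs(x) >= 0 for all inputs")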

The next talk was by Nikhil Swamy, about F*. It explained how there is a gap between interactive proof assistants (such as Coq) and semi-automated verifiers of imperative programs (such as Dafny), and how F* is about bridging that gap. F* has ML-like syntax and dependent types, which means that verification is done through type checking. The rest of the lecture continued with examples of (simple) proofs written in F* for functional programs, as well as a discussion of how to write proofs for effectful code, using monadic effects and by modeling the heap. Personally, despite having worked with F* for almost a year now, I still gained some new insights, such as the fact that subtyping is possible because refinements are proof-irrelevant once the proof is finished.
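
For a feel of "verification as type checking": in a dependently typed language, a refined type carries a proof obligation, and a definition only type-checks once that obligation is discharged. F* writes such a type as x:nat{x > 0} and discharges the obligation with an SMT solver; here's a rough analogue of my own in Lean, just to show the flavor:

    -- A refinement-style type: naturals that are provably positive.
    abbrev Pos := { n : Nat // n > 0 }

    -- Constructing a Pos means discharging the proof obligation, so
    -- type-checking the definition is the verification step.
    def three : Pos := ⟨3, by decide⟩

    -- Functions on Pos may rely on the refinement.
    def pred (p : Pos) : Nat := p.val - 1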

Next up, after lunch, was a talk by the unforgettably energetic Mooly Sagiv, titled "Modularity for decidability of deductive verification with applications to distributed systems". It took us through the motivation for why decidability is crucial for verification, and how, despite this seeming restrictive (since the logic becomes less expressive), we are still able to state and prove the interesting properties that are needed, at least in the case of distributed systems. Specifically, this decidable deductive verification was demonstrated using Ivy. The basic idea is that when a verifier goes into the "unknown" state (i.e., it diverges; aka "I can't decide"), the person working with the verifier gets very little (if any) information back: the property being verified might be true, or might have a counter-model (read: counter-example), but nothing further is known. If, however, we restrict ourselves to a logic that is decidable, then we are guaranteed never to reach the "unknown" case. The talk then went on to some examples of verification in this decidable world (and how to represent some things that seem "not expressible", but turn out to be expressible when you look at them from a different perspective). What was really interesting was how many fewer lines of proof were required per line of code in Ivy, in comparison with other tools. More details about how it achieves this would come in the following lecture.
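
Stripped to its essence, the core check behind this style of deductive verification is that a user-supplied invariant is inductive: it holds initially and is preserved by every transition. Here's a toy version of that check, my own sketch using the Z3 Python bindings (Ivy's point is that it restricts the logic so such checks are decidable, which arbitrary Z3 queries are not):

    # Invariant "x is even" for the system: init x = 0, step x' = x + 2.
    # Both queries must be UNSAT for the invariant to be inductive.
    # Assumes the z3-solver package.
    from z3 import Int, Not, Solver, unsat

    x, x2 = Int('x'), Int("x'")
    inv = lambda v: v % 2 == 0

    init = Solver()                       # 1) init => inv
    init.add(x == 0, Not(inv(x)))

    step = Solver()                       # 2) inv /\ step => inv'
    step.add(inv(x), x2 == x + 2, Not(inv(x2)))

    assert init.check() == unsat and step.check() == unsat
    print("invariant is inductive")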

After a short break, we had the lab sessions, where we split into 2 groups (using a great way to introduce randomness, btw: if the first letters of both your first and last name lie in the same half of the alphabet, you belong to group A, else group B). Both groups had all the lab sessions, just in a different order; since I was in Group A, the order below is Group A's.

Our first lab session was by Emina Torlak, where we got to dive into working with Rosette. As a warmup, we worked on finding and fixing a bug in a tiny BitVector example. As a larger example, we worked on Sudoku. Starting only with a checker (i.e., a program that, given a Sudoku solution, answers "yes, it is a valid solution" or "no, it is not"), we turned it into a Sudoku solver (find a solution, given a puzzle), a validity checker (check whether a puzzle has exactly 1 solution), and a minimal puzzle generator (generate a puzzle with the fewest constraints needed to be valid). What was extremely interesting was how easy it was to build each of these using just the checker. Rather than gaining domain knowledge about how to write solvers, generators, etc., we simply piggy-backed on the already-implemented checker, and with only a single-digit number of lines of code were able to implement all of this! Unfortunately, I do not know of a public link where this tutorial is available, but it is a very well designed and thought out tutorial that I'd recommend people try if they can find it.
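
The lab itself was in Rosette, and I can't reproduce its material here, but the "solver for free from a checker" trick is easy to convey. Here's my own miniature 4x4 version using the Z3 Python bindings: valid is the checker, and running that same predicate over symbolic cells plus the givens is the solver:

    # 4x4 Sudoku: the checker is a predicate over a grid; applying it to
    # symbolic cells turns it into a solver. Assumes the z3-solver package.
    from z3 import And, Distinct, Int, Solver, sat

    cells = [[Int(f"c{r}{c}") for c in range(4)] for r in range(4)]

    def valid(g):  # the checker: digit ranges, rows, columns, 2x2 boxes
        rules = [And(1 <= g[r][c], g[r][c] <= 4) for r in range(4) for c in range(4)]
        rules += [Distinct(g[r]) for r in range(4)]
        rules += [Distinct([g[r][c] for r in range(4)]) for c in range(4)]
        rules += [Distinct([g[r + i][c + j] for i in range(2) for j in range(2)])
                  for r in (0, 2) for c in (0, 2)]
        return And(rules)

    s = Solver()
    s.add(valid(cells), cells[0][0] == 1, cells[1][2] == 3)  # the givens
    assert s.check() == sat
    m = s.model()
    for row in cells:
        print([m[cell].as_long() for cell in row])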

Next up was the lab session by Mooly Sagiv, about Ivy. Here, he gave a live demonstration of using Ivy to verify a protocol for mutual exclusion. In contrast with my previous experiences with verification, this was a welcome change: the tool itself gives you a graphical counter-example when it is unable to prove that a certain invariant holds. While it still falls upon the user of the tool to provide stronger invariants that can be proven by induction, being able to see these counter-examples, instead of simply a "timed-out" as is more likely with other tools, is definitely a game changer.

The next day (Tuesday), we started off with a talk by Andreas Abel about Agda, the dependently typed functional programming language and proof assistant. The material for what was covered can be found here. The talk had a strong "do this live" approach, where Andreas explained things as he worked on Agda code, proving things as he went. We also went over an elegant representation and ordering invariant for binary search trees in Agda, based on Conor McBride's paper "How to keep your neighbours in order". Personally, I found that Agda had a very elegant interface which made it more natural to think about proofs, and this made understanding the proofs themselves a lot easier.
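
The key trick in that paper is to index the data type by ordering bounds, so that out-of-order structures simply fail to type-check. Here's a heavily simplified version of the idea from me (lists instead of trees, Lean instead of Agda, and only a lower bound), just to show the shape:

    -- Lists of naturals indexed by a lower bound: each cons carries a
    -- proof that its head is above the bound, and the tail is bounded
    -- by the head. Unordered lists do not type-check.
    inductive OList : Nat → Type where
      | nil  : {lb : Nat} → OList lb
      | cons : {lb : Nat} → (x : Nat) → lb ≤ x → OList x → OList lb

    -- The checker verifies 0 ≤ 1 and 1 ≤ 3; swapping 3 and 1 would fail.
    def ex : OList 0 := .cons 1 (by decide) (.cons 3 (by decide) .nil)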

The next talk was by Emina Torlak. Yes, almost every speaker gave 2 talks and had 2 lab sessions. This time, the talk was about how to build a solver-aided language. The classic (read: hard) way to build such a tool is to build a symbolic compiler, which requires expertise in Programming Languages, Formal Methods, and Software Engineering. A much easier method is to build an interpreter for the language, and have something use that interpreter to build all our tools for us. Such a tool is a significant technical challenge, and is exactly what Rosette gives us. We can simply host a (deep or shallow) DSL in Rosette (which, since Rosette is built on top of Racket, is very easy), and then easily build the 4 tools: verify, debug, solve, and synthesize. The talk then went into details about how this massive technical hurdle is overcome (since neither symbolic execution nor bounded model checking is up to the exact requirements of this task of precise symbolic encoding): it is done via type-driven state merging. This was followed by 3 different use-cases (out of many, many others) where Rosette has been used to get some very interesting results.

The last talk before the lab break was by Jonathan Protzenko, titled "Verified low-level programming embedded in F*". Specifically, the talk was about a low-level subset of F*, named Low*, which allows one to write and reason about low-level C code. The talk went into detail about how different parts of C are modeled in Low*, and also covered the KreMLin compiler, which compiles Low* code to readable C. Since proof erasure is part of this process of compiling down to C (because C has no notion of proofs by itself), we only need to use the low-level subset in the actual computational bits, and can use the full power of F* in the proofs. There is a tutorial for Low* available online (currently a work in progress).

The first of the labs on the same day was part 2 of Emina Torlak's lab. In this, we worked on actually building DSLs: first a shallowly embedded DSL, followed by a deeply embedded DSL, for a circuit-based programming language. An important thing to note here is that shallow DSLs are faster and easier to implement, and are almost always the right choice for the "verify" and "debug" operations. However, if we want any sort of "synthesize" operation, then shallow DSLs fall short extremely quickly, and deep embeddings work out much better.
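
To make the shallow/deep distinction concrete, here's a toy contrast of my own (plain Python rather than Rosette/Racket, with made-up names):

    from dataclasses import dataclass

    # Shallow embedding: a circuit is just a host-language function.
    # You can run it, but a tool cannot inspect its structure.
    def shallow_xor(a, b):
        return (a or b) and not (a and b)

    # Deep embedding: a circuit is data (an AST), so a tool can traverse
    # it, count gates, or search over it, which is what synthesis needs.
    class Gate: pass

    @dataclass
    class Var(Gate): name: str

    @dataclass
    class NotG(Gate): arg: Gate

    @dataclass
    class AndG(Gate): left: Gate; right: Gate

    @dataclass
    class OrG(Gate): left: Gate; right: Gate

    def interpret(g, env):
        if isinstance(g, Var):  return env[g.name]
        if isinstance(g, NotG): return not interpret(g.arg, env)
        if isinstance(g, AndG): return interpret(g.left, env) and interpret(g.right, env)
        if isinstance(g, OrG):  return interpret(g.left, env) or interpret(g.right, env)

    deep_xor = AndG(OrG(Var("a"), Var("b")), NotG(AndG(Var("a"), Var("b"))))
    env = {"a": True, "b": False}
    assert interpret(deep_xor, env) == shallow_xor(env["a"], env["b"])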

The second lab on Tuesday was part 1 of the F*/Low* lab, conducted by Nikhil Swamy. Here, we went through some select examples from the F* Tutorial. Special emphasis was placed on how one comes up with these proofs, and on how F* is able to infer most of the proof, requiring only a "here's the form of the induction" argument. One interesting point that came up during this discussion was the possibility of F* automatically "guessing" the form of the induction. This seems quite doable in a large number of cases and is worth considering, though F* doesn't support it at the moment, and one needs to explicitly point out things like "do an induction on this list" (after which the rest of the proof is automatic). The tutorial is well written and thought out in a way that is self-contained, while not repeating concepts that would be known to an OCaml / F# developer. There are some nice motivating examples/stories in the online tutorial too. I would definitely recommend that interested readers try these out (though some might first need a crash course on OCaml / SML / F# syntax, if they aren't used to it).
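
As a small illustration of "supply the shape of the induction, let the machinery do the rest", here's my own sketch in Lean (the lab was in F*, where the SMT solver fills in even more of each case):

    -- The human contribution is "induct on the list"; each case is then
    -- discharged by simp using the induction hypothesis.
    def sum : List Nat → Nat
      | []      => 0
      | x :: xs => x + sum xs

    theorem sum_append (l1 l2 : List Nat) :
        sum (l1 ++ l2) = sum l1 + sum l2 := by
      induction l1 with
      | nil => simp [sum]
      | cons x xs ih => simp [sum, ih, Nat.add_assoc]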

With this, Tuesday came to a close, and we started Wednesday with a talk by Dirk Beyer about CPAchecker. The talk, titled "Configurable Software Model Checking - A Unifying View", was about how one can unify different techniques, ideas, and algorithms in program analysis and verification into a single framework that also allows for the creation of "intermediate" algorithms. This configurability lets one move between imprecise-but-scalable data-flow analysis and precise-but-expensive model checking, through better combinations of abstractions. Dynamic precision adjustment gives finer tuning of the abstractions, making precision itself adjustable along the same precise/expensive versus imprecise/scalable axis, and adjustable block encoding changes how many statements are handled together at once. The rest of the talk revolved around CPAchecker's features, concepts, ideas, algorithms, and architecture. We'd get to play around with it in the lab soon! Oh, and btw, CPAchecker has regularly been doing well in SV-COMP, the Competition on Software Verification, coming in with Gold in 2018!
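
The two endpoints of that spectrum are easy to caricature. Here's a toy sketch of my own (ordinary Python, not CPAchecker code) for the program "x := 0; repeat x := x + 2" and the property "x is always even": explicit-state exploration is precise but enumerates states, while a parity abstraction reaches a fixpoint immediately and, for this particular property, loses nothing:

    # Explicit-state exploration ("model checking"): fully precise, but
    # the number of visited states grows with the bound.
    def explicit_check(bound):
        seen, frontier = set(), [0]
        while frontier:
            x = frontier.pop()
            if x in seen:
                continue
            seen.add(x)
            if x % 2 != 0:
                return False
            if x + 2 <= bound:
                frontier.append(x + 2)
        return True

    # Abstract interpretation over a parity domain ("data-flow"): one
    # abstract state, immediate fixpoint, still precise enough here.
    def parity_check():
        state = "even"                         # abstracts x = 0
        post = {"even": "even", "odd": "odd"}  # x + 2 preserves parity
        while post[state] != state:            # already a fixpoint
            state = post[state]
        return state == "even"

    print(explicit_check(10_000), parity_check())  # True True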

Next up: part 2 of Andreas Abel's talk. He continued with the "doing it live" approach, but this time we looked at examples from programming languages, like the representation of expressions, evaluation, and equational reasoning. Again, the files can be found here. In my opinion, understanding these things well involves following along and proving them in Agda yourself, so I would 100% recommend that interested readers take a look at the Lec2.zip file on the website and try proving stuff. The comments in these files are really helpful, especially if you've already gone over the files from Lecture 1. I have to specially point out how amazing it is that time was put in to make sure these examples were extremely well documented, making them easy enough for a beginner to follow. Being at the talk helped a lot too, though, and probably saved many of us a bunch of time in understanding these ideas.

And then, right after lunch, part 2 of Mooly Sagiv's talk. This time, the talk was much more about the technical details of Ivy and its 3 most important principles: (1) Which first-order structures exist in the language: abstract states and imperative updates. This gives us a "step towards decidability". (2) Theories as add-ons. The user axiomatizes domain knowledge in EPR (which is what gives the decidability), soundness is checked, and we get reusable domain knowledge with predictable automation. (3) Modularity for breaking quantifier alternation cycles. It falls upon the user to break these cycles; Ivy will only point out that cycles exist, instead of trying to break them by itself. One very interesting idea that falls out of this is that it sometimes becomes useful to abstract functions as relations instead.
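
The functions-as-relations idea can be sketched concretely: replace a function symbol by a relation plus a universally quantified functionality axiom, so everything stays in a forall-only shape, which is the kind of fragment EPR-style decidability arguments need. A toy sketch of my own in the Z3 Python bindings (not Ivy syntax):

    # Instead of next : Node -> Node, use a relation R(x, y) with a
    # functionality axiom. Assumes the z3-solver package.
    from z3 import (DeclareSort, Function, BoolSort, Consts, ForAll,
                    Implies, And, Solver, unsat)

    Node = DeclareSort('Node')
    R = Function('R', Node, Node, BoolSort())
    x, y1, y2 = Consts('x y1 y2', Node)

    functional = ForAll([x, y1, y2],
                        Implies(And(R(x, y1), R(x, y2)), y1 == y2))

    # The axiom does real work: R cannot map a node to two distinct nodes.
    a, b, c = Consts('a b c', Node)
    s = Solver()
    s.add(functional, R(a, b), R(a, c), b != c)
    assert s.check() == unsat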

As for the labs for Wednesday, we started off with the first of the Agda labs, by Andreas Abel. In this lab, we worked on some simple definitions and proofs in Agda. I'd strongly recommend going over the exercises, which are quite self-contained and have some interesting bits that one might get stuck on before realizing where to go next.

The next lab was the first of the CPAchecker labs, by Dirk Beyer. Here, we followed parts of the CPAchecker Tutorial, which is also very self-contained. It contains a bunch of nicely chosen examples that help in identifying different features of CPAchecker, and it has a nice progression to it.

And with that, Wednesday drew to an end. By now, we'd all had a lot of interesting ideas, works, thoughts, anecdotes, stories, and concepts explained to us, and while to some it might have seemed like an overwhelming amount, to most of us (or at least to me) this was a treasure trove of knowledge in a short, condensed, concentrated form. Personally, I was having an amazing time. I must also bring up the fact that the other students at the summer school were a major factor in this: everyone was brimming with ideas and worked on such varied topics that no matter who you were talking to, there was always something amazing to learn.

Anyways, back to the talks: we are now on Thursday. We started the day with a talk titled "Verifying Properties of Binarized Deep Neural Networks", by Nina Narodytska. With machine learning (and more specifically, deep neural networks) becoming the rage and the norm in a lot of industries, it becomes essential to actually understand their properties, especially robustness to perturbation. In this talk, a specific class of neural networks was studied via an encoding to Boolean Satisfiability (i.e., SAT). Once the network is encoded as a SAT problem, one can leverage the full power of modern SAT solvers to scalably verify properties of these binarized deep neural networks. An arXiv pre-print of the work can be found here.
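
To show the flavor of such an encoding, here's a toy of my own: a single binarized neuron over inputs in {-1, +1}, with +1 encoded as True. (The paper builds genuine SAT encodings, e.g. via cardinality constraints; this little integer-flavored Z3 version is only meant to convey the idea.)

    # A binarized neuron: inputs and weights are in {-1, +1}, and the
    # neuron fires iff the weighted sum is nonnegative.
    # Assumes the z3-solver package.
    from z3 import Bools, If, Not, Solver, Sum, unsat

    xs = Bools('x0 x1 x2 x3')
    ws = [1, 1, 1, -1]                       # fixed, "trained" weights
    signed = [If(x, w, -w) for x, w in zip(xs, ws)]
    fires = Sum(signed) >= 0                 # the neuron's output

    # Property: if the three positively weighted inputs are all +1, the
    # neuron fires regardless of the fourth input.
    s = Solver()
    s.add(xs[0], xs[1], xs[2], Not(fires))   # look for a violation
    assert s.check() == unsat                # none exists: property holds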

Next up was the second talk by Dirk Beyer, which was split into 4 parts (again, the material can be found here). One of the parts was about how one might combine different verifiers. We currently have a lot of verifiers, thanks to competitions such as SV-COMP, but it would be great to be able to leverage the strengths of one to help another. This is where conditional model checking comes in. A really elegant "reducer"-based construction was shown, which allows one to basically "import" any verifier and have it use the results of a previous conditional verifier. Another part was about verification with witnesses, covering witness validation as well as stepwise refinement of witnesses. This is another case where multiple verifiers can work in conjunction, either to aid each other or to provide higher assurance in each other's results. Yet another idea was execution-based validation of witnesses; this one is especially important when talking to people outside the verification community, since they respond much better to "here's a test case that breaks the software" than to "here's a set of paths which might lead to a break in the software".

Finally, the last talk of the day was by Gordon Plotkin. This talk, titled "Some Principles of Differentiable Programming Languages", was an absolute beauty! In it, he walked us through what differentiable programming languages are, and why they are necessary (as well as difficult). He then explained some previous foundational work that might have been useful, if it weren't for those pesky partial functions. From there, he walked us through his thought process in designing a language that has differentiation as a fundamental operation (despite the language having conditionals and looping). Personally, I found it extremely fascinating to see the thought process that leads to such an elegant language: it starts off extremely messy, and keeps getting messier, until suddenly beauty emerges at the end. There is a whole bunch of notes that I wrote down during this talk, but I believe nothing can do justice to summarizing it. It was filled with interesting side-notes, anecdotes, and ideas that could each be talks of their own.
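
The talk was about language design rather than implementation, but to see why one might want differentiation as a first-class operation at all, here's a toy of my own: forward-mode automatic differentiation with dual numbers, where every value carries its derivative along with it. The conditional in f is exactly the kind of wrinkle (piecewise differentiability) that such a language has to take a stance on:

    # Forward-mode AD with dual numbers: (val, dot) where dot tracks the
    # derivative of val with respect to the input.
    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, other):
            return Dual(self.val + other.val, self.dot + other.dot)
        def __mul__(self, other):  # product rule
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)

    def derivative(f, x):
        return f(Dual(x, 1.0)).dot   # seed with dx/dx = 1

    # Only piecewise differentiable, because of the branch.
    def f(x):
        return x * x if x.val > 0 else x + x

    print(derivative(f, 3.0))    # 6.0 (d/dx of x^2 at 3)
    print(derivative(f, -2.0))   # 2.0 (d/dx of 2x at -2)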

Back to the labs: we started Thursday's lab session with part 2 of the Ivy lab, by Mooly Sagiv. This time, we got to actually play around with Ivy and truly understand the thought process behind coming up with inductive invariants while proving things. We specifically took up the example of leader election in a ring, and tried to prove that at the end of the protocol, exactly one leader is elected. Due to Ivy's fast turn-around in responding to the written invariants, as well as its relatively easy-to-understand graphical counter-examples, the exercise became much more about identifying stronger and stronger properties that might help prove the property we want, rather than fighting with the prover, which is what seems to happen in a lot of automated proof systems. And by the time we are done, the inductive invariants we've written also give a very nice bird's-eye view of the proof.

Next lab for the day: part 2 of the F*/Low* lab, conducted by Jonathan Protzenko. Here, we proved correctness and other properties of some short examples involving machine integers, references, and buffers. The code we worked on is heavily commented and extremely easy to follow after the F* tutorial; I would personally recommend doing it to quickly get a handle on the basics of Low*. These examples, however, should soon become part of the (currently work-in-progress) Low* tutorial.

Next up: the Banquet. Everyone had an awesome time. And here's a picture of almost everyone who was there (it looks like some people are missing from the photo, probably because it was taken very close to the end of the day):

Group Photo
At the Banquet


Finally, we arrive at the last day of the summer school (did a whole week really go by so fast?!). We started the day with 2 lab sessions (yep, labs instead of talks at the start of the day).

The first was part 2 of the CPAchecker lab, by Dirk Beyer. Here, we continued with the tutorial, but this time we concentrated on the parts about combining verifiers and about witness generation/checking. Personally, I found it interesting (and unexpected) that the reducer generated (arguably) readable code. Overall, this lab helped cement the ideas that were discussed in Dirk's second talk.

The second was part 2 of the Agda lab, by Andreas Abel. This time, we looked at more definitions and proofs in Agda (see Exercises2.agda). In the course of this, we also ended up learning about the various proof styles that are possible in Agda, especially when it comes to auxiliary "helper" lemmas. Proving the decidability of various things was a nice exercise, and is something I'd definitely recommend trying.

A fitting end to the summer school was a talk by Edward A. Lee, titled "What Good are Formal Methods?". Based loosely on his book "Plato and the Nerd", he walked us through the nuances of how combining deterministic systems can lead to non-determinism, and how moving between the viewpoints of a scientist and an engineer is essential to looking at models the right way. Everything we say and prove about systems is actually about a model of the system, and depending on the viewpoint, either the model is flawed (scientist) or the system/realization is flawed (engineer). He then went on to talk about non-falsifiable theories, with the "Digital Physics" hypothesis taken as a prime example. Another interesting direction was the incompleteness of determinism, which he showed via the concept of "Superdense Time". This talk was an amazing journey through a lot of different ideas and concepts, from a variety of fields, that we do not even think about on a regular basis. I am definitely going to read the book, because if it is anything like the talk, it should be a joy to read!

And finally, we come to the end of this brilliant, beautiful and amazing week. I got a chance to meet such awesome people, discuss mind-shattering ideas, talk about random topics in great depth, and make some great new friends and acquaintances. I hope to stay in touch with as many of you as I can, and hope to meet again sometime really soon!