Online Worldwide Seminar on Logic and Semantics (OWLS)
OWLS has now been running for close to two years and has played an important role in keeping the community connected in difficult times. As things slowly get back to normal, and with the Zoom fatigue that has built up over the last year, the organizers feel that this is a good time to take a break from the weekly seminars. We might still organize some special edition seminars or revive the series later in the year!
Thanks for joining us and participating so actively!
The OWLS mailing list is still available. It will be used to inform people should the seminar resume. To subscribe, or manage an existing subscription, visit this link:
Messages will be sent from the address cl-owls@lists.cam.ac.uk.
Seminars used to take place on Wednesdays at 2pm UTC.
Seminars were about 45 minutes long including questions. Every other week, the seminar hosted a young researcher to present their work; these seminars are labelled OWLS-YR.
Here is the list of past seminars in reverse chronological order.
Updating beliefs (also called conditioning) is an important technique in probabilistic reasoning. It may happen that the evidence for such conditioning is fuzzy, for instance when I'm 80% sure that I heard the alarm. In such situations there are two update rules, due to Pearl and to Jeffrey. They give wildly different outcomes and satisfy different mathematical (logical) properties. What makes this situation even more uncomfortable is that it is not clear when one should apply which rule. Bayesian probability plays an important role in modern cognition theory. An intriguing question is whether the human mind works according to Pearl or Jeffrey. This presentation puts all this in a mathematically precise perspective.
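As a rough illustration of why the distinction matters (my own toy example in Haskell, not taken from the talk; the prior and likelihoods are invented), the two rules can be computed side by side and already disagree on a two-element hypothesis space:

```haskell
-- Toy illustration (not from the talk): Pearl vs Jeffrey updating with the
-- soft evidence "80% sure I heard the alarm".  All numbers are made up.
type Dist a = [(a, Double)]

prior :: Dist Bool                      -- P(burglary)
prior = [(True, 0.1), (False, 0.9)]

likAlarm :: Bool -> Double              -- P(alarm | burglary)
likAlarm True  = 0.9
likAlarm False = 0.2

normalize :: Dist a -> Dist a
normalize d = [ (x, p / total) | (x, p) <- d ]
  where total = sum (map snd d)

-- Sharp conditioning on "alarm = b" (ordinary Bayesian update).
condition :: Bool -> Dist Bool
condition b = normalize
  [ (h, p * (if b then likAlarm h else 1 - likAlarm h)) | (h, p) <- prior ]

-- Jeffrey's rule: mix the two sharp posteriors with the evidence weights.
jeffrey :: Dist Bool
jeffrey = [ (h, 0.8 * pA + 0.2 * pNA)
          | ((h, pA), (_, pNA)) <- zip (condition True) (condition False) ]

-- Pearl's rule (virtual evidence): use 0.8/0.2 as likelihoods, renormalize.
pearl :: Dist Bool
pearl = normalize
  [ (h, p * (0.8 * likAlarm h + 0.2 * (1 - likAlarm h))) | (h, p) <- prior ]

main :: IO ()
main = do
  print jeffrey   -- roughly [(True,0.27),(False,0.73)]
  print pearl     -- roughly [(True,0.20),(False,0.80)]: a different posterior
```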
In this talk, we investigate whether a categorical semantics can be given to the (non-session-typed) π-calculus in the same way that a cartesian closed category is a model of the λ-calculus. We show that an affirmative answer can be given under the assumption that special processes, called forwarders, behave as identities. This is shown by designing a variant of i/o-typed π-calculus and a categorical structure called compact closed Freyd category and establishing a correspondence between them. The correspondence is not ad-hoc in the sense that the definition of compact closed Freyd category is fairly standard: compact closed Freyd categories combine two well-known structures, namely, closed Freyd category and compact closed category. The correspondence, however, is not completely satisfactory because the assumption that "forwarders behave as identities" is questionable from the operational/behavioral viewpoint. Unfortunately, the statement that "forwarders behave as identities" is not valid for the majority of the conventional behavioral equivalences for the π-calculus. We examine and discuss if there are ways to validate the assumption by restricting the language or modifying the operational semantics. (Based on joint works with Takeshi Tsukada)
The problem of identifying a probabilistic context free grammar has two aspects: the first is determining the grammar's topology (the rules of the grammar) and the second is estimating probabilistic weights for each rule. Given the hardness results for learning context-free grammars in general, and probabilistic grammars in particular, most of the literature has concentrated on the second problem. In this work we address the first problem. We restrict attention to structurally unambiguous weighted context-free grammars (SUWCFG) and provide a query learning algorithm for structurally unambiguous probabilistic context-free grammars (SUPCFG). We show that SUWCFG can be represented using co-linear multiplicity tree automata (CMTA), and provide a polynomial learning algorithm that learns CMTAs. We show that the learned CMTA can be converted into a probabilistic grammar, thus providing a complete algorithm for learning a structurally unambiguous probabilistic context free grammar (both the grammar topology and the probabilistic weights) using structured membership queries and structured equivalence queries. A summarized version of this work was published at AAAI 21.
We provide axioms that guarantee a category is equivalent to that of continuous linear functions between Hilbert spaces. The axioms are purely categorical and do not presuppose any analytical structure such as probabilities, convexity, complex numbers, continuity, or dimension. We'll discuss the axioms, sketch the proof of the theorem, and survey open questions, further directions, and context. (Based on joint work with Andre Kornell arxiv:2109.07418.)
Automata models between determinism and nondeterminism can retain some of the algorithmic properties of deterministic automata while enjoying some of the expressiveness and succinctness of nondeterminism. In the setting of regular languages two such models -- namely good-for-games automata and history-deterministic automata -- coincide, and have known applications in reactive synthesis. In this talk, I discuss what happens when these notions are lifted to the quantitative setting. Perhaps surprisingly, the two automata models are no longer equivalent. This talk will explore the landscape of these automata classes and their applications to different quantitative synthesis problems.
Solving parity games has been of great interest due to its unique complexity status. In this talk, we will look at the Strahler number of a parity game and argue that it is a robust, and hence arguably natural, parameter, as it coincides with its alternative version based on trees of progress measures and—remarkably—with the register number defined by Lehtinen (2018). We show that parity games can be solved in quasi-linear space and in time that is polynomial in the number of vertices and linear in (d/2k)^k, where k is the Strahler number. We obtain one algorithm by providing small Strahler universal trees. Since most known techniques for solving parity games can be viewed as a search for winning sets for the players with the help of universal trees, it makes sense to build faster algorithms parametrised by these trees. We will explore some algorithms where these trees are explicitly taken as input, and we compare their advantages and disadvantages. This talk is primarily based on two works: 1) The Strahler Number of a Parity Game with Laure Daviaud and Marcin Jurdziński and 2) A Symmetric Attractor-Decomposition Lifting Algorithm for Parity Games with Marcin Jurdziński, Rémi Morvan and Pierre Ohlmann.
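For intuition, here is a minimal Haskell sketch of the Strahler number of a rooted tree, using the standard textbook definition rather than code from the talk:

```haskell
-- Strahler number of a rooted tree (standard definition, not code from the talk):
-- a leaf has Strahler number 1; an inner node takes the maximum of its
-- children's numbers, plus one if that maximum is attained at least twice.
data Tree = Node [Tree]

strahler :: Tree -> Int
strahler (Node []) = 1
strahler (Node ts)
  | ties >= 2 = m + 1
  | otherwise = m
  where
    ss   = map strahler ts
    m    = maximum ss
    ties = length (filter (== m) ss)
```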
Facebook is building a new microkernel operating system as part of its AR/VR strategy. This talk describes a project to apply software verification to a central part of the OS: the interprocess communication design. We use Iris, an implementation of concurrent separation logic in the Coq proof assistant, to verify two queue data structures used for IPC. The verification effort uncovered algorithmic simplifications as well as bugs in client code. The simplifications involve the removal of redundant operations in a performance-critical part of the kernel. The proof work was completed in person-months, not years, by engineers with no prior familiarity with Iris. These findings are spurring further use of verification at Facebook.
I will present recent work in approximate minimization, done in collaboration with Borja Balle, Prakash Panangaden, Doina Precup and Guillaume Rabusseau. We address the approximate minimization problem for weighted finite automata (WFAs) with weights in $\mathbb{R}$, over a one-letter alphabet: to compute the best possible approximation of a WFA given a bound on the number of states. This work is grounded in Adamyan-Arov-Krein approximation theory, a remarkable collection of results on the approximation of Hankel operators. In addition to its intrinsic mathematical relevance, this theory has proven to be very effective for model reduction. We adapt these results to the framework of weighted automata over a one-letter alphabet. We provide theoretical guarantees and bounds on the quality of the approximation in the spectral and $\ell^2$ norm. We develop an algorithm, based on the properties of Hankel operators, that returns the optimal approximation in the spectral norm. Then, we study the approximate minimization problem of black boxes trained for language modelling of sequential data over a one-letter alphabet. We use the spectral norm to measure the distance between the black box and the WFA, and we provide theoretical guarantees to study the potentially infinite-rank Hankel matrix of the black box, without accessing the training data. Finally, we propose an algorithm returning an asymptotically-optimal approximation of the black box.
Constructive foundations have for decades been built upon realizability models for higher-order logic and type theory. However, traditional realizability models have a rather limited notion of computation, which only supports non-termination and avoids many other commonly used effects. Work to address these limitations has typically overlaid structure on top of existing models, such as by using powersets to represent non-determinism, but kept the realizers themselves deterministic. On the other hand, the algebraic structure of existing realizability models taking into account effects in a more direct fashion (as is done in Krivine realizability for instance) is hard to analyze.
In this work, we (Liron Cohen, Ross Tate & myself) alternatively address these limitations by making the structure underlying realizability models more flexible. To this end, we introduce evidenced frames: a general-purpose framework for building realizability models that support diverse effectful computations. We demonstrate that this flexibility permits models wherein the realizers themselves can be effectful, such as λ-terms that can manipulate state, reduce non-deterministically, or fail entirely. Beyond the broader notions of computation, we demonstrate that evidenced frames form a unifying framework for (realizability) models of higher-order dependent predicate logic. In particular, we prove that evidenced frames are complete with respect to these models, and that the existing completeness construction for implicative algebras—another foundational framework for realizability—factors through our simpler construction. As such, we conclude that evidenced frames offer an ideal domain for unifying and broadening realizability models.
Traditionally, program semantics is centered around program equivalences and refinements, based on which verification and transformation techniques can be justified. More and more often, however, programs are substituted with approximately equivalent programs, or verified against imprecise specifications. Program semantics has started dealing with program differences only in recent years, through the interpretation of programs in metric spaces. We give a brief survey of the state of the art in metric program semantics, and of the inadequacy of metrics as a way to deal with program differences. We then point to a few recent results on a new kind of differential program semantics in which differences are context-sensitive, and thus richer in structure than mere numbers.
Memory consistency models provide semantics to memory accesses of multiprocessor programs, Sequential Consistency (SC) being a well-known example. Weak Memory Models (WMMs) are relaxations of SC which provide weaker guarantees but allow for improved performance. Verification of safety properties has been shown to be a very hard problem from a complexity standpoint for many WMMs. Hence abstractions of such systems, and frameworks providing approximate results, are of interest. This talk is about the safety verification problem for Release-Acquire (RA) semantics, a popular fragment of C++11 that balances performance and programmability. The focus will be on the role of parameterization, which allows a program to be executed by arbitrarily many (an a priori unknown number of) threads. This can be seen as a (conservative) over-approximation of actual system behaviour. I will talk about some general concepts, including a simplified semantics which we found to be useful in analysing parameterized RA systems. I will then discuss how one can refine the parameterized system with non-parameterized programs. The expressivity of the allowed programs affects the hardness of the verification problem, and I will talk about concrete algorithmic and complexity results for different program classes. To conclude, I will give some extensions of these ideas to variant WMMs as well as avenues for future work.
This is based on joint work with S. Krishna (IIT Bombay) and Roland Meyer (TU Braunschweig). An arXiv version can be found at https://arxiv.org/abs/2101.12123.
Resolution is one of the most well-studied proof systems for proving classical propositional tautologies. It has naturally been the first-choice system to be built upon and generalised to obtain proof systems for quantified Boolean formulas (QBFs). In this talk, I will briefly survey some resolution-based QBF proof system approaches, and then discuss the proof system MergeRes. This is the first QBF system where strategies are built into the proof, as opposed to being algorithmically extracted from it. I will describe how this approach, and the way MergeRes represents the strategies, is double-edged: it gives MergeRes an exponential advantage over many resolution-based QBF proof systems when refuting certain formulas, but also leads to exponential lower bounds in MergeRes for some formulas that are easily refutable in other calculi.
This is joint work with Olaf Beyersdorff, Joshua Blinkhorn, Tomáš Peitl, and Gaurav Sood, and has appeared at STACS 2019/JAR 2020 and FSTTCS 2020.
The work to be presented is in collaboration with Pierre-Évariste Dagand and Lionel Rieg. It studies the computational content of the Fundamental Lemma of classical realizability for the simply-typed lambda-calculus with sums.
(If you are unfamiliar with classical realizability, it might help to think of it as a unary logical relation with an elegant symmetric setup -- it originates in studies of classical logic. The talk will gently introduce classical realizability.)
Two salient points of our presentation correspond to folklore ideas that we present as opinionated recommendations, to ourselves and others, of how to give a "modern" presentation of these questions (classical realizability, and in general studying the computational content of a proof device):
1. We present classical realizability as compilation to Curien-Herbelin-style symmetric abstract machine calculi, instead of the Krivine abstract machine, to represent realizers. Experts will recognize the polarized reduction rules of Munch-Maccagnoni.
2. To present computational content, we are *not* using extraction of a program from a mathematical-style proof in a proof assistant. That beautiful approach rarely works and it generates ugly code. Instead we *write* the logical argument directly as a program in a dependently-typed (meta)language, and we think about the code we are writing.
I will discuss how certain functions, e.g. string-to-string transducers, or tree-to-tree transducers, can be presented using primitives of functional programming languages, such as composition of functions, or monad operations on the underlying datatypes (lists or trees). If the primitives are chosen well, then we can get classes of functions known from automata theory, e.g. two-way automata as string-to-string transducers or monadic second-order transductions.
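As a small illustration (my own example, not one from the talk), here is a string-to-string function assembled from list primitives; letter duplication, reversal, and their composition are all classic examples of functions computable by (two-way) string transducers:

```haskell
-- Toy illustration: a string-to-string function built by composing
-- functional-programming primitives on lists.
doubleLetters :: String -> String
doubleLetters = concatMap (\c -> [c, c])

example :: String -> String
example = reverse . doubleLetters
-- example "abc" == "ccbbaa"
```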
Monitoring of a signal plays an essential role in the runtime verification of cyber-physical systems. Qualitative timed pattern matching is one of the mathematical formulations of monitoring, which gives a Boolean verdict for each sub-signal according to the satisfaction of the given specification. There are two orthogonal directions in which qualitative timed pattern matching can be extended. One direction, on the result, is quantitative: what users want is often not a qualitative verdict but a quantitative measurement of the satisfaction of the specification. The other direction, on the algorithm, is online checking: the monitor returns some verdicts before obtaining the entire signal, which makes it possible to monitor a running system. In this talk, we present a combination of these two extensions, taking an automata-based approach. To the best of our knowledge, this is the first quantitative and online timed pattern matching algorithm. More specifically, we employ what we call timed symbolic weighted automata to specify the quantitative specifications to be monitored, and we obtain an online algorithm using the shortest distance in a weighted variant of the zone graph and dynamic programming. Moreover, our problem setting is semiring-based and therefore general. Our experimental results confirm the scalability of our algorithm for specifications with a time bound.
Joint work with Chris Barrett and Victoria Barrett
Subatomic proof systems in deep inference exploit the idea that atoms can be seen as superpositions of values, for example, true and false. Adopting this point of view leads to some dramatic simplifications of proof systems and their proof theory, and this talk provides an example.
A single, simple, linear inference shape, together with a certain notion of saturation, generates a proof system for a language of standard boolean operators augmented by decision trees. We prove cut-elimination and several other proof-theoretic results for this proof system. Moreover, the proof system provides cut-free proofs of Statman tautologies that are polynomial in the size of the tautologies, while standard proof systems only yield exponential cut-free proofs. The ideas are natural, simple, and mostly rely on two notions that appear to have very general applicability: projecting proofs around atoms and eversion of formulae.
I could summarise the philosophy behind deep inference and subatomic proof theory as follows. We expand the class of proofs to make accessible better proofs for richer languages and to simplify the normalisation theory. One main reason for its success is deep inference's unique ability to deal with self-dual non-commutative operators, which is what atoms are when used as decision tree constructors.
The talk does not assume previous knowledge of deep inference and focuses on the general concepts rather than the technicalities. More information about deep inference is at http://alessio.guglielmi.name/res/cos/.
I will present recent and ongoing work (joint with G. Cruttwell, B. Gavranovic, N. Ghani, and P. Wilson) on providing semantic foundations for machine learning using the tools of logic, algebra and category theory. This framework provides a much-needed unifying perspective on different training techniques for machine learning models, based on the categorical notion of lens. This abstract viewpoint not only allows us to study learning with neural networks uniformly, but also to extend these techniques to other, previously unexplored models, such as Boolean circuits.
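As a rough sketch of the lens idea (my own simplification; the framework additionally tracks parameters and their updates), a "layer" pairs a forward map with a backward map propagating gradients, and layers compose by the chain rule:

```haskell
-- Rough sketch (my own simplification, not the authors' definitions):
-- a lens-shaped "layer" with a forward pass and a backward pass.
data Lens a da b db = Lens
  { fwd :: a -> b           -- forward computation
  , bwd :: a -> db -> da    -- backward pass: map a gradient on b back to one on a
  }

-- Layers compose like lenses: forward maps compose forwards,
-- backward maps compose in reverse order (the chain rule).
compose :: Lens b db c dc -> Lens a da b db -> Lens a da c dc
compose (Lens f2 b2) (Lens f1 b1) =
  Lens { fwd = f2 . f1
       , bwd = \a dc -> b1 a (b2 (f1 a) dc) }
```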
Initial algebras for endofunctors are a simple category-theoretic concept that has proved useful in computer science logic and semantics for explaining all sorts of inductive (and dually, coinductive) concepts. I will begin by recalling an old theorem of Adamek in classical set theory about constructing initial algebras via transfinite iteration. I will then describe a new version that works in constructive logic with a weak form of choice (the "weakly initial set of covers" axiom of van den Berg, Moerdijk and Palmgren). I will also try to explain why the constructive version is useful even from a classical point of view. This is joint work with Shaun Steenkamp.
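For readers less familiar with initial algebras, here is a standard Haskell sketch (not specific to the talk) of the least fixed point of a functor together with its fold, whose existence and uniqueness is exactly what initiality asserts:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Standard sketch (not specific to the talk): the initial algebra of a functor f
-- as a least fixed point, with its fold (catamorphism) given by initiality.
newtype Fix f = Fix { unFix :: f (Fix f) }

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

-- Example: natural numbers as the initial algebra of  N X = 1 + X.
data NatF x = Zero | Succ x deriving Functor
type Nat = Fix NatF

toInt :: Nat -> Int
toInt = cata step
  where
    step Zero     = 0
    step (Succ n) = n + 1
```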
Register automata are an extension of automata that deal with data, possibly from an infinite domain. Many algorithms from classical automata theory continue to work on deterministic register automata. However, nondeterministic register automata are strictly more expressive, and some properties become undecidable. Motivated by a recent result that the subclass of unambiguous register automata has decidable equivalence, we investigate weighted register automata, the common generalisation of register automata and weighted automata. These include unambiguous register automata. We show that equivalence is also decidable for these automata. This improves the previous results in three ways: (a) we allow for more data domains; (b) the complexity is exponentially better; and (c) we allow automata with guessing. Most importantly, we feel that our main contribution is the development of the mathematical theory of so-called orbit-finitely spanned vector spaces, on which our decision procedure is based. In the talk I will mostly talk about these structures. This is joint work with Mikołaj Bojańczyk and Bartek Klin.
The past decade has seen the advent of several large-scale digital libraries of formalised mathematics, written with the help of proof assistants. Most of these libraries are framed by hierarchies of formal algebraic structures. These structures play a role similar to that of Bourbaki's mathematical structures, codifying the mathematical language used throughout the library, its vocabulary and its notational apparatus. These hierarchies can also be seen as a formal-proof-engineering device, which organises inheritance and sharing between various interfaces.
This talk will discuss the recent techniques proposed to design these hierarchies in proof assistants based on dependent type theory, the corresponding achievements, but also their pitfalls and limitations.
Non-normal modal logics are so called because they do not validate some axioms or rules of the weakest normal modal logic K. They have been studied since the seminal works by C.I. Lewis, Lemmon, Kripke, Segerberg, Scott, and Montague, and are applied in many areas, such as deontic, epistemic, multi-agent, and probabilistic reasoning. In this talk I will present a bi-neighbourhood semantics for non-normal modal logics that generalizes their standard neighbourhood semantics. Then I will present two kinds of calculi, labelled and internal, that provide decision procedures for these logics and allow for the extraction of countermodels of non-valid formulas. Based on joint works with Nicola Olivetti, Björn Lellmann, Sara Negri, Elaine Pimentel, and Gian Luca Pozzato.
Negotiations are a model of distributed computation related to workflow Petri nets. Deterministic sound negotiations turn out to be amenable to efficient analysis algorithms, and several verification questions can be answered in polynomial time. In this talk we present an efficient Angluin-style learning algorithm for this model.
This is joint work with Igor Walukiewicz.
Bertrand et al. introduced a model of parameterised systems, where each agent is represented by a finite-state system, and studied the following control problem: for any number of agents, does there exist a controller able to bring all agents to a target state? They showed that the problem is decidable and EXPTIME-complete in the adversarial setting, and posed as an open problem the stochastic setting, where the agent is represented by a Markov decision process. In this paper, we show that the stochastic control problem is decidable. Our solution makes significant use of well-quasi-orders, of the max-flow min-cut theorem, and of the theory of regular cost functions. We introduce an intermediate problem of independent interest called the sequential flow problem, and study the complexity of solving it. Joint work with Thomas Colcombet and Nathanaël Fijalkow.
While model checking has often been considered as a practical alternative to building formal proofs, we argue here that the theory of sequent calculus proofs can be used to provide an appealing foundation for model checking. Since the emphasis of model checking is on establishing the truth of a property in a model, we rely on the proof-theoretic notion of additive inference rules, since such rules allow provability to directly describe truth conditions. Unfortunately, the additive treatment of quantifiers requires inference rules to have infinite sets of premises and the additive treatment of model descriptions provides no natural notion of state exploration. By employing a focused proof system, it is possible to construct large-scale, synthetic rules that also qualify as additive but contain elements of multiplicative inference. These additive synthetic rules -- essentially rules built from the description of a model -- allow direct treatment of state exploration. This proof-theoretic framework provides a natural treatment of reachability and non-reachability problems, as well as tabled-deduction, bisimulation, and winning strategies.
This talk is based on a paper co-authored with Quentin Heath appearing in the Journal of Automated Reasoning, 2019.
The slides are available here. The article on which the talk is based is here.
(based on joint works with Alen Arslanagić, Dimitris Kouzapas, Erik Voogd, and Nobuko Yoshida)
Session types are a type-based approach for ensuring correct message-passing programs. A session type specifies the protocol that each channel endpoint in a program should respect.
Session types were originally developed for processes in the pi-calculus, the paradigmatic calculus of interaction and concurrency. While there is substantial value in studying session types for the pi-calculus, it is also insightful to investigate them for programming calculi with a more direct link to (functional) programming languages. To this end, we have studied a core calculus for higher-order concurrency and sessions, called HO. HO is a compact blend of sessions and higher-order concurrency, in which the values exchanged by processes can only be abstractions (functions from names to processes).
In this talk, I will overview two results on the expressivity of HO processes with session types. They show how session types not only delineate and enable encodings, but also inform the techniques required to reason about their correctness properties. First, HO and the session pi-calculus are equally expressive: one language can be encoded into the other, up to tight behavioral equivalences. Second, there is a class of so-called minimal session types, which only covers protocols without sequential composition but is still expressive enough to encode all typable HO processes.
We present quantitative separation logic (QSL). In contrast to classical separation logic, QSL employs quantities which evaluate to real numbers instead of predicates which evaluate to Boolean values. The connectives of classical separation logic, separating conjunction and separating implication, are lifted from predicates to quantities. This extension is conservative: Both connectives are backward compatible with their classical analogs and obey the same laws, e.g. modus ponens, adjointness, etc. Furthermore, we present a weakest precondition calculus for quantitative reasoning about probabilistic pointer programs in QSL. This calculus is a conservative extension of both Ishtiaq's, O'Hearn's and Reynolds' separation logic for heap-manipulating programs and Kozen's / McIver and Morgan's weakest preexpectations for probabilistic programs. Our calculus preserves O'Hearn's frame rule, which enables local reasoning - a key principle of separation logic. Finally, we briefly touch upon open questions regarding lower bounds on weakest preexpectations and intensional completeness of our calculus.
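To give a flavour of the lifting, one natural definition of the quantitative separating conjunction (a sketch; the precise formulation used in the talk may differ in details) maximises over all ways of splitting the heap:
\[ (X \star Y)(s,h) \;=\; \max \{\, X(s,h_1) \cdot Y(s,h_2) \;\mid\; h = h_1 \uplus h_2 \,\}, \]
which collapses to the classical separating conjunction when $X$ and $Y$ only take the values 0 and 1.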
Focusing is a general technique that was designed to improve proof search, but has since become increasingly relevant in structural proof theory. The essential idea is to identify and merge the non-deterministic choices in a proof. A focused proof is then given by the alternation of phases where invertible rules are applied eagerly (bottom-up) and phases where the other rules are confined and controlled. The focusing theorem, which asserts the completeness of focused proofs, delivers strong representational benefits. For instance, by ignoring the internal structure of the phases one obtains a well-behaved notion of ‘synthetic rule’ (e.g. they commute with cut-reduction steps). In this talk, we will present a careful study of a family of synthetic rules, the bipoles, giving a fresh view to an old problem: how to incorporate inference rules corresponding to axioms into proof systems for classical and intuitionistic logics. As different synthetic inference rules can be produced for the same axiom, we in fact unify and generalise previous approaches for transforming axioms into rules. This is joint work with Dale Miller, Elaine Pimentel, and Marco Volpe.
One way of thinking of a “pure name” (or “guid”) is as a random number, and so names are related to probability. It’s important to understand how some names can be private or secret, and this turns out to correspond to probabilistic concepts. I will talk about semantic models for higher-order programming with probabilities and for programming with names. I’ll show a model of probability that is also a good model of programming with names: it’s fully abstract at first order.
This talk will take us past the nu calculus, quasi Borel spaces, some bits of descriptive set theory and some non-parametric statistics. But it will be an introductory talk and I won’t assume familiarity with any of this. The talk will be partly based on our paper at POPL 2021, https://doi.org/10.1145/3434292, which is joint work with Marcin Sabok, Dario Stein and Michael Wolman.
While causal consistency is one of the most fundamental consistency models weaker than sequential consistency, the decidability of safety verification for (finite-state) concurrent programs running under causally consistent shared memories is still unclear. We establish the decidability of this problem for two standard and well-studied variants of causal consistency. To do so, for each of the variants, we develop an equivalent "lossy" operational semantics, and show that it constitutes a well-structured transition system, which enables decidable verification. The two novel semantics are based on similar key observations, which, we believe, may also be of independent use in the investigation of weakly consistent shared memory models and their verification. Interestingly, our results are in contrast to the undecidability of this problem under the Release/Acquire fragment of the C/C++11 memory model, which forms another variant of a causally consistent memory that, in terms of allowed outcomes, lies strictly between the two models we study. Nevertheless, all these variants coincide for write/write-race-free programs, which implies the decidability of verification for such programs under Release/Acquire.
(Joint work with Udi Boker, partly presented at PLDI'20)
Church's synthesis problem asks whether a synchronous specification from infinite words to infinite words can be realized by a synchronous function. We relax this requirement and ask whether a synchronous specification from infinite words to infinite words can be realized by a computable function; in other words, we simply ask whether a specification is implementable.
This question was previously studied by Holtmann et al., who showed that the problem is decidable (it was later shown to be EXPTIME-complete) for synchronous specifications with total domain. We show that the question is EXPTIME-complete for synchronous specifications with partial domain. A specification with partial domain is implementable if, for every input sequence in its domain, an output sequence can be computed that is correct w.r.t. the specification. A fundamental difference between the total and the partial domain setting is that, if the specification is implementable, in the former setting sequential transducers suffice to implement it, while in the latter setting a more powerful computation model is needed. We show that if a synchronous specification from infinite words to infinite words is implementable, then a deterministic two-way transducer can compute an implementation.
This is joint work with Emmanuel Filiot.
In both computer science and economics, efficiency is a cherished property. In computer science, the field of algorithms is almost solely focused on their efficiency. In economics, the main advantage of the free market is that it promises "economic efficiency." A major lesson from COVID-19 is that both fields have over-emphasized efficiency and under-emphasized resilience. I argue that resilience is a more important property than efficiency and discuss how the two fields can broaden their focus to make resilience a primary consideration. I will include a technical example, showing how we can shift the focus in strategic reasoning from efficiency to resilience.
Iris is a framework for higher-order concurrent separation logic, implemented in the Coq proof assistant, which we have been developing since 2014. Originally designed for pedagogical purposes, Iris has grown into an ongoing, multi-institution project, with active collaborators at Aarhus University, BedRock Systems, Boston College, CNRS/LRI, Groningen University, INRIA, ITU Copenhagen, KU Leuven, Microsoft Research, MIT, MPI-SWS, NYU, Radboud University Nijmegen, Saarland University, and Vrije Universiteit Brussel, and over 35 published papers studying or deploying Iris for verification of complex programs and programming language meta-theory in Rust, Go, OCaml, Scala, and more (https://iris-project.org).
In this talk, we will present two brand new -- and very different -- developments that have the potential to extend the reach of Iris even further. The first is a new *ownership-based refinement type system* for C, which supports *automated* verification of C programs while at the same time being *foundational* (producing Iris proofs in Coq). The second is a complete "remodeling" of Iris, replacing its original step-indexed model with a *transfinite* step-indexed model in order to make Iris suitable for verification of liveness properties.
For this talk, we will not assume any prior knowledge of Iris. Rather, we will briefly review the distinguishing features of Iris, and then explain the key insights behind the aforementioned new developments -- and the problems they are solving -- at a high level of abstraction.
The first new development is joint work with Michael Sammler, Rodolphe Lepigre, Robbert Krebbers, Kayvan Memarian, and Deepak Garg. The second is joint work with Simon Spies, Lennard Gäher, Daniel Gratzer, Joseph Tassarotti, Robbert Krebbers, and Lars Birkedal.
During my PhD research I have studied distributive laws for monads, and developed various no-go theorems. These theorems prove that monads with certain algebraic properties cannot be composed with each other via a distributive law. In this talk I will focus on examples, and share the intuition that I have developed from my results. All the examples in this talk will be based on the Boom hierarchy, a hierarchy of data structures which lends itself perfectly to the study of monad compositions. Based on which monads in this hierarchy do and do not compose with each other via a distributive law, I will make predictions about the behaviour of monad compositions in general.
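To fix notation (a standard sketch of the general notion, not code from the talk): a distributive law of a monad t over a monad s is a natural transformation t (s a) -> s (t a) subject to four axioms, and when it exists it equips the composite s (t a) with a monad structure:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad (join)

-- Standard sketch (not code from the talk): given a distributive law
--   dist :: t (s a) -> s (t a)
-- (subject to four compatibility axioms), the composite  s (t a)
-- becomes a monad, with multiplication built as follows.
joinComposite :: (Monad s, Monad t)
              => (forall b. t (s b) -> s (t b))   -- the distributive law
              -> s (t (s (t a))) -> s (t a)
joinComposite dist = fmap join . join . fmap dist
```

The no-go theorems say that for certain pairs of monads no such natural transformation can satisfy the required axioms, so no composite monad arises this way.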
Mainstream formal analyses of software systems focus on extensional properties of program behaviour. Such properties ultimately deal with the input/output semantics of programs, this way giving information on what programs compute. There are several situations, however, where intensional properties of programs turn out to be crucial to ensure the correct behaviour of software systems. In these scenarios, one needs not only to know what output a program computes, but also how the program computes it. Typical examples of intensional aspects of computations include efficiency, resource usage, robustness with respect to errors, and privacy and security.
In the last decades, these properties have been studied in isolation by means of type systems, denotational semantics, and operational techniques. More recently, uniform accounts of a large family of intensional aspects of computations (oftentimes referred to as coeffectful computations) have been proposed in terms of graded modal types and graded (co)monads.
In this talk, I will present a general framework for studying operationally-based notions of program equivalence for languages with graded modal types. These notions of equivalence identify programs with the same intensional behaviour and thus allow for the intensional analysis of programs. The framework will be instantiated to study the intensional refinement of Abramsky's applicative bisimilarity (a coinductively-defined notion of equivalence for higher-order sequential languages), giving an abstract compositionality theorem generalising well-known results on intensional program behaviour such as Abadi et al.'s Non-interference, Pfenning's Proof Irrelevance, and Reed and Pierce's Metric Preservation.
Finally, a formal connection between the theory of intensional program equivalence and the theory of abstract program distance built on top of Lawvere's analysis of generalised metric spaces will be established, this way allowing for the transfer of results and techniques developed in the context of program metric to the realm of intensional analyses of languages with modal types.
Holonomic techniques have deep roots going back to Wallis, Euler, and Gauss, and have evolved in modern times as an important subfield of computer algebra, thanks in large part to the work of Zeilberger and others over the past three decades. In this talk, I will give an overview of the area, and in particular will present a select survey of known and original results on decision problems for holonomic sequences and functions. I will also discuss some surprising connections to the theory of periods and exponential periods, which are classical objects of study in algebraic geometry and number theory; in particular, I will relate the decidability of certain decision problems for holonomic sequences to deep conjectures about periods and exponential periods, notably those due to Kontsevich and Zagier.
Parts of this talk will be based on the paper "On Positivity and Minimality for Second-Order Holonomic Sequences", https://arxiv.org/abs/2007.12282.
It is well known that the classic Łoś-Tarski preservation theorem fails in the finite: there are first-order definable classes of finite structures closed under extensions which are not definable (in the finite) in the existential fragment of first-order logic. We strengthen this by constructing, for every n, first-order definable classes of finite structures closed under extensions which are not definable with n quantifier alternations. The classes we construct are definable in the extension of Datalog with negation and indeed in the existential fragment of transitive-closure logic. This answers negatively an open question posed by Rosen and Weinstein.
The talk surveys a series of works that lift the rich semantics and structure of graphs, and the experience of the formal-verification community in reasoning about them, to classical graph-theoretical problems.
Partially observable Markov Decision Processes (POMDPs) are a prime model in sequential decision making. They are heavily studied in operations research, artificial intelligence and verification, to name a few areas. For POMDPs, a good policy reaches the target with probability at least some threshold, and makes its decisions based solely on the previously made observations. Deciding whether a good policy exists is undecidable. One of the methods to solve instances of this reachability problem is to restrict the memory that the strategy may use. This approach is commonly taken, e.g., in reinforcement learning for POMDPs. In the first part of the talk, we consider memoryless strategies in POMDPs, and show that this problem is as hard as the feasibility problem in parametric Markov chain (pMC) analysis.
In the second part of the talk, we consider this feasibility problem for pMCs. Roughly, a pMC is a Markov chain with symbolic transitions. The feasibility problem asks: Do values for the symbols exist such that in the induced parameter-free Markov chain, a target state is reached with probability at least a half? We discuss the complexity landscape for variants of this decision problem. In particular, we establish that feasibility in pMCs is complete for the complexity class "existential theory of the reals" (ETR). Another example of an ETR-complete problem is deciding whether a multivariate polynomial has a real root. Together with the results of the first half of the talk, this establishes that deciding whether there exists a good memoryless policy in a POMDP is ETR-complete.
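As a toy illustration of the feasibility question (my own example, not from the talk), consider a pMC with a single parameter p: the reachability value is a polynomial in p, and feasibility asks whether some value of p pushes it to at least 1/2.

```haskell
-- Toy parametric Markov chain (my own example, not from the talk):
-- from the initial state, reach the target directly with probability p,
-- otherwise move to a second state, from which the target is reached with
-- fixed probability 1/3.  The reachability value is p + (1 - p) * (1/3).
reach :: Double -> Double
reach p = p + (1 - p) * (1 / 3)

-- Feasibility: is there a parameter value in [0,1] with value >= 1/2?
-- (A naive grid check suffices here; in general the problem is ETR-complete.)
feasible :: Bool
feasible = any (\p -> reach p >= 0.5) [0, 0.01 .. 1]   -- True, e.g. p = 0.25
```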
This is joint work with Alex Keizer and Jorge A. Pérez.
Compositional methods are central to the development and verification of software systems. They allow large systems to be broken down into smaller components, while enabling reasoning about the behaviour of the composed system. For concurrent and communicating systems, compositional techniques based on *behavioural type systems* have received much attention. By abstracting communication protocols as types, these type systems can statically check that programs interact with channels according to a certain protocol, i.e., that the intended messages are exchanged in a certain order. For this talk, we will put on our coalgebraic spectacles to investigate *session types*, a widely studied class of behavioural type systems. We will seek a syntax-free description of session-based concurrency as states of coalgebras. The result will be a description of type equivalence, duality, and subtyping relations in terms of canonical coinductive presentations. In turn, this coinductive presentation makes it possible to elegantly derive a decidable type system with subtyping for π-calculus processes, in which the states of a coalgebra will serve as channel protocols. Going full circle, we will also exhibit a coalgebra structure on an existing session type system, and show that the relations and type system resulting from our coalgebraic perspective agree with the existing ones.
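As a rough sketch of the coalgebraic view (my own simplified illustration, not the paper's definitions), a session type can be seen as a state of a coalgebra whose one-step observations are "end the protocol" or "send/receive a payload and continue"; duality flips polarities, and type equivalence becomes bisimilarity:

```haskell
-- Rough sketch (my own simplification, not the paper's definitions):
-- session types as states of a coalgebra.
data Polarity = Send | Recv deriving (Eq, Show)

-- One observable step: either the protocol ends, or it sends/receives a
-- payload of type p and continues in a next state.
data Step p s = End | Step Polarity p s

-- A session coalgebra assigns to every state its observable step.
type SessionCoalgebra p s = s -> Step p s

-- Duality flips every polarity; type equivalence and duality are then
-- (bi)simulation-style relations between states of such coalgebras.
dual :: SessionCoalgebra p s -> SessionCoalgebra p s
dual c s = case c s of
  End             -> End
  Step Send p s'  -> Step Recv p s'
  Step Recv p s'  -> Step Send p s'
```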
Joint work with Danel Ahman, University of Ljubljana.
In modern languages, computational effects are often structured using monads or algebraic effects and handlers. These mechanisms excel at implementation of computational effects within the language itself. For instance, the familiar implementation of mutable state in terms of state-passing functions requires no native state, and can be implemented either as a monad or using handlers. One is naturally drawn to using these techniques also for dealing with actual effects, such as manipulation of native memory and access to hardware. These are represented inside the language as algebraic operations or a monad, but treated specially by the language's top-level runtime, which invokes corresponding operating system functionality. While this approach works in practice, one wishes that the ingenuity of the language implementors were better supported by a more flexible methodology with a sound theoretical footing.
We address the issue by showing how to design a programming language based on runners of algebraic effects. We review runners, recast them as a programming construct, and present a calculus that captures the core ideas of programming with them. Through examples of runners we show how they capture both the interaction between the program and the external world, and encapsulation of programs in virtual environments that tightly control access to external resources and provide strong guarantees of proper resource finalization.
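As a small sketch of the idea (my own simplified illustration for a state-like signature, not the paper's calculus), a runner interprets each operation of an effect signature as a co-operation acting on the state of an external resource:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Rough sketch (my own simplification, not the paper's calculus):
-- a runner interprets each algebraic operation as a co-operation on a
-- resource of type c carried by the runner.
data StateOp s k = Get (s -> k) | Put s k      -- signature of a state effect

newtype Runner s c = Runner
  { runOp :: forall k. StateOp s k -> c -> (k, c) }

-- The obvious runner whose external resource *is* the state being manipulated.
stateRunner :: Runner s s
stateRunner = Runner $ \op c -> case op of
  Get k    -> (k c, c)
  Put s' k -> (k, s')
```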
We accompanied our work with a prototype programming language Coop (https://github.com/andrejbauer/coop) and a Haskell library for runners (https://github.com/danelahman/haskell-coop).
The entire community is invited to participate in a debate on the future of the conference system in theoretical computer science. This will provide a rare community-wide opportunity for us to discuss the strengths and weaknesses of our current system, and consider if we can do better. Questions will be asked by members of the audience, and answered by our panel members.
The scope of the debate is all aspects of our publishing and community traditions, characterised by prestige earned mostly through publication in competitive conferences, and frequent local and international travel. Possible topics for discussion include the need to publish in conferences for career progression, which usually involves burning carbon; wasted reviewing effort when good papers are rejected from highly competitive conferences; the extent of our responsibility as a community to respond to climate change; alternative publishing models, like the journal-focussed system used in mathematics; high costs of conference travel and registration; virtual conference advantages, disadvantages and best practice; improving equality, diversity and access; consequences and response to COVID-19; and the role of professional bodies. These topics have many tight relationships, and need to be discussed together to gain a full understanding of the issues involved.
Poll responses. Here are the results of the three anonymous polls that were held during the debate.
The following links provide further reading on topics related to the debate:
If you have any feedback about OWLS, feel free to get in touch with an organizer.