Technical Presentations


Matthew Flatt: Compiler and Runtime Support for Continuation Marks

Continuation marks enable dynamic binding and context inspection in a language with proper handling of tail calls and first-class, multi-prompt, delimited continuations. By exposing continuation marks to users of a programming language, more kinds of language extensions can be implemented as libraries without further changes to the compiler. At the same time, the compiler and runtime system must provide an efficient implementation of continuation marks to ensure that library-implemented language extensions are as effective as changing the compiler. Our implementation of continuation marks for Chez Scheme (in support of Racket) makes dynamic binding and lookup constant-time and fast, preserves the performance of Chez Scheme's first-class continuations, and imposes negligible overhead on program fragments that do not use first-class continuations or marks.
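
As a rough illustration of the mechanism (a toy Haskell model with this summary's own names, not Racket's with-continuation-mark API nor the Chez Scheme implementation described above), a continuation can be viewed as a list of frames, each carrying a table of marks: setting a mark in tail position overwrites the current frame's entry, so tail calls stay space-safe, while context inspection walks the frames.

import qualified Data.Map as Map
import Data.Map (Map)

type Key   = String
type Frame = Map Key Int     -- marks attached to one continuation frame
type Cont  = [Frame]         -- innermost frame first

-- Setting a mark replaces any existing mark on the *current* frame,
-- so a loop that re-marks itself in tail position uses constant space.
withMark :: Key -> Int -> Cont -> Cont
withMark k v (f:fs) = Map.insert k v f : fs
withMark k v []     = [Map.singleton k v]

-- Pushing a frame models a non-tail call: marks set by the callee
-- no longer overwrite the caller's marks.
pushFrame :: Cont -> Cont
pushFrame fs = Map.empty : fs

-- Context inspection: collect the values of key k, innermost first.
marksOf :: Key -> Cont -> [Int]
marksOf k = concatMap (maybe [] pure . Map.lookup k)

main :: IO ()
main = do
  let cont = withMark "depth" 1 (pushFrame (withMark "depth" 0 [Map.empty]))
  print (marksOf "depth" cont)   -- [1,0]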

Download slides


John Hughes: How to Specify It!

I presented a tutorial I have developed on formulating useful properties for testing pure functions, using a key-value map implemented as a binary search tree as an example. We discussed the material, concluding among other things that the idea of using an inefficient implementation as an oracle for an optimized implementation deserves a separate mention (I treated it as a special case of "model-based" properties), and that testing the completeness of generators (not discussed in these slides) is more daunting and difficult than the other material. A paper will appear in the proceedings of Trends in Functional Programming 2019.
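
For a concrete flavor of the kinds of properties the tutorial develops, here is a minimal QuickCheck sketch of a key-value map as a binary search tree, with a validity property and a model-based property that uses a plain association list as the inefficient oracle (the code and names are this summary's own simplification, not the tutorial's exact development):

import Test.QuickCheck
import Data.List (sortOn)

data BST = Leaf | Node BST Int String BST deriving Show

insert :: Int -> String -> BST -> BST
insert k v Leaf = Node Leaf k v Leaf
insert k v (Node l k' v' r)
  | k < k'    = Node (insert k v l) k' v' r
  | k > k'    = Node l k' v' (insert k v r)
  | otherwise = Node l k v r

toList :: BST -> [(Int, String)]
toList Leaf = []
toList (Node l k v r) = toList l ++ [(k, v)] ++ toList r

fromList :: [(Int, String)] -> BST
fromList = foldr (uncurry insert) Leaf

-- The BST invariant: in-order keys are strictly increasing.
valid :: BST -> Bool
valid t = let ks = map fst (toList t) in and (zipWith (<) ks (drop 1 ks))

-- Validity property: insertion preserves the invariant.
prop_insertValid :: Int -> String -> [(Int, String)] -> Bool
prop_insertValid k v kvs = valid (insert k v (fromList kvs))

-- Model-based property: the tree agrees with an association-list model.
model :: BST -> [(Int, String)]
model = toList

prop_insertModel :: Int -> String -> [(Int, String)] -> Property
prop_insertModel k v kvs =
  model (insert k v t) === sortOn fst ((k, v) : filter ((/= k) . fst) (model t))
  where t = fromList kvs

main :: IO ()
main = do
  quickCheck prop_insertValid
  quickCheck prop_insertModel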

Download slides


David MacQueen: Some early ideas about types

Types first arose implicitly in the work of Frege in the form of the "range of significance" of a propositional function. Then Russell's Paradox (1901) presented a problem that Russell (1903, 1908) tried to address with a Ramified Theory of Types in Principia Mathematica. This talk addresses these early ideas about types and follows the evolution through the Simple Theory of Types (Ramsey, and Hilbert and Ackermann) in the 1920s, Curry's theory of "functionality" for combinatory logic in the 1930s, and finally to Church's simply typed lambda calculus of 1940.

Download slides
Download paper


Jennifer Paykin: Programming Abstractions for Quantum Computing

Quantum computing is a new and exciting programming domain that defies our intuitions at every turn. However, the underlying mathematical model, rooted in linear algebra, is fairly simple to understand. This mathematical semantics shows which programming abstractions are allowed and which must be rejected, and hints at new programming abstractions that could replace them. This talk gives a short overview of the quantum computing programming model and discusses the progression of a few abstractions, including functions, dependent types, and control flow.
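
For reference, the single-qubit picture that the "fairly simple" linear-algebra model refers to looks like this (standard textbook notation, not the talk's own):

\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1
\]
\[
  H = \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
  \qquad
  H \lvert 0 \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 0 \rangle + \lvert 1 \rangle\bigr)
\]

Measuring \(\lvert \psi \rangle\) in the standard basis yields 0 with probability \(\lvert\alpha\rvert^2\) and 1 with probability \(\lvert\beta\rvert^2\); it is the linearity of the allowed operations that rules out abstractions such as copying an arbitrary state.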

Download slides


Nadia Polikarpova: Liquid Resource Types for Verification and Synthesis

This talk presents a type system that combines Liquid Types with potential annotations from Automatic Amortized Resource Analysis to enable fine-grained reasoning about resource consumption. Using Liquid Resource Types we can verify, for example, that insertion sort only makes as many steps as there are unordered pairs in its input list. We can also use these types in conjunction with type-driven program synthesis to synthesize provably efficient programs.
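
As a rough, schematic illustration (this notation is illustrative only, not the system's actual surface syntax), a liquid resource type pairs a refinement with a potential annotation that pays for evaluation steps; the insertion-sort bound mentioned above amounts to giving the input list a potential equal to its number of unordered pairs:

\[
  \mathsf{insertSort} :: xs{:}\,\mathsf{List}\ \mathsf{Int}\ \langle \phi(xs) \rangle \to \{\, v : \mathsf{List}\ \mathsf{Int} \mid \mathit{sorted}\ v \,\},
  \qquad
  \phi(xs) = \bigl|\{\, (i,j) \mid i < j \wedge xs_i > xs_j \,\}\bigr|
\]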

Download slides


Nadia Polikarpova: Component-Based Synthesis for Haskell

This talk is a demo of Hoogle+, a component-based synthesizer for Haskell.
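
As a flavor of component-based synthesis (an illustrative example, not an actual Hoogle+ session), the user supplies a goal type and the tool assembles a candidate term from known library components:

import Data.Maybe (catMaybes)

-- Goal type: [Maybe a] -> a
-- The kind of candidate a component-based synthesizer might return,
-- assembled from the library components `head` and `catMaybes`.
firstJust :: [Maybe a] -> a
firstJust = head . catMaybes

main :: IO ()
main = print (firstJust [Nothing, Just (3 :: Int), Just 4])  -- 3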

Download slides


John Reppy: Shapes and Flattening

NESL is a first-order functional language with an apply-to-each construct and other parallel primitives that enable the expression of irregular nested data-parallel (NDP) algorithms. To compile NESL, Blelloch and others developed a global flattening transformation that maps irregular NDP code into regular flat data parallel (FDP) code suitable for executing on SIMD or SIMT architectures, such as GPUs. While flattening solves the problem of mapping irregular parallelism into a regular model, it requires significant additional optimizations to produce performant code. Nessie is a compiler for NESL that generates CUDA code for running on Nvidia GPUs. The Nessie compiler relies on a fairly complicated shape analysis that is performed on the FDP code produced by the flattening transformation. Shape analysis plays a key role in the compiler as it is the enabler of fusion optimizations, smart kernel scheduling, and other optimizations. In this paper, we present a new approach to the shape analysis problem for NESL that is both simpler to implement and provides better quality shape information. The key idea is to analyze the NDP representation of the program and then preserve shape information through the flattening transformation.
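
The flat representation that flattening targets can be sketched in a few lines of Haskell (an illustration of the general idea only, not Nessie's intermediate representation): a nested sequence becomes a flat data vector plus a segment descriptor recording the shape, and an apply-to-each becomes a map over the flat data that leaves the shape untouched, which is exactly the kind of fact a shape analysis wants to preserve.

-- Segmented representation of a nested sequence.
data Seg a = Seg { segLens :: [Int]   -- length of each subsequence (the shape)
                 , segData :: [a]     -- all elements, concatenated
                 } deriving Show

flatten :: [[a]] -> Seg a
flatten xss = Seg (map length xss) (concat xss)

unflatten :: Seg a -> [[a]]
unflatten (Seg [] _)      = []
unflatten (Seg (n:ns) xs) = take n xs : unflatten (Seg ns (drop n xs))

-- An apply-to-each over the nested sequence maps over the flat data only;
-- the segment descriptor (shape) is unchanged.
segMap :: (a -> b) -> Seg a -> Seg b
segMap f (Seg ns xs) = Seg ns (map f xs)

main :: IO ()
main = do
  let s = flatten [[1,2],[3],[4,5,6 :: Int]]
  print s                             -- Seg {segLens = [2,1,3], segData = [1,2,3,4,5,6]}
  print (unflatten (segMap (*10) s))  -- [[10,20],[30],[40,50,60]]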

Download slides
Download paper
Link to website


John Reppy: Comparing implementations of stacks and continuations

The efficient implementation of function calls and non-local control transfers is a critical part of modern language implementations and is important in the implementation of everything from recursion, higher-order functions, concurrency and coroutines, to task-based parallelism. In a compiler, these features can be supported by a variety of mechanisms, including call stacks, segmented stacks, and heap-allocated continuation closures. An implementor of a high-level language with advanced control features might ask the question "what is the best choice for my implementation?" Unfortunately, the current literature does not provide much guidance, since previous studies suffer from various flaws in methodology and are outdated for modern hardware. In the absence of recent, well-normalized measurements and a holistic overview of their implementation specifics, the path of least resistance when choosing a strategy is to trust folklore, but the folklore is also suspect. This paper attempts to remedy this situation by providing an "apples-to-apples" comparison of five different approaches to implementing call stacks and continuations. This comparison uses the same source language, compiler pipeline, LLVM back end, and runtime system, with the only differences being those required by the differences in implementation strategy. We compare the implementation challenges of the different approaches, their sequential performance, and their suitability to support advanced control mechanisms, including supporting heavily threaded code. In addition to the comparison of implementation strategies, the paper's contributions also include a number of useful implementation techniques that we discovered along the way.
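
To make the two extremes of this design space concrete (a toy Haskell contrast, not the paper's benchmark code), compare a recursive function whose pending work lives on the call stack with a CPS version whose pending work is reified as heap-allocated continuation values:

-- A reified continuation: "multiply by n, then continue with k".
data Kont = Done | MulBy Int Kont

-- Stack-based: each non-tail call leaves a frame on the runtime stack.
factStack :: Int -> Int
factStack 0 = 1
factStack n = n * factStack (n - 1)

-- Continuation-based: the same pending multiplications, but as heap data.
factCPS :: Int -> Kont -> Int
factCPS 0 k = apply k 1
factCPS n k = factCPS (n - 1) (MulBy n k)

apply :: Kont -> Int -> Int
apply Done        acc = acc
apply (MulBy n k) acc = apply k (n * acc)

main :: IO ()
main = print (factStack 10, factCPS 10 Done)  -- (3628800,3628800)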

Download slides
Download paper


Peter Thiemann: Duality of Session Types: The Final Cut

Duality is a central concept in the theory of session types. Since a flaw was found in the original definition, several other definitions have been published. As their connection is not obvious, we compare the competing definitions, discuss tradeoffs, and prove some equivalences. Some of the results are mechanized in Agda. In collaboration with Simon Gay (U Glasgow) and Vasco Vasconcelos (U Lisbon). To appear in PLACES 2020.
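
For reference, the textbook duality that the competing definitions refine can be written in a few lines (an illustrative Haskell sketch over a finite fragment; the difficulties the talk compares arise once recursive session types are added, which this naive version omits):

data Payload = PInt | PBool deriving (Eq, Show)

data Session
  = Send Payload Session   -- !T.S : send a T, continue as S
  | Recv Payload Session   -- ?T.S : receive a T, continue as S
  | End                    -- terminated session
  deriving (Eq, Show)

-- Naive duality: swap send and receive, pointwise.
dual :: Session -> Session
dual (Send t s) = Recv t (dual s)
dual (Recv t s) = Send t (dual s)
dual End        = End

-- On this fragment, dual is an involution: dual (dual s) == s.
main :: IO ()
main = do
  let s = Send PInt (Recv PBool End)
  print (dual s)              -- Recv PInt (Send PBool End)
  print (dual (dual s) == s)  -- True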

Download slides


David Walker: NV: A platform for modelling and verifying routing protocols

Network reliability is a key problem in the modern world: without it we cannot access vital online services. In this talk, we describe the design of NV, a functional language for modeling network routing protocols such as OSPF, BGP and IS-IS. NV is relatively simple, containing a set of standard data structures such as booleans, fixed-width integers of various sizes, records, options, dictionaries and non-recursive functions. NV also includes symbolic variables, which can be used to express unknowns, such as failures or announcements from neighboring networks, as well as assertions, which may be used to express desirable properties of networks. With this well-understood set of primitives, it is possible to encode the semantics of a substantial fragment of Cisco's and Juniper's router configurations. NV has two main back ends that may be used for verification of network properties. The first is an encoding of NV programs as SMT constraints. The second is a simulation engine that computes a solution to the routing system given concrete values in place of symbolic ones. The latter engine uses multi-terminal BDDs to implement dictionaries efficiently.
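
The kind of computation the simulation back end performs can be sketched abstractly as a fixpoint of per-edge transfer and per-node merge functions (a toy Haskell model with made-up names and a three-node topology, not NV code):

import qualified Data.Map as Map
import Data.Map (Map)

type Node = Int
type Attr = Maybe Int                    -- Nothing = no route; Just d = hop count

edges :: [(Node, Node)]                  -- a tiny example topology
edges = [(0,1),(1,0),(1,2),(2,1)]

trans :: (Node, Node) -> Attr -> Attr    -- crossing an edge adds one hop
trans _ = fmap (+ 1)

merge :: Attr -> Attr -> Attr            -- prefer the shorter route
merge a Nothing = a
merge Nothing b = b
merge (Just x) (Just y) = Just (min x y)

init0 :: Node -> Attr                    -- node 0 originates the route
init0 0 = Just 0
init0 _ = Nothing

-- One round: every node merges its own initial attribute with the
-- transferred attributes of its in-neighbors.
step :: Map Node Attr -> Map Node Attr
step sol = Map.fromList
  [ (v, foldr merge (init0 v)
          [ trans (u, v) (sol Map.! u) | (u, w) <- edges, w == v ])
  | v <- Map.keys sol ]

-- Naive fixpoint iteration: a stable routing solution.
solve :: Map Node Attr -> Map Node Attr
solve sol = let sol' = step sol in if sol' == sol then sol else solve sol'

main :: IO ()
main = print (solve (Map.fromList [(v, init0 v) | v <- [0,1,2]]))
  -- fromList [(0,Just 0),(1,Just 1),(2,Just 2)]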

Download slides
Link to website


Stephanie Weirich: Adventures in Quantitative Type Theory

A tutorial on my recent thinking about Quantitative Type Theory.

Download slides