Contents of this page: Administrative matters | Technical presentations (Last updated 17th January 2007.)
(The same rule does not apply to invited observers: if an invited observer cannot attend, and does not actively express an interest in being reinvited, they may not be reinvited.)
The Chairman also wishes to emphasize that members and observers should plan to attend the entire meeting, from Monday morning until Friday lunchtime.
The Chairman has granted an exception to this rule for the Secretary, but only for secretarial purposes.
It is expected that the wiki will eventually replace the existing website; in particular, it will replace the profile that we maintain, and allow members to update their own descriptions. The Secretary will distribute account information shortly.
We enjoyed artistic sights and sounds from an old friend, alluring scents from a master craftsman, and the exquisite tastes of fine regional foods and wines.
We even had two opportunities to feel our way around in the dark. But eventually we emerged into the light, and all was clarified.
A datatype-generic function is a family of functions indexed by (the structure of) a type. Examples of generic functions are equality tests, maps and pretty printers. QuickCheck is one of the most advanced tools for testing properties of functional programs. It supports the definition of properties and random test-data generators in Haskell, and checks if a monomorphic property is satisfied by the test cases. Generic functions satisfy generic properties, and this paper discusses specifying and testing such properties. It shows how generic properties can be formulated, and how QuickCheck can be used to test generic properties. Furthermore, it shows how to automatically generate QuickCheck generators using Generic Haskell. (Joint work with Johan Jeuring.) The slides are available, as is a paper.
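By way of illustration (mine, not the speakers'), the QuickCheck idea of checking a property against random test data can be sketched in a few lines of Python. The driver `quickcheck`, the generator `gen_list` and the property `map_fusion` are hypothetical names; the property is one instance of a generic law of maps, here tested at lists.

```python
import random

def quickcheck(prop, gen, runs=100):
    """Minimal QuickCheck-style driver: run a property on random inputs."""
    for _ in range(runs):
        x = gen()
        assert prop(x), f"Falsified by: {x!r}"
    return True

# A random generator for lists of small integers.
gen_list = lambda: [random.randint(-10, 10) for _ in range(random.randint(0, 20))]

# One instance of a generic property of map: mapping g and then f
# equals mapping their composition.
def map_fusion(xs, f=lambda n: n + 1, g=lambda n: 2 * n):
    return list(map(f, map(g, xs))) == [f(g(x)) for x in xs]

quickcheck(map_fusion, gen_list)
```

Real QuickCheck additionally shrinks counterexamples and derives generators from types; Generic Haskell, as the talk describes, can generate such generators automatically.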
At Oxford, we have just started a 42-month EPSRC-funded project on Generic and Indexed Programming, a continuation of the recently-completed Datatype-Generic Programming project. The `indexed' aspect of this project is about expressing more of the structure (specifically, invariants) of a program in the types. We have in mind things like the shape of a data structure, the state of a component, and some property of a represented value. Mechanisms such as Haskell's generalized algebraic datatypes can be used to provide a kind of lightweight dependently-typed programming. Similar ideas are expressible in modern OO languages such as Java and C#. In this talk, I will outline the aims of the project. The slides are available.
The extensional, state-based generation of policies for optimal discrete control is impractical when there are very many states since its complexity is polynomial in the number of states. To avoid this drawback, an intensional approach is proposed: optimal control guards of actions are generated by a symbolic iteration. The domain of states is stratified into domain strata, viz. sets of states having the same optimal value. The optimal guards are stratified into guard strata which are subsets of domain strata. The optimal values are generated by greedy iteration steps. The strata are generated by iteration steps which are guided by the greedy ones and are asynchronous: the guard strata for an action with an increasing cost depend on domain strata with decreasing ranks. The complexity of this symbolic technique is compared to that of symbolic methods for generating domains of discrete reachability. The slides are available.
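To make the vocabulary concrete (this toy is mine, not the speaker's, and uses an extensional state-by-state computation rather than the talk's symbolic one): optimal values are generated greedily, states are grouped into strata by optimal value, and the guard of an action is the set of states where taking it achieves the optimum. The transition table is hypothetical example data.

```python
import heapq

# A toy discrete control problem.  transitions[s] = list of
# (action, next_state, cost).  (Hypothetical example data.)
transitions = {
    'a': [('go', 'b', 1), ('jump', 'goal', 5)],
    'b': [('go', 'goal', 1)],
    'goal': [],
}

def optimal_values(goal='goal'):
    """Greedy (Dijkstra-style) iteration of optimal cost-to-go values."""
    rev = {}
    for s, outs in transitions.items():
        for act, t, c in outs:
            rev.setdefault(t, []).append((s, act, c))
    value = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        v, t = heapq.heappop(frontier)
        if v > value.get(t, float('inf')):
            continue
        for s, act, c in rev.get(t, []):
            if v + c < value.get(s, float('inf')):
                value[s] = v + c
                heapq.heappush(frontier, (v + c, s))
    return value

def strata(value):
    """Stratify the state space: one domain stratum per optimal value."""
    out = {}
    for s, v in value.items():
        out.setdefault(v, set()).add(s)
    return out

value = optimal_values()
# Optimal guard of action 'go': the states where taking it is optimal.
guard_go = {s for s, outs in transitions.items()
            for a, t, c in outs if a == 'go' and value[s] == c + value[t]}
```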
A component of our research on "algorithmic problem solving" is to rework standard mathematics in the light of what we have learnt about algorithm design. This talk is about identifying properties of greatest common divisors by analysing Euclid's algorithm. The algorithm is first developed from scratch using a Galois connection to specify greatest common divisor as the infimum operator in the division ordering of natural numbers. Standard properties of gcd's are then derived from the algorithm. A sufficient condition for determining when a function distributes through gcd is formulated and proved. (Joint work with João Ferreira.) The slides are available.
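The Galois-connection specification mentioned here can be checked directly on small numbers (an illustration of the specification, not of the talk's derivation): gcd(a, b) is the infimum of a and b in the division ordering, i.e. d divides gcd(a, b) exactly when d divides both a and b.

```python
def gcd(a, b):
    """Euclid's algorithm: gcd as infimum in the division ordering."""
    while b:
        a, b = b, a % b
    return a

def divides(d, n):
    return n % d == 0 if d else n == 0

# The Galois-connection specification:
#   d divides gcd(a, b)  <=>  d divides a  and  d divides b.
a, b = 12, 18
g = gcd(a, b)
assert all(divides(d, g) == (divides(d, a) and divides(d, b))
           for d in range(1, a + b + 1))
```

From this specification, standard properties (associativity, distributivity of multiplication through gcd) follow by general infimum reasoning, which is the point of the talk.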
Classical computer science theories (e.g. transition systems, parsing, databases) are written pointwise. What do we gain by replaying them in the (relational) pointfree style? This talk will try to answer this question by picking one such widespread theory -- relational database theory (in fact, the data dependency part of it, as far as this talk is concerned) -- and refactoring it in a "let the symbols do the work" style. It can be observed that clear-cut patterns replace long-winded formulae and semi-formal proofs (full of "..." notation, case analyses and English words) give way to compact, agile calculations (namely, that of the "lossless decomposition" theorem); disparate definitions are shown to be equivalent, and so on. Above all, the theory becomes generic: sets of tuples become binary relations, attributes generalize to arbitrary functions and injectivity is shown to be "what matters" after all. (Also a pro: students like it!)
There are, however, some difficulties. Feedback from the meeting is welcome concerning some hard bits, namely how to make attribute-set complementation (required in some MVD axioms) functional and generic. The slides are available. 
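As a small pointwise illustration of the lossless-decomposition theorem the abstract mentions (my example, not the talk's calculation): if the functional dependency A -> B holds, i.e. the binary relation from A-values to B-values is a function, then projecting onto {A, B} and {A, C} and joining on A recovers the original relation. The relation `R` and the attribute functions are hypothetical example data.

```python
# A relation as a set of tuples; attributes as projection functions.
R = {('p1', 'blue', 10), ('p2', 'red', 20), ('p1', 'blue', 30)}
A, B, C = (lambda t: t[0]), (lambda t: t[1]), (lambda t: t[2])

def fd_holds(rel, x, y):
    """x -> y holds iff the binary relation {(x t, y t)} is a function."""
    graph = {(x(t), y(t)) for t in rel}
    return len({u for u, _ in graph}) == len(graph)

def join_on_A(ab, ac):
    return {(a, b, c) for (a, b) in ab for (a2, c) in ac if a == a2}

assert fd_holds(R, A, B)
ab = {(A(t), B(t)) for t in R}
ac = {(A(t), C(t)) for t in R}
assert join_on_A(ab, ac) == R   # decomposition is lossless
```

In the pointfree rendering, `fd_holds` is exactly simplicity of a composed relation, which is what makes the proof a short calculation.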
List homomorphisms are a class of recursive functions on lists, which match very well with the divide-and-conquer paradigm. However, direct programming with list homomorphisms is a challenge for many programmers. This talk is about a new system that can automatically derive cost-optimal list homomorphisms from a pair of sequential programs, based on the third homomorphism theorem. Our idea is to reduce the extraction of list homomorphisms to the derivation of a weak right inverse. We show that a weak right inverse always exists and can be automatically generated from a wide class of sequential functions. We demonstrate our system with several nontrivial examples, including the maximum prefix sum problem, the prefix sum computation, the maximum segment sum problem, and the line-of-sight problem. The experimental results show the practical efficiency of our automatic parallelization algorithm and good speedups of the generated parallel programs. This is joint work with K. Morita, A. Morihata, K. Matsuzaki and M. Takeichi.
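For readers unfamiliar with the target of such a derivation, here is what a list homomorphism for the maximum prefix sum looks like (a standard example written by hand, not output of the authors' system): each sublist is summarised by the pair (mps, sum), and an associative operator combines summaries, enabling divide-and-conquer (and hence parallel) evaluation.

```python
# Maximum prefix sum as a list homomorphism.
def summarise(x):
    return (max(0, x), x)          # summary of the singleton list [x]

def combine(l, r):                 # associative combine of two summaries
    (m1, s1), (m2, s2) = l, r
    return (max(m1, s1 + m2), s1 + s2)

def mps_hom(xs):
    """Divide-and-conquer evaluation of the homomorphism."""
    if len(xs) == 1:
        return summarise(xs[0])
    mid = len(xs) // 2
    return combine(mps_hom(xs[:mid]), mps_hom(xs[mid:]))

def mps_seq(xs):
    """The straightforward sequential program."""
    best = acc = 0
    for x in xs:
        acc += x
        best = max(best, acc)
    return best

xs = [3, -4, 2, 5, -6, 1]
assert mps_hom(xs)[0] == mps_seq(xs)
```

The third homomorphism theorem guarantees that such a `combine` exists whenever the function can be computed by both a leftwards and a rightwards sequential pass; the talk's system derives it automatically via a weak right inverse.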
In response to an example by Jeremy, I wanted to show that dimensional analysis (units of measure) can be done at the level of values, rather than types, and with a minimum of additional linguistic mechanism. I would say that my attempt had one merit: that we can use the existing laws of arithmetic for unit cancellation and factoring and conversion, rather than teach these laws to a type system. But I would also say that my attempt also had several demerits, one of which is the need to state units in formulas, rather than letting a type system provide them implicitly.
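The value-level approach can be sketched as follows (my reconstruction of the general idea, not the speaker's actual formulation): a quantity carries its magnitude together with a map from base units to exponents, and the ordinary laws of arithmetic perform unit cancellation. The class name `Q` and the unit names are hypothetical.

```python
# Units of measure at the value level: a quantity is a magnitude plus a
# map from base units to exponents.  No type-system support is needed.
class Q:
    def __init__(self, mag, **units):
        self.mag = mag
        self.units = {u: e for u, e in units.items() if e != 0}

    def __mul__(self, other):
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + e
        return Q(self.mag * other.mag, **units)

    def __truediv__(self, other):
        inv = Q(1 / other.mag, **{u: -e for u, e in other.units.items()})
        return self * inv

    def __add__(self, other):
        assert self.units == other.units, "unit mismatch"
        return Q(self.mag + other.mag, **self.units)

    def __eq__(self, other):
        return (self.mag, self.units) == (other.mag, other.units)

distance = Q(10, m=1)              # 10 m
time = Q(2, s=1)                   # 2 s
speed = distance / time            # 5 m/s
assert speed == Q(5.0, m=1, s=-1)
assert speed * time == Q(10.0, m=1)   # the seconds cancel
```

The demerit mentioned above is visible here: every literal must state its units explicitly, whereas a type-level treatment could supply them implicitly.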
Modal semirings give algebraic semantics to the operators box and diamond. We show their application in modelling the logic KT45n that allows expressing common knowledge in multiagent systems. The resulting algebra is used to give a simple algebraic solution of the Wise Men Puzzle. The solution technique can be generalised to apply also to structurally similar puzzles such as the Muddy Children or the Unexpected Hangman. As a second application we show how to model preferences and their upgrade in an algebraic way. The slides are available.
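For comparison with the algebraic solution, the Wise Men Puzzle also has a brute-force possible-worlds solution (this enumeration is mine, in the standard "at least one white hat" form; the talk's point is precisely to replace such enumeration by algebra):

```python
from itertools import product

# Worlds assign a hat colour to each of three men; the king announces
# that at least one hat is white.  Man i sees every hat but his own.
worlds = {w for w in product('BW', repeat=3) if 'W' in w}

def knows(i, w, worlds):
    """Man i knows his hat in world w iff all worlds that look the same
    to him (agree everywhere except position i) agree on his own hat."""
    view = lambda v: v[:i] + v[i+1:]
    return len({v[i] for v in worlds if view(v) == view(w)}) == 1

# Men 1 and 2 announce in turn that they do not know their colour;
# each announcement removes the worlds in which they would know.
for i in (0, 1):
    worlds = {w for w in worlds if not knows(i, w, worlds)}

# After both announcements, the third man knows: his hat is white.
assert all(knows(2, w, worlds) and w[2] == 'W' for w in worlds)
```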
For the benefit of members and observers not present at the Rome meeting, I discussed the undergraduate module Algorithmic Problem Solving once again. The module is given in the first semester of the year, and is now compulsory for all Computer Science students at the University of Nottingham. The module is an example-driven introduction to problem solving, with an emphasis on problems that demand an algorithmic solution. The slides are available.
Two major concerns of study rest at the center of computer science: what to compute, and how to compute efficiently. Problem solving involves going from clear specifications for the "what" to efficient implementations for the "how". This is challenging because clear specifications usually correspond to straightforward implementations that are not at all efficient, while efficient implementations are usually difficult to understand and not at all clear. This talk gives an overview of a general and systematic method for transforming clear specifications into efficient implementations. We present the method through examples, taken from problems in hardware design and image processing expressed using loops and arrays, in query processing and access control expressed using set operations, in sequence processing and math puzzles expressed using recursive functions, in program analysis and trust management expressed using logic rules, and in building software components expressed using objects. We summarize our ongoing projects on a number of fronts. The slides are available.
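A minimal instance of the clear-versus-efficient gap (my toy, not one of the talk's examples): a specification that recomputes a sum for every prefix, and the implementation derived from it by maintaining the intermediate result incrementally across iterations.

```python
def spec(a):
    """Clear specification: for each i, the sum of a[0..i].  O(n^2)."""
    return [sum(a[:i + 1]) for i in range(len(a))]

def derived(a):
    """Derived implementation: maintain the running sum.  O(n)."""
    out, acc = [], 0
    for x in a:
        acc += x           # incremental update instead of recomputation
        out.append(acc)
    return out

a = [5, 1, -3, 7]
assert derived(a) == spec(a)
```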
No abstract provided.
We provide evidence of the usefulness of reasoning about process behaviour coalgebraically in the allegory of binary relations. The novelty of our approach consists in establishing a synergy between a relational construct, Reynolds' "relation on functions" involved in his abstraction theorem (traditionally employed in explaining and reasoning about parametric polymorphism), and the coalgebraic approach to bisimulation, invariants and other concepts of process algebra. This synergy arises from the fact that, once pointfree-transformed, formulae in one domain "look like" those in the other. We stress this syntactic aspect of formal reasoning, a kind of let-the-symbols-do-the-work style of calculation, often neglected by too much emphasis on domain-specific, semantic concerns. (Joint work with Alexandra Silva and Luís Barbosa.) The slides are available.
We present a technique based on unfold/fold transformations for deriving a winning strategy for an impartial two-person game. The initial program is a logic program which: (i) encodes the rules of the game, and (ii) defines a predicate which establishes when a player is in a winning position. Note that nondeterministic rules and negative conditions can easily be expressed in such a logic program. Then, by standard unfold/fold transformations, which include the elimination of nondeterminism and the elimination of redundant clauses, we get a program which tells us the next move to make to win the game, if winning is at all possible. The technique has been tested on a few games. We hope that it will prove very effective for many classes of games considered in the literature. (Joint work with Maurizio Proietti.) The slides are available.
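The kind of program the transformation aims at can be sketched for a concrete game (my hypothetical instance, written directly rather than derived by unfold/fold): a subtraction game where a move removes 1, 3 or 4 stones, the player unable to move loses, and the derived predicate both decides winning positions and produces the move to make.

```python
from functools import lru_cache

MOVES = (1, 3, 4)   # hypothetical game: remove 1, 3 or 4 stones

@lru_cache(maxsize=None)
def winning(n):
    """A position is winning iff some move leads to a losing position."""
    return any(n >= m and not winning(n - m) for m in MOVES)

def next_move(n):
    """The move to make from a winning position, if any."""
    for m in MOVES:
        if n >= m and not winning(n - m):
            return m
    return None
```

For this game the losing positions turn out to be those congruent to 0 or 2 modulo 7, a fact the memoised predicate confirms by exhaustion.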
Datatype-generic programs are programs parametrized by a datatype or type functor. There are two main styles of datatype-generic programming: the Algebra of Programming approach, characterized by structured recursion operators parametrized by a shape functor, and the Generic Haskell approach, characterized by case analysis over the structure of a datatype. We show that the former enjoys a kind of parametricity, relating the behaviours of generic functions at different types; in contrast, the latter is more ad hoc, with no coherence required or provided between the various clauses. (Joint work with Ross Paterson.) The slides are available.
The concept of recursive coalgebra of a functor was introduced in the 1970s by Osius in his work on categorical set theory to discuss the relationship between wellfounded induction and recursively specified functions. In this talk, we motivate the use of recursive coalgebras as a paradigm of structured recursion in programming semantics, list some basic facts about recursive coalgebras and, centrally, give new conditions for the recursiveness of a coalgebra based on comonads, comonad-coalgebras and distributive laws of functors over comonads. We also present an alternative construction using countable products instead of cofree comonads. (Joint work with Tarmo Uustalu and Venanzio Capretta.) The slides are available.
Using automata for representing sets of numbers is an old idea, originally introduced as a tool for reasoning about decidable fragments of arithmetic. There is now a growing interest in using such automata-based representations as data structures for dealing with the sets of configurations that have to be handled during symbolic state-space exploration, especially when the analyzed systems rely on integer and real variables. In this survey, we present automata-based representations of integer and real vectors, describe their main properties, and show how they can be used for symbolically exploring infinite state-spaces, as well as for manipulating systems of linear constraints.
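The simplest case of the idea (a textbook example, not drawn from the survey): a set of naturals is represented by a finite automaton over binary digits. Here the multiples of 3, read most significant bit first, with the automaton's state tracking the value read so far modulo 3.

```python
def accepts_multiple_of_3(n):
    """Run the 3-state DFA for {n : n mod 3 = 0} on n's binary digits."""
    state = 0
    for bit in map(int, format(n, 'b')):
        state = (2 * state + bit) % 3   # new value mod 3 after reading bit
    return state == 0

assert all(accepts_multiple_of_3(n) == (n % 3 == 0) for n in range(200))
```

Set operations (union, intersection, complement, projection) then become the standard automaton constructions, which is what makes such representations usable as symbolic data structures.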
No abstract provided. (Joint work with João Fernandes and João Saraiva.) The slides are available.
Multirelations model dual non-determinism, but how suited are multirelations to modelling two-player games? This talk presents a way of modelling finite two-player alternating no-stalemate games with complete information, using multirelations, and illustrates how the existence of a winning strategy can be calculated. (Joint work with Clare Martin.) The slides are available.
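One way to make the dual non-determinism concrete (my sketch of the general setup, not the talk's calculation): a multirelation maps a state to a set of alternatives; one player (the angel) picks an alternative, the other (the demon) picks a state from it. A state is winning for the angel if some alternative exists all of whose demon-choices are already winning. The multirelation `M` is hypothetical example data.

```python
# A multirelation as a map from states to lists of alternatives (sets
# of states).  Hypothetical example data.
M = {
    's0': [{'s1', 's2'}, {'s3'}],
    's1': [{'win'}],
    's2': [{'win'}, {'lose'}],
    's3': [{'lose'}],
}

def winning_states(goal={'win'}):
    """Least fixpoint of the angelic-then-demonic step."""
    win = set(goal)
    changed = True
    while changed:
        changed = False
        for s, alts in M.items():
            # Angel needs some alternative whose every element is winning.
            if s not in win and any(alt <= win for alt in alts):
                win.add(s)
                changed = True
    return win
```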
We consider three types of bottom-up tree transducers of increasing levels of generality: relabelling (branching-preserving), rebranching (layering-preserving) and relayering (general) tree transducers. For each type, we show the equivalence of three categories, in fact, arrows: the transducers of this type (modulo bisimilarity), the (co/bi)-Kleisli category of a comonad/distributive law, and a subclass of tree functions. (Joint work with Ichiro Hasuo and Bart Jacobs.) The slides are available.
Traditional type inference aims at reconstructing type declarations given some type definitions. We present a rule-based constraint rewriting algorithm that reconstructs both type declarations and type definitions. Our algorithm reconstructs uniform polymorphic definitions of algebraic data types and simultaneously infers the types of all expressions and functions (supporting polymorphic recursion) in the program. The declarative nature of the algorithm allows us to easily show that it has a number of highly desirable properties such as soundness, completeness and various optimality properties. Moreover, we show how to easily extend and adapt it to suit a number of different language constructs and type system features. The slides are available.
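At the core of any such constraint rewriting lies unification of type terms; a minimal sketch (standard Robinson unification, not the paper's algorithm) gives a feel for the rewriting style, with variables as strings and constructors as tuples like `('->', t1, t2)`:

```python
def unify(t1, t2, subst=None):
    """Return a substitution unifying t1 and t2, or None."""
    subst = dict(subst or {})

    def walk(t):                       # chase variable bindings
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t):                  # occurs check
        t = walk(t)
        return t == v or (isinstance(t, tuple)
                          and any(occurs(v, u) for u in t[1:]))

    def go(a, b):
        a, b = walk(a), walk(b)
        if a == b:
            return True
        if isinstance(a, str):
            if occurs(a, b):
                return False
            subst[a] = b
            return True
        if isinstance(b, str):
            return go(b, a)
        return (a[0] == b[0] and len(a) == len(b)
                and all(go(x, y) for x, y in zip(a[1:], b[1:])))

    return subst if go(t1, t2) else None

# Unify  a -> b  with  Int -> Int:
s = unify(('->', 'a', 'b'), ('->', ('Int',), ('Int',)))
assert s == {'a': ('Int',), 'b': ('Int',)}
```

The paper's algorithm additionally rewrites constraints to invent the data type definitions themselves, which plain unification does not do.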
Refinement algebras are abstract algebras intended for reasoning about programs in a total-correctness setting. I present three different algebras: 1. demonic refinement algebra for reasoning about demonically non-deterministic programs, 2. dually nondeterministic refinement algebra facilitating reasoning about demonic as well as angelic choice, and 3. probabilistic refinement algebra.
No abstract provided.