(Last updated 8th October 2007.)
The chairman will ask Swierstra whether the group's wiki can be moved to MediaWiki, since MediaWiki makes it easier to format mathematics. Perhaps there was a good reason to use TWiki?
We digested novel experiences from breakfast until dinner. Not only did we learn how to carry the torch as a group across a bridge, we now also know better than to worry about inconsistencies.
Having seen rivulets come together and become a fast-flowing stream that has to be navigated in narrow straits between the rocks, we have been shown how this can be achieved by combining strength, skill, and experience.
It has been shown that list homomorphisms, a class of recursive functions suitable for parallel computation on lists, can be automatically derived from a pair of sequential programs, based on the third homomorphism theorem and weak inversion. In this talk, we show that it is possible to generalize the framework from lists to trees. We give a tree version of the third homomorphism theorem, and propose a calculation method for the systematic derivation of tree homomorphisms suitable for divide-and-conquer parallel computation. (Joint work with Akimasa Morihata.) The slides are available.
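The list case that the talk generalizes can be sketched as follows. This is a minimal illustration, not the talk's derivation; the function names `hom` and `sumH` are mine. A list homomorphism `h` satisfies `h (xs ++ ys) = h xs \`op\` h ys` for an associative `op`, which is exactly what licenses divide-and-conquer evaluation.

```haskell
-- Illustrative sketch: a list homomorphism evaluated by divide and conquer.
-- hom op f e is the homomorphism with combining operator op (assumed
-- associative with unit e) and per-element function f.
hom :: (b -> b -> b) -> (a -> b) -> b -> [a] -> b
hom _  _ e []  = e
hom _  f _ [x] = f x
hom op f e xs  = hom op f e l `op` hom op f e r
  where (l, r) = splitAt (length xs `div` 2) xs

-- The two recursive calls are independent, so they could run in parallel.
sumH :: [Int] -> Int
sumH = hom (+) id 0
```

The third homomorphism theorem says that whenever a function can be written both as a `foldl` and as a `foldr`, such a `hom` form exists; the talk extends this from lists to trees.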
Feature-Oriented Software Development (FOSD) provides a multitude of formalisms, methods, languages, and tools for building variable, customizable, and extensible software. Different lines of research have developed different ideas of what a feature is. Although the existing approaches have similar goals, their representations and formalizations have so far not been integrated into a common framework. We present a feature algebra as a foundation of FOSD. The algebra captures the key ideas and provides a common ground for current and future research in this field, in which alternative options can also be explored. A paper is available.
Jeremy Gibbons, David Lester and Richard Bird recently published a Functional Pearl on an effective way of enumerating the rationals in "Calkin-Wilf" order. They remarked that it is "not obvious" how to do the same in "Stern-Brocot" order. We show, with one algorithm, how to do both. The work is a continuation of our research on improving mathematical method, reported on at the last meeting. A paper is available.
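For context, the Calkin-Wilf enumeration from the pearl can be sketched in a few lines using the well-known successor function (each rational x is followed by 1 / (⌊x⌋ + 1 − {x})); the single algorithm for both orders presented in the talk is a different, more general construction.

```haskell
-- Enumerating the positive rationals in Calkin-Wilf order: every positive
-- rational appears exactly once, with no duplicates.
rats :: [Rational]
rats = iterate next 1
  where
    next x = recip (fromInteger n + 1 - y)
      where (n, y) = properFraction x :: (Integer, Rational)
```

The first few elements are 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ... — the breadth-first traversal of the Calkin-Wilf tree.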
It may be surprising that variations of the maximum segment sum (MSS) problem, a textbook example for squiggolists, are still active topics for algorithm designers. I would like to examine the new developments from the viewpoint of relational program calculation. It turns out that while the classical MSS problem is solved by the Greedy Theorem, applying the Thinning Theorem yields a linear-time algorithm for MSS with an upper bound on length. To derive an algorithm for the maximum segment density problem, on the other hand, we need a global notion of thinning that is stronger than the Thinning Theorem. The slides are available.
The concepts of left-negative and right-screw segments emerge from the search for monotonicity conditions. The efficiency of the resulting algorithms crucially relies on exploiting properties of the set of partial solutions and on designing efficient data structures for them.
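For readers who have not met it, the end point of the calculation for the classical MSS problem (without length bounds) is the familiar linear-time fold, sketched here; segments include the empty segment, so the result is never negative.

```haskell
-- Classical maximum segment sum, as derived by program calculation:
-- scan computes the best sum of a segment ending at each position,
-- and the answer is the maximum of those.
mss :: [Int] -> Int
mss = maximum . scanl step 0
  where step acc x = max 0 (acc + x)
```

The variations discussed in the talk (upper bound on length, maximum density) do not succumb to this one-accumulator fold, which is why thinning and stronger techniques are needed.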
Exercises in mathematics are often solved using a standard procedure, such as solving a system of linear equations by subtracting equations from top to bottom and then substituting variables from bottom to top. Students have to practice such procedural skills: they have to learn how to apply a particular strategy to an exercise. E-learning systems offer excellent possibilities for practicing procedural skills. The first explanations of, and motivation for, a procedure that solves a particular kind of problem are probably best taught in a classroom or studied in a book, but the subsequent practice can often be done at a computer. Many e-learning systems and intelligent tutoring systems support practicing procedural skills. The tools vary widely in breadth, depth, user interface, etc., but, unfortunately, almost all of them lack sophisticated techniques for providing immediate feedback. Where feedback mechanisms are present, they are hard-coded in the tools, often even in the exercises. This situation hampers the use of e-learning systems for practicing mathematical skills. This paper introduces a formalism for specifying strategies for solving exercises. It shows how a strategy can be viewed as a language in which sentences consist of transformation steps. Furthermore, it discusses how we can use advanced techniques from computer science, such as term rewriting, strategies, error-correcting parsers, and parser combinators, to provide feedback at each intermediate step on the way from the start to the solution of an exercise. Our goal is to obtain e-learning systems that give immediate and useful feedback. (Joint work with Arthur van Leeuwen and Wouter Pasman.) The slides are available.
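The "strategy as a language" idea can be sketched with a toy combinator library; the datatype and names below are mine, not the paper's formalism, but they show the key point: running a strategy enumerates sequences of transformation steps, against which a student's intermediate term can be checked.

```haskell
-- A rewrite step either applies or fails.
type Rule a = a -> Maybe a

-- A strategy is a grammar over rewrite steps.
data Strategy a
  = Apply (Rule a)                 -- a single step
  | Seq (Strategy a) (Strategy a)  -- first one strategy, then the other
  | Alt (Strategy a) (Strategy a)  -- either strategy
  | Many (Strategy a)              -- zero or more repetitions

-- All terms reachable by following the strategy; each result is reached by
-- one "sentence" of transformation steps.
run :: Strategy a -> a -> [a]
run (Apply r) x = maybe [] (: []) (r x)
run (Seq s t) x = concatMap (run t) (run s x)
run (Alt s t) x = run s x ++ run t x
run (Many s)  x = x : concatMap (run (Many s)) (run s x)
```

A feedback engine would compare a submitted intermediate term against the set of terms reachable so far, which is where error-correcting parsing techniques come in.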
In this presentation I will give an overview of my thesis. Its main result is that we can capture the Visitor pattern as an extensible software component that is parametrizable both in the shape of a concrete visitor and in the decomposition strategy used to define functions over concrete visitors. This contrasts with the traditional presentation, where visitors are given as a design pattern and both of these parametrization aspects are fixed as early design choices in the implementation, basically precluding any form of reuse. Furthermore, it is shown how we can use DGP techniques to write generic functions over visitors defined using our software component. My work builds on type-theoretic results on encodings of datatypes, the relation of different encodings to different recursion patterns, and the fact that the Visitor design pattern and encodings of datatypes are closely related. The slides are available.
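The connection between visitors and encodings of datatypes can be illustrated in Haskell; this is a stand-in for the thesis's component, not its actual formulation. A datatype's fold plays the role of `accept`, and a "concrete visitor" is just one case per constructor.

```haskell
-- A datatype and its fold: the fold is an "internal visitor", and the
-- pair of case functions is a concrete visitor.
data Tree a = Leaf a | Fork (Tree a) (Tree a)

visit :: (a -> r) -> (r -> r -> r) -> Tree a -> r
visit leaf _    (Leaf x)   = leaf x
visit leaf fork (Fork l r) = fork (visit leaf fork l) (visit leaf fork r)

-- Two concrete visitors reusing the same traversal:
size :: Tree a -> Int
size = visit (const 1) (+)

flatten :: Tree a -> [a]
flatten = visit (: []) (++)
```

The thesis additionally abstracts over the decomposition strategy (e.g. whether recursive calls are made by the traversal or left to the visitor), which this fixed `visit` does not show.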
The torch (flashlight/bridge) problem is the problem of getting a number of people across a bridge of limited capacity. A solution for the case where the capacity of the bridge is 2 is well known. I will present a solution for the more general problem where the capacity is an input parameter. The solution has worst-case complexity O(N²), where N is the number of people. Before embarking on the problem, I was told that it is believed to be "hard", and I do not know of any other published solution. The problem is indeed surprisingly difficult to solve, but it is not "hard" in any technical sense. I strongly suspect that my solution can be improved; the conjectured improvements lead to interesting algorithmic challenges. A paper is available.
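The well-known capacity-2 solution that the talk generalises can be sketched as follows (this is the textbook algorithm, not the talk's general one): sort crossing times, then repeatedly send the two slowest across, either escorted by the two fastest or chaperoned twice by the fastest alone, whichever is cheaper.

```haskell
import Data.List (sort)

-- Minimum total crossing time for bridge capacity 2, given each
-- person's crossing time.  Crossings of a pair take the slower time.
torch :: [Int] -> Int
torch = go . sort
  where
    go []           = 0
    go [a]          = a
    go [_, b]       = b
    go [a, b, c]    = a + b + c
    go ts@(a : b : _) =
      let n      = length ts
          (y, x) = (ts !! (n - 2), ts !! (n - 1))  -- two slowest
          opt1   = b + a + x + b   -- a,b cross; a back; x,y cross; b back
          opt2   = x + a + y + a   -- a,x cross; a back; a,y cross; a back
      in min opt1 opt2 + go (take (n - 2) ts)
```

On the classic instance with times 1, 2, 5, 10 this gives the famous answer 17.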
We develop virtual machines and compilers for a multi-level language, which supports multi-stage specialization by composing program fragments with quotation mechanisms. We consider two styles of virtual machines---ones equipped with special instructions for code generation and ones without---and show that the latter kind can more easily support low-level code generation, which avoids the overhead of (run-time) compilation by manipulating instruction sequences, rather than source-level terms, as data. The virtual machines and accompanying compilers are derived by program transformation, extending Ager et al.'s derivation of virtual machines from evaluators. (Joint work with Masashi Iwaki.)
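The flavour of the evaluator-to-virtual-machine correspondence can be shown in miniature (this toy single-level example is mine, not the paper's multi-level language): an evaluator, and a compiler plus stack machine that compute the same result.

```haskell
-- A tiny expression language.
data Expr = Lit Int | Add Expr Expr

-- Direct evaluator.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

-- Compiler to a stack machine, of the kind one derives from eval
-- by program transformation (CPS transformation and defunctionalization).
data Instr = Push Int | IAdd

compile :: Expr -> [Instr]
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [IAdd]

exec :: [Instr] -> [Int] -> Int
exec []              (v : _)      = v
exec (Push n : is)   vs           = exec is (n : vs)
exec (IAdd : is)     (y : x : vs) = exec is (x + y : vs)
exec _               _            = error "ill-formed code"
```

The paper's derivation extends this picture with quotation, so that code (instruction sequences) is itself a run-time value.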
We show that cyclic structures, i.e., finite structures with back-pointers that unwind into possibly infinite structures, can be elegantly represented as nested datatypes. This representation is free of the various deficiencies of the more naive representation as mixed-variant datatypes. It is inspired by the representation of lambda-terms as a nested datatype via de Bruijn notation. (Joint work with N. Ghani, M. Hamana, and T. Uustalu.) The slides are available.
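A sketch in the spirit of the talk (my formulation; the paper's definitions may differ in detail): cyclic lists as a nested datatype, where each constructor binds a de Bruijn-style name for the node it builds, so back-pointers can only refer to enclosing nodes.

```haskell
-- v is the type of pointer names in scope; RCons binds a fresh name
-- (Nothing) for its own node, so the parameter is nested as Maybe v.
data CList a v = Var v | Nil | RCons a (CList a (Maybe v))

-- Unwinding into a (possibly infinite) ordinary list, knot-tying via
-- the environment of back-pointer targets.
unwind :: (v -> [a]) -> CList a v -> [a]
unwind env (Var v)     = env v
unwind _   Nil         = []
unwind env (RCons x t) = xs
  where xs = x : unwind env' t
        env' Nothing  = xs
        env' (Just v) = env v

-- The cyclic list 1 : 2 : <back to the node carrying 1>.
onetwo :: CList Int ()   -- closed: the () environment is never consulted
onetwo = RCons 1 (RCons 2 (Var (Just Nothing)))
```

Because ill-scoped back-pointers are type errors, this representation avoids the junk that plagues the mixed-variant encoding.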
Specware is the "flagship" system of Kestrel Institute, developed over a number of years as a system supporting the correct-by-construction paradigm. An earlier system, KIDS (Kestrel Interactive Development System, Doug Smith, 1990), was designed for, and excelled at, transformational programming in the small. Specware was designed for system development in the large; unfortunately, the initial designers paid scant attention to the convenience of carrying out the small steps. Recently, Kestrel Institute has begun to address this issue. The talk is a progress report, discussing an improved notion of refinement between specs based on a new notion of the "observable" ops of a spec, and newly added transformational capabilities. This relatively small effort has already achieved a marked improvement in the practical usability of Specware. (Joint work with Alessandro Coglio and Stephen Westfold, Kestrel Institute.) The slides are available.
No abstract received. The slides are available.
Agda is a functional language for programming with dependent types. In this tutorial/demo I present a subset of the notation, starting from basic lambda calculus and moving through simple datatypes, datatypes with strong invariants, the Curry-Howard isomorphism, and equational reasoning. The strong type system means that theorems can be expressed as types and proofs as programs, with the Agda implementation acting as a proof checker. More information about Agda2 is available (website, paper).
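A weak Haskell approximation of the "strong invariants in types" idea (Agda's dependent types go much further, all the way to full theorem proving): encode a list's length in its type, so that taking the head of an empty vector is rejected by the type checker instead of failing at run time.

```haskell
-- Type-level naturals (empty data declarations are standard Haskell 2010).
data Z
data S n

-- A list tagged with its length; the constructor is kept abstract and
-- only the length-respecting smart constructors below are used.
newtype Vec n a = Vec [a]

vnil :: Vec Z a
vnil = Vec []

vcons :: a -> Vec n a -> Vec (S n) a
vcons x (Vec xs) = Vec (x : xs)

-- Only non-empty vectors are accepted: vhead vnil is a type error.
vhead :: Vec (S n) a -> a
vhead (Vec (x : _)) = x
```

In Agda the index would be a genuine natural number and `vhead`'s totality would be checked, which is the point of the tutorial.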
No abstract received. The slides are available.
Recently, Garcia, Järvi et al. presented "An extended comparative study of language support for generic programming" [JFP 2007], in which Haskell scores best of the eight languages compared (C++, SML, OCaml, Haskell, Eiffel, Java, C#, Cecil). There are numerous (> 8) approaches to generic programming in Haskell alone, and so far no clear winner has emerged. This talk presents work in progress on comparing libraries for generic programming in Haskell with respect to implementation techniques, expressiveness, and efficiency. (Joint work with Johan Jeuring et al.) The slides are available.
Program calculation is a methodology for the systematic construction of efficient algorithms, in which efficient algorithms are derived by calculational laws from naive but obviously correct ones. In this presentation, I propose a new calculational law for combinatorial optimization problems, which is not only helpful for deriving dynamic programming algorithms for a wide class of combinatorial optimization problems, but also suitable for automatic implementation. I show a derivation of an algorithm for resource-constrained shortest path problems, and explain how my calculational law works. The slides are available.
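The kind of dynamic program such laws aim to derive can be illustrated with a thinning-based 0/1 knapsack (my example, not the talk's resource-constrained shortest path problem): after each item, keep only the Pareto-optimal (weight, value) partial solutions.

```haskell
import Data.List (sortBy)
import Data.Ord (Down (..), comparing)

-- Keep only Pareto-optimal (weight, value) pairs: sort by weight
-- (heavier-but-not-better pairs sort later) and keep strict value improvements.
thin :: [(Int, Int)] -> [(Int, Int)]
thin = go minBound . sortBy (comparing (\(w, v) -> (w, Down v)))
  where
    go _ [] = []
    go best ((w, v) : rest)
      | v > best  = (w, v) : go v rest
      | otherwise = go best rest

-- 0/1 knapsack by dynamic programming over thinned partial solutions.
knapsack :: Int -> [(Int, Int)] -> Int
knapsack cap = maximum . map snd . foldl step [(0, 0)]
  where
    step sols (w, v) =
      thin (sols ++ [(w' + w, v' + v) | (w', v') <- sols, w' + w <= cap])
```

The calculational laws in the talk aim to justify exactly this shape of program — a fold over the input maintaining a thinned solution set — mechanically from the problem specification.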
We have been using algebraic axiom systems for a variety of application areas over the last few years. Recently, it turned out that these lend themselves readily to mechanization via existing theorem provers, notably W. McCune's Prover9/Mace4. The axioms can be typed in directly, without extra preparation, and the system is able to prove relevant consequences fully automatically at surprising speed, although guidance through lemmas may help in proving more complicated results. A repertoire of the topics that have already been treated in this way is available online. In this talk we exemplify the technique with part of the axiom system from the presentation "An Algebra for Feature-Oriented Software Development" and several theorems from term rewriting and the refinement calculus.
Further examples can be found in the following papers:
- P. Höfner, G. Struth: Automated Reasoning in Kleene Algebra. In F. Pfenning (ed.): CADE 2007. LNAI 4603. Springer, 2007, 279-294.
- P. Höfner, G. Struth: Can Refinement be Automated? In E. Boiten, J. Derrick, G. Smith (eds.): International Refinement Workshop - Refine 2007. ENTCS, 2007.
I propose a semiring formulation for reasoning about an agent's changing beliefs: a dynamic epistemic semiring (DES). A DES is a modal semiring extended with epistemic-action operators. I concentrate on the revision operator by proposing an axiomatisation, developing a basic calculus and deriving the classical AGM revision axioms in the algebra.
Rabin's (distributed, probabilistic) mutual exclusion algorithm addresses the liveness of resource sharing by guaranteeing a probabilistic lower bound on the chance that a process competing for a shared resource will actually get it. At its core is a clever and unexpected mathematical "gem" around which the details of concurrency and overlapping executions are set: the gem is not obvious, but is nevertheless easily proved; the details of organising the concurrency, on the other hand, are obvious but not so easily proved. In fact, there was an error in Rabin's original presentation. "Horses for Courses" means choosing the right tool for the job. Proving Rabin's algorithm in full rigour seems so difficult that McIver decided to split the proof into "layers", one of which (the most intricate) would be handled by a probabilistic extension of Cohen's Separation-and-Reduction technique, itself based on (non-probabilistic) Kleene algebra. (The study of that algebra is the connection with WG2.1's interests.)
The talk summarises the history of the problem and the proposed solution strategy, and sketches what the progress so far has delivered. There turn out to be interesting links between pKA and other Kleene-style algebras invented for a wholly different purpose. The slides are available.
Large specifications (today also referred to as "models") exhibit the same problem as large programs: they need to be developed, ideally in a systematic derivation process. We discuss means for such a "model evolution process" based on the formalism of Especs (comparable to Statecharts) and generalized refinement morphisms. To this end we also employ a kind of modal operator. The concepts can also be used to assist other development aspects, such as the weaving of global policies into local specifications. The approach treats discrete models as well as continuous and hybrid models. (Joint work with Doug Smith and Dusko Pavlovic.)