Contents of this page: Administrative matters | Technical presentations. (Last updated 12th June 2001.)
There will be no ordinary meeting attached to the Working Conference; it was decided that the one-day meeting at Le Bischenberg had not been a success. Therefore the Working Conference will not be numbered in the usual sequence.
This meeting represented the highest level yet attained by the working group. In no way did we suffer a drought during the week, and we succeeded in fording the consequent river without being swept away, so that we could continue with our explorations.
The genericity of the visitor patterns studied—whether expressed as comonadic recursion, object-oriented paramorphisms or graph algebra—can be summed up in the following traversal:
[( Arriba ; Abajo ; AlCentro ; Adentro )]
In this talk I discuss some problems that show up when trying to generate generic functions from functions on particular datatypes.
We give a necessary and sufficient condition for when a set-theoretic function can be written using the recursion operator fold, and a dual condition for the recursion operator unfold. The conditions are simple, practically useful, and generic in the underlying datatype. A paper is available.

Our proof of this result, and its dual for unfolds, applies only in the category SET. The proof depends on regularity properties of arrows: for all non-trivial arrows f in SET there exists a g such that f . g . f = f, but the same does not apply in other interesting categories such as REL and CPO. Nevertheless, we see no reason why the result itself should not apply in these alternative settings; but that is a subject for future work.
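For lists, the condition specializes to a simple test: if h is to be a fold, then equal results on tails must force equal results after consing any head. A small Haskell illustration (the `isSorted` example is ours, not from the paper):

```haskell
-- For h to be foldr f e, we need: h xs == h ys implies h (x:xs) == h (x:ys).
isSorted :: [Int] -> Bool
isSorted xs = and (zipWith (<=) xs (tail xs))

-- Counterexample to the condition:
--   isSorted [2] == isSorted [0]            (both True), yet
--   isSorted (1:[2]) /= isSorted (1:[0])    (True vs False),
-- so isSorted cannot be written directly as a foldr with any step function.
```

The point is that a fold may only inspect the result of the recursive call, and `isSorted` on a tail discards exactly the information (the head of the tail) that the next step needs.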
(Lambert Meertens showed how to simplify the proof considerably. His simplification also dispenses with the use of regularity. However, the proof still is specific to SET.)
Over the last decade I have worked with colleagues on several different projects to develop, implement, and automate the use of calculi for program synthesis and transformation. These projects had different motivations and goals and differed too in the kinds of programs synthesized (e.g., functional programs, logic programs, and even circuit descriptions). However, despite their differences they were all based on three simple ideas. First, calculi can be formally derived in a rich enough logic (e.g., higher-order logic). Second, higher-order resolution is the central mechanism used to synthesize programs during proofs of their correctness. And third, synthesis proofs have a predictable form and can be partially or completely automated. In this talk I explain these ideas and illustrate the general methodology employed. Some papers were distributed at the meeting, on synthesis of functional programs, program schemata as derived rules, logic-framework-based program synthesis, and hardware synthesis.
There are many nice examples of derivations of functional programs by calculational (transformational) methods. We appear to lack successful examples of similarly elegant derivations of state-based programs, where the notion of `state' is not introduced for the sake of efficiency, but is an integral part of the specification from the very start. The challenge is to start from the specification and derive an implementation, not to verify an implementation post-hoc.
The object-oriented style is gaining much visibility. It is related to a number of interesting methodological and technical issues: requirements modelling; design patterns; interaction between genericity, inheritance, dynamic scoping, side-effects, and concurrency. On the other hand, research in system design stresses refinement-based approaches, from requirements analysis to actual implementation. These approaches aim at rather clear and precise techniques. Should we integrate these two world-views? If yes, how?
No abstract provided.
Many functions on context-free languages can be expressed in the form of the least fixed point of a function whose definition mimics the grammar of the given language. Examples include the function returning the length of the shortest word in a language, and the function returning the smallest number of edit operations required to transform a given word into a word in a language. A paper is available.

The talk presented the basic theory that explains when a function on a context-free language can be defined in this way. The main theorem presented was that the domain and range of the function should both be a regular algebra (to be precise, an S-algebra in Conway's terms) and the function should be a monoid homomorphism. Methods for constructing regular algebras, in particular graph algebras, were discussed and applied to the language recognition problem. The methods provide a basis for a methodology of program construction.
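As a small illustration of the idea (our own sketch, not from the paper), the shortest-word length for the hypothetical grammar S → a S b | c is the least fixed point of an equation mimicking the productions, computed by Kleene iteration in the (min, +) algebra:

```haskell
-- lengths with "infinity" (Nothing) for "no word derived yet"
type Len = Maybe Int

minL :: Len -> Len -> Len
minL Nothing y = y
minL x Nothing = x
minL (Just a) (Just b) = Just (min a b)

addL :: Len -> Len -> Len
addL (Just a) (Just b) = Just (a + b)
addL _ _ = Nothing

-- the equation mimics the grammar S -> a S b | c
step :: Len -> Len
step s = minL (Just 1 `addL` s `addL` Just 1)  -- production a S b
              (Just 1)                         -- production c

-- Kleene iteration to the least fixed point
lfp :: Eq a => (a -> a) -> a -> a
lfp f x = let x' = f x in if x' == x then x else lfp f x'

-- lfp step Nothing == Just 1: the shortest word in the language is "c"
```

Mutually recursive nonterminals would simply become a tuple (or table) of equations iterated simultaneously.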
Program refactoring (in general) means restructuring a program so that one can understand, adapt, or extend it more easily. In refactoring functional programs, we deal with the various abstractions offered by modern functional languages, e.g., datatypes, functions, classes, modules. Based on a suitable catalog of refactorings, we can communicate desired adaptations of functional programs. The underlying transformations can be specified in a deductive style, as in natural semantics. Semantics preservation (and hence the validity) of a refactoring often needs to be established in a refactoring-specific manner; e.g., a refactoring for monad introduction requires relating computations and values. Tool support for performing the actual refactoring is desirable and feasible. This is work in progress. An earlier paper on the subject is available online.
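The monad-introduction refactoring mentioned above can be suggested on a toy evaluator (a hypothetical example of ours, not taken from the paper): validating it requires relating the refactored computations back to the original values.

```haskell
data Expr = Lit Int | Div Expr Expr

-- before: a total evaluator (which crashes on division by zero)
eval0 :: Expr -> Int
eval0 (Lit n)   = n
eval0 (Div a b) = eval0 a `div` eval0 b

-- after monad introduction: results become Maybe computations, related to
-- the original values through return and (>>=)
eval1 :: Expr -> Maybe Int
eval1 (Lit n)   = return n
eval1 (Div a b) = eval1 a >>= \x ->
                  eval1 b >>= \y ->
                  if y == 0 then Nothing else return (x `div` y)

-- eval1 (Div (Lit 6) (Lit 2)) == Just 3
-- eval1 (Div (Lit 6) (Lit 0)) == Nothing
```

Semantics preservation here means: whenever `eval0 e` is defined, `eval1 e == Just (eval0 e)` — exactly the kind of computation/value relation the abstract alludes to.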
Different people mean different things by the term generic programming. In particular, the C++ community uses the term to refer to the kinds of abstract programs capturable with the Standard Template Library, a collection of data structures and algorithms in C++. If anything is, this is the majority definition of the term. I'd like to use the term to mean a type-parameterized theory of programming, that is, a theory of programs parameterized by types or type constructors. For example, the generic map function is instantiated with parameters like List and Tree.
In this talk, I explained the C++ STL from the point of view of the type-parameterized theory of programming, and explained in what way the latter is much more general than the former.
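The "one map, many type constructors" view can be made concrete in Haskell, where the `Functor` class plays the role of the type-constructor parameter (our illustration of the idea, not material from the talk):

```haskell
data Tree a = Leaf a | Fork (Tree a) (Tree a)
  deriving Show

-- Tree joins List as an instance of the one generic map
instance Functor Tree where
  fmap f (Leaf a)   = Leaf (f a)
  fmap f (Fork l r) = Fork (fmap f l) (fmap f r)

-- fmap (+1) [1,2,3]                    -- map instantiated at List
-- fmap (+1) (Fork (Leaf 1) (Leaf 2))   -- map instantiated at Tree
```

The contrast with the STL is that here the parameterization by the container is part of the type system itself, not a convention over iterators.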
Elements of Kleene algebras can be used, among other things, as abstractions of the input-output semantics of nondeterministic programs or as models for the association of pointers with their target objects. In the first case, one seeks to distinguish the subclass of elements that correspond to deterministic programs. In the second case one is only interested in functional correspondences, since it does not make sense for a pointer to point to two different objects. This is joint work with Jules Desharnais.

We discuss several candidate notions of determinacy and clarify their relationship. Some characterizations that are equivalent in the case where the underlying Kleene algebra is an (abstract) relation algebra are not equivalent for general Kleene algebras.
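In the concrete relational model, one familiar characterization says an element a is deterministic when its converse composed with a is below the identity. A minimal Haskell sketch over finite relations (our illustration; the talk's point is precisely that such characterizations can come apart in general Kleene algebras):

```haskell
-- finite relations as lists of pairs
type Rel a b = [(a, b)]

comp :: Eq b => Rel a b -> Rel b c -> Rel a c
comp r s = [ (a, c) | (a, b) <- r, (b', c) <- s, b == b' ]

conv :: Rel a b -> Rel b a
conv r = [ (b, a) | (a, b) <- r ]

-- deterministic (a partial function): conv r ; r is below the identity
deterministic :: (Eq a, Eq b) => Rel a b -> Bool
deterministic r = all (\(b, b') -> b == b') (comp (conv r) r)

-- deterministic [(1,'x'),(2,'y')] == True
-- deterministic [(1,'x'),(1,'y')] == False   (1 points two ways)
```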
No abstract provided.
Ideally, all design properties for a problem should be implemented in a modular fashion based on functional decomposition. However, there are properties like error handling, optimisation and synchronisation which tend to crosscut the functional decomposition. The naive implementation of such properties leads to tangled programs. In the aspect-oriented programming context, these properties are called aspects, and special language constructs are supplied to enable a separation of concerns. Using an object-oriented core language, for example, an important class of aspects can be modelled via method call interception, that is, via language constructs for the selective interception of transitions from a caller to a callee, so that extra functionality can be superimposed on the actual method call. We study semantics of language extensions for aspect-oriented programming, with some emphasis on method call interception. The important outcome of this investigation is that language constructs which are already used in practice (without a semantics) can be defined in a way that enforces static type safety and enables separate compilation. Natural semantics is used to specify the meaning of the constructs. This is work in progress. A corresponding draft paper is available online.
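The flavour of method call interception can be suggested in a few lines of Haskell (a toy model of ours, far simpler than the typed object-oriented setting the talk studies): a wrapper superimposes extra behaviour on the transition from caller to callee.

```haskell
-- superimpose extra functionality (here: tracing) on an existing operation
intercept :: String -> (a -> IO b) -> (a -> IO b)
intercept name method arg = do
  putStrLn ("entering " ++ name)   -- before-advice
  result <- method arg             -- the actual call
  putStrLn ("leaving " ++ name)    -- after-advice
  return result

main :: IO ()
main = do
  let square n = return (n * n :: Int)
  r <- intercept "square" square 7
  print r
```

What the talk addresses, and this sketch does not, is giving such constructs a semantics in which the interception is selective, statically type-safe, and compatible with separate compilation.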
In Algorithmic Languages and Calculi I presented a metaprogramming calculus for describing program transformation application and general program manipulation and analysis. In our recent work at Teknowledge Corp. we have been extending PowerPoint to provide a software architecture design tool, called the Design Editor, that is used to construct style-specific architectures. System designers can then write analyzers of the graphical designs in logical terms pertinent to the problem domains that are supported by the specific style. The style takes on the role of a domain-specific language in the modern world of graphic interfaces. In order to write analyzers and apply graph transformations, I am interested in designing the analog of the metaprogramming calculus for graphical representations. In the calculus, the paramorphism on trees was identified as the fundamental operator for transformation application. At the meeting I conjectured that the somewhat tricky notion of a graph paramorphism should be the (infinite) limit, if it exists, of the sequence obtained by applying the corresponding tree paramorphism to successively deeper trees.
Amazingly enough, Jeremy Gibbons was able to write a Haskell program to define this paramorphism! (Be sure to see his talk later in the meeting.)
We present a `many-in-one' recursion scheme, called comonadic iteration, that neatly unifies a variety of recursion schemes on inductive types which otherwise look like diverging generalizations of the basic recursion scheme of iteration (fold). The scheme is doubly generic. In addition to being generic in a functor that captures the signature of an inductive type, it is also generic in a comonad and a distributive law which together determine the specific scheme. The comonad represents an abstraction of the different manners in which a recursion scheme makes use of the argument in each recursive step. For instance, some recursion schemes make a copy of the argument so as to make it available to the step function (primitive recursion). Others repeatedly deconstruct the argument so as to perform recursive calls on various subparts of it in a single step (course-of-value iteration). By specializing the new scheme for particular comonads we can obtain, among others, simple iteration and mild generalizations of primitive recursion and course-of-value iteration. This is joint work with Tarmo Uustalu and Varmo Vene. A paper is available online.
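Two of the schemes the comonadic construction unifies can be written directly (our concrete renditions for lists and naturals, not the doubly generic scheme itself):

```haskell
-- primitive recursion (paramorphism): the step function also sees a copy
-- of the original sublist, not just the recursive result
para :: (a -> [a] -> b -> b) -> b -> [a] -> b
para f e []       = e
para f e (x : xs) = f x xs (para f e xs)

-- course-of-value iteration on naturals: the step sees all earlier results
histo :: ([b] -> b) -> Int -> b
histo step n = head (go n)
  where go 0 = [step []]
        go k = let prev = go (k - 1) in step prev : prev

-- Fibonacci needs the two previous results in a single step
fib :: Int -> Integer
fib = histo step
  where step []          = 0
        step [_]         = 1
        step (a : b : _) = a + b

-- fib 10 == 55
```

In the paper's terms, the copied argument of `para` and the list of earlier results in `histo` are exactly the data supplied by the product comonad and the cofree comonad, respectively.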
Although undirected graphs can be modelled by symmetric edge relations, this is not always convenient. For instance, characterising cycle-freeness cannot be done in the same simple manner as in the directed case, by stipulating that the transitive closure of the edge relation does not intersect the identity relation. A remedy is the definition of a modified path concatenation that avoids paths with useless back-and-forth moves.
But even with that, the treatment of cycles is not convenient. Instead, we start with a general notion of a `part-of-relation'. In the underlying set a certain subset (e.g. that of elementary cycles) is marked as `undesired'; `good' elements are those which do not contain an undesired part. This leads to a matroid-like structure in which a general greedy algorithm for computing maximal good elements can be conveniently derived in an algebraic fashion.
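The greedy scheme can be suggested in a few lines (a deliberately naive sketch of ours, ignoring weights and the algebraic derivation the talk describes): grow a good element by considering candidates one at a time, keeping an addition only when the result stays good.

```haskell
-- generic greedy: 'good' says whether a set of parts contains no
-- undesired part; the result is a maximal good subset of the candidates
greedy :: ([a] -> Bool) -> [a] -> [a]
greedy good = foldl step []
  where step acc x = if good (x : acc) then x : acc else acc
```

For the graph application, the candidates would be edges and `good` an acyclicity test, recovering the familiar spanning-forest greedy algorithm; the matroid-like structure is what guarantees that this local strategy yields a maximal good element.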
No abstract provided.
A novel system for representing the rational numbers based on Hensel's p-adic arithmetic is proposed. The new scheme uses a compact variable-length encoding that may be viewed as a generalization of radix complement notation. It allows exact arithmetic, and approximate arithmetic under programmer control. It is superior to existing coding methods because the arithmetic operations take particularly simple, consistent forms. These attributes make the new number representation attractive for use in computer hardware. A paper is available.
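The underlying p-adic expansion is easy to compute (a sketch of ours of the standard digit-by-digit algorithm, not the paper's encoding): at each step pick the digit d that makes a - d*b divisible by p, then recurse on the quotient.

```haskell
-- p-adic digits of the rational a/b (b coprime to p), least significant first
padic :: Integer -> Integer -> Integer -> [Integer]
padic p a b = d : padic p ((a - d * b) `div` p) b
  where d = head [ e | e <- [0 .. p - 1], (a - e * b) `mod` p == 0 ]

-- take 8 (padic 2 1 3) == [1,1,0,1,0,1,0,1]
-- i.e. 1/3 is the eventually periodic 2-adic code ...0101011,
-- generalizing how -1 is the all-ones word in two's complement
```

A digit always exists because b is invertible modulo p, and a rational input always yields an eventually periodic digit stream, which is what makes a compact variable-length encoding possible.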
No abstract provided.
No abstract provided.
Circuit design can be done more effectively by describing the function that a circuit is intended to perform than by describing a circuit that is intended to perform that function. We use a standard programming language (Pascal, C, Java, ...) not to describe circuits, but to describe algorithms. We present two new ways to implement ordinary programs with logic gates. One, like imperative programs, has an associated memory to store state; the other, like functional programs, passes the state from one component to the next. The resulting circuits, which behave according to the programs, have the same structure as the programs. The timing uses calculated delays; synchronous circuits (global clock) can be programmed, as can asynchronous (self-timed) circuits. We give a formal semantics for both programs and circuits in order to prove our circuits correct. By simulation, we also demonstrate that the circuits perform favorably compared to others. A paper is available.
This talk described an experiment to try to determine the definitional power of nested datatypes in Haskell. The goal was to construct a nested datatype that defines the set of all lists whose length is a prime number. The idea behind the construction was to program the construction of types using Church encodings of numbers and booleans at the type level rather than at the value level. This was work done by Kevin Backhouse and Roland Backhouse, presented in Roland's absence by Jeremy Gibbons.

The experiment was unsuccessful. Its failure points to shortcomings and ad-hocery in the type definition mechanisms in Haskell. Difficulties occur because kinds in Haskell are monomorphic (and default to the kind * unless specific steps are taken by the programmer) rather than polymorphic. Moreover, bounded quantifications of polymorphic types are essential if Church encodings are to be used in a typed functional programming language, whether at value level or at type level.
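For contrast, a shape constraint that nested datatypes do capture easily (a standard example, not the prime-length construction the talk attempted): perfect trees, whose size the nesting forces to be a power of two.

```haskell
-- each Node doubles the element type, so a Perfect a holds exactly 2^n
-- elements for some n; no other sizes are expressible
data Perfect a = Leaf a | Node (Perfect (a, a))

-- note the type signature on go: polymorphic recursion requires it
size :: Perfect a -> Int
size t = go t 1
  where go :: Perfect b -> Int -> Int
        go (Leaf _) k = k
        go (Node p) k = go p (2 * k)

-- size (Node (Node (Leaf ((1,2),(3,4))))) == 4
```

Encoding "length is prime" would require arithmetic, not mere doubling, at the type level — which is where the monomorphic kinds and missing bounded quantification bite.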
Dave Wile gave a presentation earlier in the week on graph catamorphisms. He wanted to manipulate abstract syntax terms or object webs, each of which may be cyclic when represented as a graph. He proposed a notion of fold on graphs to capture computations over such structures. The Haskell script of these definitions is available.

This presentation shows that no special notion of fold is required in order to handle cyclic structures in a lazy language like Haskell. A cyclic structure can be represented as an ordinary datatype; but in order to distinguish between a finite cyclic structure and its infinite acyclic unfolding, we provide a unique identifier at each node. For simplicity in what follows, we assume a single source for the whole graph; but that is not essential.
Computing a fold on a cyclic structure involves finding a fixpoint, but that's just what a recursive definition in Haskell provides. I defined a fold that first generates the adjacency-list representation of a graph, then uses it to compute a table recording the result of the fold at each node. This is equal to the standard fold which treats the cyclic graph as an infinite tree; but the latter does not compute the result as a cyclic structure.
Since this fold is equivalent to the standard fold on the possibly infinite unfolding of the graph to a tree, the result is non-bottom on a graph only for folds which give non-bottom results on the corresponding tree. For example, you cannot use it directly to compute the `size' (that is, the number of distinct nodes) of a graph, because the `size' function is too strict.
The problem is that you are forced into computing the fixpoint via Haskell's recursive let, i.e. from bottom upwards. Sometimes you want to use a different method of computing a fixpoint. I defined a second kind of fold, which generates an infinite sequence of tables, each table giving the value at every node in terms of the values from the previous table. A parameter of this fold is the method for computing the `fixpoint' of that sequence of tables. Using this fold, you can compute the reachability sets of each node of the graph, which will be finite for a finite cyclic graph. Then the `size' of the graph is the size of the reachability set of the source node.

(Doaitse Swierstra observed that this could be made more efficient. Rather than computing a single infinite sequence of tables, one should compute an infinite sequence of values, one sequence for each node, or equivalently a single table of infinite sequences. The abstraction function for the data refinement is a zip or transpose between the sequence of tables and the table of sequences.)
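The first kind of fold can be sketched as follows (our reconstruction in the spirit of the talk, not the speaker's actual script): nodes carry unique identifiers, and a recursively defined table of results ties the knot.

```haskell
import qualified Data.Map as Map

-- nodes carry unique identifiers; a cycle is an ordinary Haskell knot
data Node a = Node { ident :: Int, label :: a, children :: [Node a] }

-- fold over a cyclic structure: one lazily, recursively defined table maps
-- each identifier to its result, so the fixpoint comes from recursive let
gfold :: (a -> [r] -> r) -> Node a -> r
gfold f root = table Map.! ident root
  where
    table = Map.fromList
      [ (ident n, f (label n) [table Map.! ident c | c <- children n])
      | n <- nodes ]
    nodes = collect [] root
    collect seen n
      | ident n `elem` map ident seen = seen
      | otherwise = foldl collect (n : seen) (children n)

-- a two-node cycle; its infinite unfolding is the stream 'a','b','a','b',...
example :: Node Char
example = a
  where a = Node 0 'a' [b]
        b = Node 1 'b' [a]

-- take 4 (gfold (\l rs -> l : head rs) example) == "abab"
```

As the text notes, only sufficiently lazy step functions terminate here: the strict `size` function needs the second fold, with its explicitly chosen fixpoint method.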