IFIP Working Group 2.1 -- Meeting #59 Details


Contents of this page:    Local information  |  Proposed topics for discussion  |  Proposed talks
Minutes of the meeting are now available (last updated 17th December 2004).


Local information

The 59th meeting of IFIP WG2.1 will take place from 6th to 10th September 2004, at Lenton and Wortley Hall at the University of Nottingham. The local organizer is Roland Backhouse, assisted by Fermin Reig. They have made detailed local information available. (Here is a local copy, in case their site is down.)


Proposed topics for discussion

None yet.


Proposed talks

The following talks have been proposed and will be ready for presentation at the start of the meeting. The first few talks may be selected from this list.

Sam Loyd's Chicken-Catching Problem (Roland Backhouse)
In the context of teaching algorithmic problem solving (see talk at Meeting #58) I was recently alerted to Sam Loyd's Chickens in the Corn problem.

The problem is interesting because combining invariants with a progress argument in a well-formulated algorithm to solve it is non-trivial.

I would like to discuss a solution to the problem. Others may have better solutions, which I would like to hear about.

Deriving the MEX sum (Roland Backhouse)
At the Rome meeting (#58), I showed how the specification of the MEX function on games is derived. In this talk, I will show how the MEX sum is derived. The derivation centres on the observation that MEX numbers form a vector space over GF(2) (the field of booleans with xor as addition and conjunction as multiplication); the problem is solved by constructing a basis of this space.

The talk describes recent joint work with Diethard Michaelis.
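By way of illustration (my own sketch, not the derivation in the talk), here are the MEX of a finite set of naturals, the Grundy value of a game position, and the MEX sum, which coincides with bitwise xor; the xor reflects the observation that these values form a vector space over GF(2). The function names are mine.

```haskell
import Data.Bits (xor)
import Data.List ((\\))

-- mex: the minimum excludant, the least natural number not in the list
mex :: [Int] -> Int
mex xs = head ([0 ..] \\ xs)

-- Grundy value of a position, given the moves available from it
grundy :: (a -> [a]) -> a -> Int
grundy moves = mex . map (grundy moves) . moves

-- the MEX sum of two independent games is the xor of their values
mexSum :: Int -> Int -> Int
mexSum = xor
```

For a nim heap of size n (moves to any smaller heap), `grundy` returns n itself, as Sprague-Grundy theory predicts.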

Calculating an Exceptional Machine (Graham Hutton)
At the last meeting, I showed how to verify a compiler for a simple language with exceptions. I have since discovered how to calculate, as opposed to verify, an abstract machine for this language. The key step is the use of John Reynolds's "defunctionalisation" technique from 1972, which seems to have been largely forgotten by the program transformation community, but has recently been re-popularised by Olivier Danvy. In this talk I will review defunctionalisation, and show how it can be used to calculate an abstract machine for exceptions that is both simple and efficient.
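As a reminder of the technique, here is a minimal sketch of defunctionalisation (the standard arithmetic-expression example, not the exception language of the talk): the two continuation lambdas of a CPS evaluator are replaced by constructors of a first-order data type plus an apply function, and the result is an abstract machine.

```haskell
data Expr = Val Int | Add Expr Expr

-- higher-order version: evaluation in continuation-passing style
evalCPS :: Expr -> (Int -> Int) -> Int
evalCPS (Val n)   k = k n
evalCPS (Add x y) k = evalCPS x (\m -> evalCPS y (\n -> k (m + n)))

-- defunctionalised: each lambda above becomes a constructor of Cont
data Cont = Halt | AddL Expr Cont | AddR Int Cont

eval :: Expr -> Cont -> Int
eval (Val n)   k = apply k n
eval (Add x y) k = eval x (AddL y k)

apply :: Cont -> Int -> Int
apply Halt       n = n
apply (AddL y k) m = eval y (AddR m k)
apply (AddR m k) n = apply k (m + n)

run :: Expr -> Int
run e = eval e Halt
```

The mutually recursive `eval`/`apply` pair is exactly an abstract machine: `Cont` is its control stack.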

Combinatorial Enumeration and Datatype Generics (Kieran Clenaghan)

Generic Views on Data Types (Johan Jeuring)
A generic function is defined by induction on the structure of types. The structure of a data type can be defined in several ways. For example, in PolyP a pattern functor gives the structure of a data type viewed as a fixed point, and in Generic Haskell a structural representation type gives an isomorphic type view of a data type in terms of sums of products. Depending on this generic view on the structure of data types, some generic functions are easier, more difficult, or even impossible to define. Furthermore, the efficiency of some generic functions can be improved by choosing a different view. This paper introduces generic views on data types, and shows why they are useful. Furthermore, it shows how to add new generic views to Generic Haskell, an extension of the functional programming language Haskell that supports the construction of generic functions. The separation between inductive definitions on type structure and generic views allows us to view many approaches to generic programming in a single framework.
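A toy illustration of one such view (my own sketch, not Generic Haskell syntax): lists seen through a sum-of-products representation, with a function defined via the isomorphism.

```haskell
-- the sum-of-products view of lists: [a] is isomorphic to Either () (a, [a])
type ListRep a = Either () (a, [a])

fromList :: [a] -> ListRep a
fromList []       = Left ()
fromList (x : xs) = Right (x, xs)

toList :: ListRep a -> [a]
toList (Left ())       = []
toList (Right (x, xs)) = x : xs

-- a function defined on the representation rather than on lists directly
sizeRep :: ListRep a -> Int
sizeRep (Left ())       = 0
sizeRep (Right (_, xs)) = 1 + sizeRep (fromList xs)
```

A generic programming system derives such representations, and functions over them, for every data type; the choice of representation (fixed point, sum of products, ...) is precisely the "view".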

Exact on-line list labelling (Richard Bird)

The Celebrity Clique problem (Richard Bird)

A functional quantum programming language (Thorsten Altenkirch)

Designing databases with 'simplicity' (José Nuno Oliveira)
This talk will address my attempts to replace current semi-formal database schema design (à la Codd, based on normalization, etc.) by an inequational calculus based on simple (dually, injective) relations and their transposes. Many results generalize to other relations, the 'simple' ones being more relevant in practice just because database entities can always be modelled as finite simple relations. I find it much "simpler" to use this (pointfree) calculus than the standard theory. It includes a generic result which enables the refinement of recursive data models and which is the basis for an XML <-> SQL conversion tool which we are currently developing at Minho in Haskell/Strafunski.
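A toy rendering of the central notion (my own, not Oliveira's calculus): a simple relation relates each source to at most one target, so a finite simple relation can be modelled as a Data.Map; its transpose is in turn representable as a map exactly when the relation is injective.

```haskell
import qualified Data.Map as M
import Data.List (nub)
import Data.Tuple (swap)

-- a finite simple relation: each key relates to at most one value
type Simple a b = M.Map a b

-- a simple relation is injective when no two keys share a value;
-- equivalently, its transpose is itself simple
injective :: Eq b => Simple a b -> Bool
injective m = vs == nub vs  where vs = M.elems m

-- transpose of an injective simple relation, swapping keys and values
transpose :: Ord b => Simple a b -> Simple b a
transpose = M.fromList . map swap . M.toList
```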

Online, Linear, Functional Pretty Printing (A non-algebraic derivation) (Doaitse Swierstra)
We present an implementation of Oppen's Pretty Printing solution in Haskell. The algorithm is linear in the size of the input, independent of the line width, uses limited look ahead, and has online behaviour.

Type Safe, Self Inspecting Code (Doaitse Swierstra)
We present techniques for representing typed abstract syntax trees in the presence of observable recursive structures. The need for this arose from the desire to cope with left-recursion in combinator-based parsers. The techniques employed can be used in a much wider setting, however, since they enable the inspection and transformation of any program structure that contains internal references. The hard part of the work is to perform such analyses and transformations in a setting in which the Haskell type checker is still able to statically check the correctness of the program representations, and hence the type correctness of the transformed program.

fifol (language for queue machine) (Eiiti Wada)
fifol (first in first out language) is similar to PostScript, except that fifol is a queue machine whereas PostScript is a stack machine: operations are applied to items removed from the front of the current fifo, and the result is inserted at the rear.

Programming in fifol is harder than for a stack machine. Why? I have written a few programs in fifol; let me share my experience.

Some notes are available.
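A hypothetical sketch of such a queue machine in Haskell (the instruction set here is invented for illustration, not Wada's actual fifol syntax): operands are dequeued from the front and results enqueued at the rear.

```haskell
data Instr = Push Int | AddQ | MulQ

-- execute one instruction against the current fifo (front of list = front of queue)
step :: [Int] -> Instr -> [Int]
step q       (Push n) = q ++ [n]          -- enqueue a literal at the rear
step (x:y:q) AddQ     = q ++ [x + y]      -- dequeue two operands, enqueue the sum
step (x:y:q) MulQ     = q ++ [x * y]      -- dequeue two operands, enqueue the product
step _       _        = error "queue underflow"

runQueue :: [Instr] -> [Int]
runQueue = foldl step []
```

The difficulty Wada describes is visible even here: intermediate results migrate to the rear of the queue rather than staying on top as on a stack, so operand order must be planned globally.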

Transformation by Interpreter Specialisation (Neil Jones)
A program may be transformed by specialising an interpreter for the language in which it is written. Efficiency of the transformed program is determined by the efficiency of the interpreter's dynamic operations; the efficiency of its static operations is irrelevant, since all will be "specialised away". This approach is automatic (as automatic as the specialiser is); general, in that it applies to all of the interpreter's input programs; and flexible, in that a wide range of program transformations are achievable since the transformed program inherits the style of the interpreter. The chief practical challenge is understanding cause and effect: How should one write the interpreter so that specialisation produces efficient transformed programs? The core of this paper is a series of examples, both positive and negative, showing how the way the interpreter is written can influence the removal of interpretational overhead, and thus the efficiency and size of the target programs obtained by specialisation.
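A toy illustration of the idea (my own, not an example from the paper): an interpreter for a tiny expression language, together with the residual program one would expect a specialiser to produce for a fixed source program. The dispatch on syntax is static and disappears; only the dynamic arithmetic remains.

```haskell
-- a tiny language: literals, one input variable, and addition
data E = Lit Int | Var | Plus E E

-- interpreter: the program is static, the input x is dynamic
interp :: E -> Int -> Int
interp (Lit n)    _ = n
interp Var        x = x
interp (Plus a b) x = interp a x + interp b x

-- a fixed source program: \x -> x + (x + 1)
prog :: E
prog = Plus Var (Plus Var (Lit 1))

-- residual program after specialising interp to prog: all case
-- analysis on E has been performed; only dynamic additions remain
residual :: Int -> Int
residual x = x + (x + 1)
```

`interp prog` and `residual` compute the same function, but `residual` pays no interpretational overhead, which is why only the interpreter's dynamic operations matter for the efficiency of the transformed program.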


Jeremy Gibbons (email: Jeremy.Gibbons@comlab.ox.ac.uk) - August 2004