Minutes of Meetings of the Algebra of Programming Research Group in 1997
-
The schedule for Hilary Term 1997 was as follows:
-
- Week 1: Jan 24, Satnam Singh, University of Glasgow
- Circuit Design in HaskellHard.
- Week 2: Jan 31, Michael Spivey, University of Oxford
- Deriving a hardware compiler from operational semantics.
- Week 3: Feb 7, Michael Schenke, University of Oxford
- Using ProCos methods in hardware design.
- Week 4: Feb 14, Mark Jones, University of Nottingham
- First-class polymorphism with type inference.
- Week 5: Feb 21, (no seminar)
- IFIP conference.
- Week 6: Feb 28, Sharon Curtis, University of Oxford
- Dynamic Programming
- Week 7: Mar 7, Jeremy Gibbons, Oxford Brookes
- List theory speeds up parallel tree algorithms.
- Week 8: Mar 14, Joe Stoy, University of Oxford
- Lambda-S: A calculus with sharing and side-effects
-
The schedule for Trinity Term 1997 was as follows:
-
- Week 0: April 21, Andreas Hamfelt, University of Uppsala
- Towards a logic programming methodology based on higher-order predicates.
time: 12:30, place: 147
- Week 1: May 2, Keith Hanna, University of Kent
- Reasoning about real digital systems.
time: 12:30, place: 147 (hardware compilation seminar)
- Week 2: May 9, Bill McColl, University of Oxford
- BSP Algorithm Design.
- Week 3: May 16, Richard Bornat, Queen Mary
- Cuts, forward proof and equational reasoning in Jape.
- Week 4: May 23, Tony Hoare, University of Oxford
- Unifying theories of parallel programming.
- Week 5: May 30, Matteo Vaccari, University of Milan
- Relational derivation of regular expression recognizing circuits.
- Week 6: June 6, Graham Megson, University of Reading
- Regular Arrays: Old Ideas - New Perspectives.
- Week 7: June 13, Richard Bird, University of Oxford
- A Polytypic Pattern-Matching Algorithm.
- Week 8: June 20, Netty van Gasteren, Eindhoven University of Technology
- On Calculation
- Week 1:
Jan 24, Satnam Singh, University of Glasgow
- Circuit Design in HaskellHard.
This talk describes our experience of using the Haskell lazy functional
programming language to describe and actually generate real circuits.
`HaskellHard' is the name of a collection of Haskell modules that
provide a convenient way to describe digital circuits using combinators
to capture behaviour and layout. The choice of combinators is based on those
found in the Ruby relational hardware description language designed by
Geraint Jones and Mary Sheeran. Graphics circuits have been designed for use
in an EPSRC project that is developing hardware to accelerate PostScript
rendering.
Circuits can be simulated or symbolically evaluated using the Haskell
class system. This allows us to have many interpretations for one description.
We can generate output in the industry standard hardware description language
VHDL,
or in EDIF for entry into conventional CAD tools to allow us to realise our
designs on Field Programmable Gate Arrays (FPGAs). FPGAs are Lego-like
circuits that can be rapidly reconfigured at run-time to assume new logical
operations. We use the Xilinx XC6200 series of FPGAs.
We shall relate our approach to the Handel language which has been very
successfully used by Ian Page's group.
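As background to the "many interpretations" point, here is a minimal sketch
of the idea in Haskell (our own toy class, not the HaskellHard API): one
overloaded circuit description, with one instance for simulation and another
for generating netlist-like text.

  {-# LANGUAGE FlexibleInstances #-}

  -- a toy signal class; the names inv, and2, nand2 are ours
  class Circuit sig where
    inv  :: sig -> sig
    and2 :: sig -> sig -> sig

  -- interpretation 1: simulation on booleans
  instance Circuit Bool where
    inv  = not
    and2 = (&&)

  -- interpretation 2: netlist-like text
  instance Circuit String where
    inv a    = "not(" ++ a ++ ")"
    and2 a b = "and(" ++ a ++ "," ++ b ++ ")"

  -- one description, two interpretations
  nand2 :: Circuit sig => sig -> sig -> sig
  nand2 a b = inv (and2 a b)

Thus nand2 True False simulates the gate, while nand2 "x" "y" evaluates to
the string not(and(x,y)).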
- Week 2:
Jan 31, Michael Spivey, University of Oxford
- Deriving a hardware compiler from operational semantics.
Recent work on hardware compilers at Oxford has centred on the idea of
using a single programming language to describe both hardware and
software designs. Because we contemplate a wide range of
implementations for our language, including high and low-level
simulators as well as production compilers into hardware and software,
it becomes important to have an implementation-independent account of
the meaning of the language and a disciplined way of developing
implementations that are faithful to this meaning.
I shall report on an approach to deriving correct compilers that
starts with operational semantics. By developing a data
representation for the abstract states in the semantics, it becomes
possible to represent the action of the program by a set of transition
rules. The many rules that govern different stages in the program's
execution can be merged into a single schematic rule, and this single
rule may be re-interpreted as a design for a hardware implementation
of the program's control and a specification for a datapath that
supports it.
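To illustrate the flavour with a toy of our own (not the derivation in the
talk): once the abstract states have a concrete data representation, the
per-construct transition rules collapse into a single step function, which is
the shape that a control unit plus datapath implements.

  -- toy machine: state is (program counter, accumulator)
  data Instr = Inc | Dec | Jz Int
  type State = (Int, Int)

  -- each case is one transition rule; the single definition of
  -- step plays the role of the merged schematic rule
  step :: [Instr] -> State -> State
  step prog (pc, acc) = case prog !! pc of
    Inc  -> (pc + 1, acc + 1)
    Dec  -> (pc + 1, acc - 1)
    Jz t -> if acc == 0 then (t, acc) else (pc + 1, acc)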
- Week 3:
Feb 7, Michael Schenke, University of Oxford
- Using ProCos methods in hardware design.
A key issue in the ProCoS project was to bridge different levels of abstraction in a
development process, such as requirements analysis, specification, and implementation. A
combination of several description techniques each of which is best suited for one
particular level of abstraction raises the risk of introducing errors when crossing the
interfaces between different levels. An alternative is the use of
only one mathematical model for all levels.
Such a solution has been aimed at in ProCoS, and this idea has been applied to hardware
design. The specification level is represented by the occam-based language
Handel, the implementation level is given by gates, multiplexers etc. On top of
Handel we set an interval temporal logic, the duration calculus (DC), which allows one
to express properties of specifications and implementations. All levels are based on the
same semantical model, so-called timing diagrams.
Between the levels an implementation relation is defined, basically set inclusion but
with a mathematical twist which will be explained. A combination of a set of CSP-like
algebraic and other ``obvious'' refinement rules allows one to prove properties
of implementations.
The mathematical
analysis of the development method is influenced by the following observations:
1.) A Handel version, which combines the occam-like point-to-point
communication with communication via shared variables, has been investigated.
It is very difficult to find a denotational semantics for a language which has
such complex parallelism.
2.) During the design synthesis transformations, timing nondeterminism is
retained as long as possible so that, for example, decisions for a parallel or sequential
implementation of the specifications can be postponed to a later stage.
This has the consequence that the interdependencies between the timing
of the parallel and sequential composition inevitably become very complex.
3.) Any approach where the semantics of all abstraction levels is expressed in a
uniform model leads inevitably to a certain degree of complexity in the description of
the operators.
The approach will be illustrated by the example of a Bus Arbitration Circuit.
- Week 4:
Feb 14, Mark Jones, University of Nottingham
- First-class polymorphism with type inference.
Languages like ML and Haskell encourage the view of values as
first-class entities that can be passed as arguments or
results of functions, or stored as components of data structures.
The same languages offer parametric polymorphism, which
allows the use of values that behave uniformly over a range of
different types. But the combination of these features is not
supported---polymorphic values are not first-class. This
restriction is sometimes attributed to the dependence of such
languages on type inference, in contrast to more
expressive, explicitly typed languages, like System F, that do
support first-class polymorphism.
In this talk, we use relationships between types and logic to develop
a type system, FCP, that supports first-class polymorphism, type
inference, and also first-class abstract datatypes. The immediate
result is a more expressive language, but there are also long term
implications for language design.
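The datatype-based encoding proposed by FCP survives in today's Haskell; as a
minimal sketch (assuming the RankNTypes extension; the names are ours):

  {-# LANGUAGE RankNTypes #-}

  -- the field keeps its quantifier, so a polymorphic value becomes a
  -- first-class entity that can be stored and passed around
  newtype PolyId = MkPolyId (forall a. a -> a)

  -- the one stored function is used at two different types
  useTwice :: PolyId -> (Int, Bool)
  useTwice (MkPolyId f) = (f 3, f True)

  main :: IO ()
  main = print (useTwice (MkPolyId id))   -- prints (3,True)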
- Week 6:
Feb 28, Sharon Curtis, University of Oxford
- Dynamic Programming.
Dynamic programming has long been used as an algorithm design
technique, with various mathematical theories proposed to model it.
In this talk, a different perspective will be examined,
using the relational calculus to model both problems and their solutions
obtained from dynamic programming. This approach serves to shed new light on
the different styles of dynamic programming, representing them
by different search strategies of the tree-like space of
partial solutions.
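For concreteness, here is one standard dynamic program in Haskell (a textbook
instance of our own choosing, not the relational treatment of the talk); the
talk's perspective concerns when and how such tabulations of partial
solutions are correct.

  import Data.Array

  -- minimum number of coins summing to n, tabulating subproblems
  minCoins :: [Int] -> Int -> Maybe Int
  minCoins coins n = table ! n
    where
      table = listArray (0, n) (map best [0 .. n])  -- lazy table
      best 0 = Just 0
      best k = case [ m + 1 | c <- coins, c <= k
                            , Just m <- [table ! (k - c)] ] of
                 [] -> Nothing
                 ms -> Just (minimum ms)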
- Week 7:
Mar 7, Jeremy Gibbons, Oxford Brookes
- List Theory Speeds Up Parallel Tree Algorithms.
A downwards accumulation is an operation on trees,
distributing information downwards from the root towards the
leaves; every node in the tree is replaced by some function h
of its ancestors. A simple parallel algorithm computes a
downwards accumulation in time proportional to the depth of
the tree.
Efficiency considerations suggest that the function h
should be evaluated from top to bottom on the path of
ancestors; manipulability considerations suggest bottom to
top. These two considerations, together with Bird's Third
Homomorphism Theorem (stating that a function on lists that
can be evaluated both from left to right and from right to
left can be evaluated according to an arbitrary
parenthesization of the list), lead to a parallel algorithm
for downwards accumulations running in time proportional to
the logarithm of the depth of the tree.
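As a specification (our own naive Haskell rendering, not the parallel
algorithm itself), a downwards accumulation replaces every node by h applied
to the path of labels from the root to that node:

  data Tree a = Leaf a | Node a (Tree a) (Tree a)

  -- naive version: time proportional to the sum of path lengths
  downAcc :: ([a] -> b) -> Tree a -> Tree b
  downAcc h = go []
    where
      go path (Leaf x)     = Leaf (h (path ++ [x]))
      go path (Node x l r) = let p = path ++ [x]
                             in Node (h p) (go p l) (go p r)

The parallel algorithm of the talk applies when h can be evaluated in either
direction along the path, which is where the Third Homomorphism Theorem
enters.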
- Week 0:
April 21, Andreas Hamfelt, University of Uppsala
- Towards a logic programming methodology based on higher-order predicates.
We outline a logic programming methodology which applies standardized logic
program recursion forms afforded by a system of general purpose recursion
schemes. The recursion schemes are conceived of as quasi higher-order
predicates which accept predicate arguments, thereby representing
parameterized program modules. This use of higher-order predicates is
analogous to higher-order functionals in functional programming. However,
these quasi higher-order predicates are handled by a metalogic programming
technique within ordinary logic programming.
Some of the proposed recursion operators are actualizations of mathematical
induction principles (e.g. structural induction as generalization of
primitive recursion). Others are heuristic schemes for commonly occurring
recursive program forms. The intention is to handle all recursions in logic
programs through the given repertoire of higher-order predicates.
We carry out a pragmatic feasibility study of the proposed recursion
operators with respect to the corpus of common textbook logic programs.
This pragmatic investigation is accompanied with an analysis of the
theoretical expressivity. The main theoretical results concerning
computability are:
- Primitive recursive functions can be re-expressed in logic programming
by predicates defined solely by non-recursive clauses augmented with a fold
recursion predicate akin to the fold operators in functional programming
(see the sketch after this list).
- General recursive functions can be re-expressed likewise since fold
allows re-expression of a linrec recursion predicate facilitating linear,
unbounded recursion.
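As a reminder of why a fold suffices for primitive recursion (a Haskell aside
of ours, not the paper's metalogic encoding), primitive recursion on the
naturals is exactly a fold:

  -- fold over the naturals: z interprets 0, s interprets successor
  foldNat :: a -> (a -> a) -> Int -> a
  foldNat z _ 0 = z
  foldNat z s n = s (foldNat z s (n - 1))

  -- non-recursive definitions on top of the fold
  plus, mult :: Int -> Int -> Int
  plus m = foldNat m (+ 1)     -- apply (+1) n times to m
  mult m = foldNat 0 (plus m)  -- add m to 0, n times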
- Week 1:
May 2, Keith Hanna, University of Kent
- Reasoning about real digital systems.
The relation between the structure and the behaviour of a digital circuit
formed from ideal gates is simple: the behaviour is described by a term
whose intension corresponds to the structure of the circuit. In practice,
however, digital designers often make use of adirectional configurations such
as tristate busses or devices such as pass transistors. For this more general
class of non-ideal digital systems, the relation between structure and
behaviour is less straightforward. In particular, it is generally necessary
to take account of both voltages and currents and it may also be necessary
at times to reason at the analogue level of abstraction.
In this talk I will describe how, using higher-order logic, the behavioural
specifications of non-ideal digital systems can be expressed in a way that
is rigorous and yet intuitive. In formulating such specifications, and in
describing the abstraction relation between analogue and digital levels, it
is necessary to pay careful attention to describing "non-digital" or
"don't care" values. Here, category theory can provide some useful insight.
Given behavioural specifications of the component devices in a system,
useful properties of the system, such as correctness or compositionality, can
be determined either by formal deduction or, in many cases, by using
decision procedures based on constraint programming.
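One way to picture the "non-digital" values (a toy model of our own devising,
not Hanna's formulation) is a four-valued signal domain in which joining the
drivers of a tristate bus can produce a conflict value:

  -- Z: undriven; X: conflict, the "non-digital" value
  data Signal = Z | Low | High | X  deriving (Eq, Show)

  -- joining two drivers: Z is the identity, and disagreeing
  -- drivers yield X
  bus :: Signal -> Signal -> Signal
  bus Z s = s
  bus s Z = s
  bus a b = if a == b then a else X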
- Week 2:
May 9, Bill McColl, University of Oxford
- BSP Algorithm Design.
I will show how to design BSP algorithms which are efficient in terms of
computation, communication and synchronisation (and in many cases optimal with
respect to one or more of these three basic complexity measures.) The problems
considered will include various standard matrix computations such as
matrix-vector product, matrix multiplication, LU decomposition, and the
algebraic path problem.
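For reference, the standard BSP cost model (general background, not a result
of the talk) charges a superstep on $p$ processors
$$\max_{i<p} w_i + g \cdot \max_{i<p} h_i + l,$$
where $w_i$ is the local computation performed by processor $i$, $h_i$ the
number of words it sends or receives, $g$ the per-word communication cost,
and $l$ the barrier synchronisation cost; the three terms are exactly the
three complexity measures above.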
- Week 3:
May 16, Richard Bornat, Queen Mary and Westfield
- Unification, provisos and substitution: animating proof-on-paper in Jape.
Jape is a proof calculator which allows its users to make steps of
inference which they might make on paper or blackboard, using the same
rules, the same syntax and the same logical provisos that they would use on
paper or blackboard. Jape appears on the surface to be very simple, but to
preserve this illusion it is necessary for it to exploit the properties of
substitution and interactions between unification, substitution and
provisos. Ingenious use of provisos allows deferred decisions about
unification and thereby inference steps; use of hidden provisos allows it
to interpret predicate notation. The talk will reveal the algorithms that
are used by Jape to exploit the semantics of logical provisos, while
keeping them as far as possible hidden from the user's eye.
- Week 4:
May 23, Tony Hoare, University of Oxford
- Unifying theories of parallel programming.
Consider a spreadsheet whose rows are summed by an operator
$\oplus$ and whose columns are
summed by a different operator $\otimes$. The value at the bottom right
corner can be computed
either as the rowsum of the bottom row of columnsums or as the
columnsum of the rightmost
column of rowsums. The spreadsheet principle (a.k.a. the interchange or
abides law) states that the value is the same, independent of the manner of
computation. How does this principle apply to theories of
parallel programming?
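Spelling out the two-by-two case: for a spreadsheet with entries $a, b, c, d$,
the principle is the interchange (abides) law
$$(a \oplus b) \otimes (c \oplus d) = (a \otimes c) \oplus (b \otimes d).$$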
Consider parallel execution of a pair of processes $P$ and $Q$ which
update a single shared variable $m$ by actions of the form $m:= m \oplus
e$. But run them instead on private copies of $m$, called $O.m$ and
$I.m$. When they both have terminated, they usually offer different final
values. These values must then be merged to a single final value by
some operation $M$, for example
$$m := m \oplus (x \otimes y) \quad \mbox{where}\quad O.m = m \oplus x \mbox{ and } I.m = m \oplus y, \mbox{ for some } x, y.$$
This account of parallel composition is compositional: properties of the
program are derived from
those of its component processes.
But in a practical implementation it is usually more efficient to maintain a
single copy of $m$, and
update it by some interleaving of actions contributed by the two
processes. Such an implementation is
best described by algebraic laws which look like an operational
semantics, e.g.,
$$(m:= m \oplus e; (P \| Q)) \Rightarrow (m:= m \oplus e; P) \| Q$$
The validity of the algebraic laws, and so the consistency of the efficient
implementation with the conceptual one, are critically dependent on the
spreadsheet principle and its non-deterministic extensions.
The theory applies directly to a broad family of parallel programming
paradigms, including BSP,
SCCS, ACP, and CSP.
- Week 5:
May 30, Matteo Vaccari, University of Milan
- Relational derivation of regular expression recognizing circuits.
The goal is to obtain a syntax-directed translation from regular
expressions to Ruby-style circuits. Our first design produces
circuits that are essentially functional programs. This design is
then modified by means of standard techniques such as retiming,
slowdown and pipelining, so as to obtain faster circuits. Different
designs can be obtained, with performance optimized for regular
expressions where "sequence" predominates over "choice", or the other
way around.
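To fix ideas, here is a stream-function sketch of such a translation in
Haskell (our own rendering, standing in for the Ruby relations of the talk):
each construct becomes a small circuit, with one delay (register) per symbol.

  data Regex c = Sym c | Seq (Regex c) (Regex c) | Alt (Regex c) (Regex c)

  -- a register: delays its input stream by one clock tick
  delay :: [Bool] -> [Bool]
  delay = (False :)

  -- compile r inp starts: starts!!t means "a match may begin at tick t";
  -- the output at tick t+1 means "a match of r just ended"
  compile :: Eq c => Regex c -> [c] -> [Bool] -> [Bool]
  compile (Sym c)   inp starts = delay (zipWith (\s x -> s && x == c) starts inp)
  compile (Seq r s) inp starts = compile s inp (compile r inp starts)
  compile (Alt r s) inp starts = zipWith (||) (compile r inp starts)
                                              (compile s inp starts)

For example, compile re inp (True : repeat False) recognizes matches of re
anchored at the start of inp; Kleene star, which needs a feedback loop, is
omitted from the sketch.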
- Week 6:
June 6, Graham Megson, University of Reading
- Regular Arrays: Old Ideas - New Perspectives.
Since the introduction of Systolic Computation (circa 1979)
there has been much development work on the idea of regular
computations and the growth of a considerable volume of
literature on synthesis techniques. Nevertheless the
concept and exploitation of regular array computations
seemed to be destined for niche markets and the fruit
seemed set to wither on the vine.
More recently synthesis has begun to merge with compiler
technology. The drop in cost performance of modern devices,
the arrival of low cost FPGA devices, coupled with the
growing area of hardware-software co-design have breathed
new life into the area. This lecture will outline these
established techniques and present a new philosophy towards
their application. Several examples from Genetic Algorithms
to Data Compression will be considered.
- Week 7:
June 13, Richard Bird, University of Oxford
- Polytypic Pattern Matching.
A polytypic program is a program that is parameterised by
one or more datatypes. Such programs express the deep
structure of a whole class of algorithms, and may prove
to be a key idea in the area of software reuse. In this
talk we will consider the pattern-matching problem as
an exemplar of the polytypic approach.
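In modern Haskell terms, the general idea can be rendered as follows (our own
minimal sketch, not the algorithm of the talk): a polytypic program abstracts
over the shape functor of a datatype, so one definition serves every datatype
of the form Fix f.

  {-# LANGUAGE DeriveFunctor #-}

  -- datatypes as fixed points of their shape functors
  newtype Fix f = In (f (Fix f))

  -- the generic fold, written once for all shapes f
  cata :: Functor f => (f a -> a) -> Fix f -> a
  cata phi (In x) = phi (fmap (cata phi) x)

  -- instantiating the scheme at the shape of cons-lists
  data ListF a r = NilF | ConsF a r  deriving Functor

  size :: Fix (ListF a) -> Int
  size = cata alg
    where alg NilF        = 0
          alg (ConsF _ n) = 1 + n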
- Week 8:
June 20, Netty van Gasteren, Eindhoven University of Technology
- On Calculation.
We all know that, in principle,
a calculational style of reasoning can greatly contribute to
the trustworthy design of programs and proofs.
The question is: what is an effective calculation and,
more importantly, how can we make our calculations effective?
In this talk we will try to answer this question. We will
illustrate the choices and problems that face
the calculating computing scientist, by means
of examples taken from the literature.
We will also discuss various ways in which we can
influence and improve the quality of our calculations.
Oege de Moor
(oege@comlab.ox.ac.uk)