Both the logic and the evolutionary programming paradigms can be
described in a C*-algebraic framework. Therefore, the relation between
logic programs and neural networks will be investigated based on this 
uniform model. For example: on the one side, there is Girard's
representation of (linear) proofs as C*-operators and his Execution
Formula corresponding to proof normalization [Girard, 1995].
On the other side, we have an operator-algebraic model of neural networks,
where (in simple examples) the encoding of patterns is achieved by
computing the so-called Moore-Penrose pseudo-inverse [Kohonen, 1989].
Both procedures, however, are applications of the so-called Neumann
series, which computes the inverse of a C*-operator [Murphy, 1990].
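The common thread can be stated compactly. Assuming an element T of a C*-algebra with norm strictly below one (the standing assumption of this sketch), the Neumann series expresses the inverse of 1 - T as a convergent geometric series:

```latex
(1 - T)^{-1} \;=\; \sum_{n=0}^{\infty} T^{n}, \qquad \|T\| < 1 .
```

Girard's Execution Formula sums over powers of an operator in just this way, and the pseudo-inverse computations used for pattern encoding rest on the same expansion; this is the sense in which both procedures instantiate one algebraic mechanism.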
The time schedule for the project (as sketched below) divides the work into roughly two 12-month phases: the first will concentrate on the development of an intermediate operator-algebraic model; the second will exploit and apply this approach to develop efficient translation and comparison methods between logic and evolutionary computing.
 Phase 1 (Modelling) -- following an initial start-up period --
is devoted to the development of a coherent C*-algebraic bridge between
logic and evolutionary programming. We first investigate C*-models for
both frameworks: from (constraint) logic programs to C*-operators --
the starting point being Girard's Geometry of Interaction -- and,
based on our own work, from neural networks to C*-algebras.
This modelling phase is accompanied by a comparison of the different
objects: logic programs (proofs), C*-algebras, and neural networks.
How do certain characteristics translate? Which notions correspond to
each other? For example: Girard's Execution Formula, the Neumann series,
and the Moore-Penrose pseudo-inverse are all related entities,
corresponding to proof normalization, operator inversion, and pattern
encoding, respectively. Or: how are the Apt-Pedreschi-Bezem level
mapping for proving termination of Prolog programs [Apt, 1995],
the Lyapunov function characterizing the stability of dynamical
systems [Hirsch and Smale, 1974], and the energy function for investigating
convergence in neural networks [Aarts and Korst, 1989] related?
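To illustrate the pattern-encoding end of this correspondence, the following sketch (in Python with NumPy; the toy patterns and all variable names are our own illustrative choices, not part of the proposal) builds a Kohonen-style linear associative memory whose weight matrix is computed from the Moore-Penrose pseudo-inverse:

```python
import numpy as np

# Illustrative linear associative memory (Kohonen-style):
# columns of X are input patterns, columns of Y the associated outputs.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
Y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Optimal linear encoding: W = Y X^+ , where X^+ is the
# Moore-Penrose pseudo-inverse of the input-pattern matrix.
W = Y @ np.linalg.pinv(X)

# Recall: applying W to a stored input reproduces its associated output.
recalled = W @ X[:, [0]]
print(np.allclose(recalled, Y[:, [0]]))  # prints: True
```

When the input patterns are linearly independent, W X = Y holds exactly and every stored association is recalled perfectly; otherwise the pseudo-inverse yields the least-squares optimal encoding.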
 Phase 2 (Translation) will concentrate on the transformation
between neural networks and logic computing (via C*-algebras) and vice
versa. First, the theoretical foundations of such translations are
investigated; after that, the practical implementation of automated
translation will follow.
The final task is to develop a prototypical system for the automated translation of logic programs (or proofs) into neural networks. In linear logic this would be the passage from proof nets to neural nets. From the neural network perspective this can be interpreted as an initialization method; from the point of view of computational logic it means integrating optimization algorithms into the resolution mechanism. A special case, namely the translation to Boltzmann Machines (i.e. simulated annealing for combinatorial optimization), could be of particular interest.
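As a hint of what the Boltzmann Machine special case involves, the following minimal sketch (Python; the weight matrix, cooling schedule, and all names are illustrative assumptions, not the proposal's method) minimizes a Boltzmann-machine-style energy by simulated annealing:

```python
import math
import random

# Boltzmann-machine-style energy E(s) = -1/2 * s^T W s
# over states s in {-1, +1}^n; W symmetric with zero diagonal.
random.seed(0)
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]
n = len(W)

def energy(s):
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

s = [random.choice([-1, 1]) for _ in range(n)]
T = 2.0
while T > 0.01:
    i = random.randrange(n)
    # Energy change caused by flipping unit i.
    dE = 2 * s[i] * sum(W[i][j] * s[j] for j in range(n))
    # Metropolis rule: accept downhill moves always,
    # uphill moves with probability exp(-dE / T).
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]
    T *= 0.99  # geometric cooling schedule

print(s, energy(s))
```

The Metropolis acceptance step used here is one standard choice; the stochastic unit update of a Boltzmann Machine is a close variant, and annealing the temperature toward zero turns the network into a combinatorial optimizer.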