The objectives of the proposed project are to compare discrete, logic-based constraint satisfaction with continuous optimization strategies, and to design a framework for the integration of these two paradigms. We propose to take constraint logic programming as an exemplar of the former paradigm and neural networks as an exemplar of the latter. Concretely, we intend to illustrate our theories by designing and constructing a translator of logic programs into neural networks.
In the world of logic computation the central (virtual) data structure is the proof. The process of constructing a proof (tree) entailing a certain statement can be seen as the essence of logic programming. Moreover, by the Curry-Howard correspondence, proof normalization corresponds to evaluation in the λ-calculus, i.e. functional programming. In the neural network approach the main constructive algorithmic aspect is the training of the network: it optimizes the (weight) structure of the network until a certain desired behaviour is reached. The evaluation of a neural network is in general an iterative process that applies the encoded knowledge to some input pattern.
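To make the algorithmic side of the neural paradigm concrete, consider the following minimal sketch (a toy illustration only; the task, learning rate, and iteration count are arbitrary choices, not part of the proposal): gradient descent constructs a weight configuration realizing a desired behaviour, and evaluation then applies the encoded knowledge to input patterns.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: learn boolean OR; X holds the input patterns, y the targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 1.0])

    w = rng.normal(scale=0.1, size=2)   # the (weight) structure being optimized
    b = 0.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Training: iteratively adjust the weights along the error gradient
    # until the desired input/output behaviour is reached.
    for step in range(5000):
        out = sigmoid(X @ w + b)
        delta = (out - y) * out * (1.0 - out)   # gradient of the squared error
        w -= 0.5 * (X.T @ delta)
        b -= 0.5 * delta.sum()

    # Evaluation: apply the encoded knowledge to the input patterns.
    print(np.round(sigmoid(X @ w + b)))   # expected: [0. 1. 1. 1.]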
The continuous approach with neural networks enables efficient optimization strategies (e.g. gradient techniques), but at the price of expressiveness. The represented knowledge is hard to formulate in linguistic terms: it is difficult, for example, to relate a given weight configuration to some simple (logical) rules. This makes it hard to initialize neural networks -- which would be important, as their training can become extremely expensive -- and to extract knowledge from a trained network in order to examine it more closely or to transfer it to other systems.
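That such a relation can be made transparent when the weights are constructed from rules, rather than obtained by training, is illustrated by the following sketch (in the spirit of known rule-to-network compilation schemes such as KBANN; the encoding is purely illustrative, not the proposal's own construction):

    # Encode the propositional rule  c <- a AND b  as a single threshold unit:
    # each antecedent contributes weight 1, and the threshold equals the
    # number of antecedents, so the unit fires only if all of them are active.
    def threshold_unit(inputs, weights, threshold):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    weights, threshold = [1.0, 1.0], 2.0   # reading: "both antecedents required"

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", threshold_unit([a, b], weights, threshold))
    # fires (1) only for a = b = 1, i.e. the unit realizes the rule body

Here the weight configuration has a direct logical reading; for a generically trained network, no such reading is available.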
The proof construction methods in computational logic, by contrast, gain this strict expressive power at the price of losing a usable topological structure. The similarity of programs or proofs is hardly expressible, which makes it impossible to use arguments and techniques based on analogy or similarity of programs or proofs. It is therefore hard to approximate the behaviour of a (partial) object, and optimization techniques like those in the continuous case cannot be applied.
The particular approach towards comparing and integrating both frameworks is to be based on a functional analytic model of algorithms. Operator algebras, in particular well-behaved ones such as C*-algebras, provide a general framework for investigations concerning computational processes -- both from a logical point of view and from the perspective of evolutionary computing.
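For reference, we recall the standard definition (textbook material, stated here only to fix the notion): a C*-algebra is a complex Banach algebra $A$ with an involution $a \mapsto a^*$ satisfying $(ab)^* = b^* a^*$ and $(\lambda a)^* = \bar{\lambda}\, a^*$, together with the C*-identity
\[
  \| a^* a \| = \| a \|^2 \qquad \text{for all } a \in A .
\]
The motivating example is the algebra $B(H)$ of all bounded linear operators on a Hilbert space $H$, and more generally its norm-closed *-subalgebras.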
Such algebras are not only of relevance in several mathematical disciplines dealing with dynamic processes, but also play an important role in logic and semantic models.
Girard's Geometry of Interaction represents and characterizes (linear) proofs within a C*-algebra setting [Girard, 1989, Girard, 1988, Girard, 1995]. Mulvey's quantales are an abstraction of the lattice of closed right ideals of a C*-algebra [Borceux et al., 1989, Abramsky and Vickers, 1993]. Mundici investigated (the Lindenbaum algebras, i.e. MV-algebras, of) multi-valued logics via the K-theory of C*-algebras [Mundici, 1986, Mundici, 1989]. In the Geometry of Implementation for the λ-calculus, optimization strategies are based on elements and features of a C*-algebra of paths [Malacaria and Regnier, 1991, Asperti et al., 1994, Danos and Regnier, 1995].
Our own recent results on a formal semantic model of neural networks demonstrate how C*-algebras can be used for specifying neural networks [Wiklicky, 1996b, Wiklicky, 1996a].
This operator-algebraic approach abstracts from the concrete representation of neural networks (e.g. by a weight matrix). In principle, a wide variety of network models -- including stochastic ones (related to simulated annealing) as well as recurrent ones -- can be handled and analysed in this framework.
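One way to picture this abstraction (an illustrative reading under simplifying assumptions -- the nonlinear activation is ignored and the state space is taken finite-dimensional; it is not the construction of the cited papers): a weight matrix $(w_{ij})$ determines a bounded linear operator on the state space $H$,
\[
  T_W : H \to H, \qquad (T_W x)_i = \sum_j w_{ij}\, x_j ,
\]
and the operator-algebraic semantics then works with the C*-algebra generated by such update operators rather than with the matrix entries themselves. Properties formulated at the level of the algebra are independent of any particular matrix representation, which is what allows stochastic and recurrent variants to be treated within one framework.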