Many AI tasks can be formulated as constraint-satisfaction problems (CSPs), which involve the assignment of values to variables subject to a set of constraints. Constraint specification is a natural way to express declarative knowledge, since it makes local relationships among entities explicit. Solving a CSP allows the system designer to generate explicit interpretations of knowledge expressed in implicit declarative form.
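To make the formulation concrete, here is a minimal generate-and-test CSP solver applied to a tiny map-colouring instance. This example is not taken from the paper under review; the instance and all identifiers (`solve_csp`, the regions `A`, `B`, `C`) are illustrative choices.

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Brute-force CSP solver: return the first assignment of domain
    values to variables that satisfies every constraint, or None."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

# Tiny map-colouring instance: adjacent regions must get different colours.
variables = ["A", "B", "C"]
domains = {v: ["red", "green"] for v in variables}
constraints = [
    lambda a: a["A"] != a["B"],  # A and B are adjacent
    lambda a: a["B"] != a["C"],  # B and C are adjacent
]
solution = solve_csp(variables, domains, constraints)
# e.g. {"A": "red", "B": "green", "C": "red"}
```

Exhaustive enumeration is exponential in the number of variables, which is precisely why the relaxation (consistency-enforcing) techniques discussed in the paper matter: they prune locally inconsistent values before or instead of search.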
Networks of constraints are described here as labeled graphs whose nodes represent variables and whose hyperarcs correspond to constraints imposed on the adjacent variables. Some preliminary definitions and results concerning networks of constraints are given in the first two sections of the paper. Next, within a general framework for describing relaxation algorithms (RAs), the authors present a general relaxation algorithm and point out some basic properties of RAs.
Since a perfect RA is linear in #R, where R is the finite set of relaxation rules, special attention is paid to perfect sets of rules as a basis for obtaining perfect RAs. A more efficient version of the general RA, called “algorithm ER,” whose strategy is generated dynamically during execution, is presented in the fourth section of the paper. Next, a method for deriving a perfect strategy for a given network is given. The authors point out that an implementation based on metaprogramming techniques has already been developed in Prolog.
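The general flavour of a relaxation algorithm can be sketched as repeated application of local rules until a fixed point is reached. The sketch below uses arc revision over binary constraints (in the spirit of classical arc-consistency procedures, not the paper's specific perfect RA or algorithm ER); the function name `relax` and the sample "X < Y" network are assumptions for illustration only.

```python
def relax(domains, constraints):
    """Apply relaxation rules until a fixed point: for each directed
    binary constraint (x, y), delete values of x with no supporting
    value in the domain of y."""
    changed = True
    while changed:
        changed = False
        for (x, y), allowed in constraints.items():
            supported = {vx for vx in domains[x]
                         if any((vx, vy) in allowed for vy in domains[y])}
            if supported != domains[x]:
                domains[x] = supported
                changed = True
    return domains

# Sample network: two variables with the constraint X < Y.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
constraints = {("X", "Y"): lt,                          # X < Y
               ("Y", "X"): {(b, a) for (a, b) in lt}}   # reverse arc
relax(domains, constraints)
# X loses 3 (no larger Y value), Y loses 1 (no smaller X value)
```

The result is a more explicit but equivalent network, which is exactly the role the review ascribes to RAs; a *perfect* RA, in the paper's terminology, is one whose rule set and strategy reach this fixed point in a single linear pass over R.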
Perfect relaxation algorithms can not only return a more explicit network but also solve a given network, and their implementations can be quite efficient. Hence, this work is of particular interest to AI researchers.