Computing Reviews
SOAR: an architecture for general intelligence
Laird J., Newell A., Rosenbloom P. Artificial Intelligence 33(1): 1-64, 1987. Type: Article
Date Reviewed: Aug 1 1988

This paper constitutes a milestone in the progress of cognitive science because of its intentions rather than its results, even though those results are remarkable. The paper is by one of the founding fathers of cognitive science and of artificial intelligence, Allen Newell, whose work with Herbert Simon on the “General Problem Solver” led to one of the fundamental works in both fields, Human Problem Solving [1]. This paper describes a program, SOAR, which was written by Newell and his graduate students at Carnegie Mellon University over the last five years. SOAR is an implementation of several varieties of machine learning techniques that grew out of Newell’s past work. It is implemented in a variant of the OPS5 programming language.

As he stated in his recent William James lectures at Harvard University and restates here, Newell intends to outline an architecture for general intelligence that will demonstrate in its implementation the ability to perform “the full range of tasks, from highly routine to extremely difficult open-ended problems,” to employ “the full range” of problem-solving methods and representations in these tasks, and to learn about “all aspects of the tasks and its performance on them.” Newell wisely refrains from linking his model back to cognitive theory, since the very term general intelligence tossed into a roomful of psychologists has the effect of a verbal grenade. Although Newell has not pulled the pin on the explosive notion of modeling all human learning capacity in a computer program, he is still left with the mammoth task of elucidating the correspondences between what the SOAR program does and what he claims for it.

The program has an extensive and remarkable performance record, having coped with small routine tasks like root finding and sequence extrapolation and demonstrated several learning techniques through major tasks such as emulating the R1 configurer, a large and complex AI program. The list of “major aspects still missing” includes some very important aspects of human learning capability, namely, deliberate planning, automatic task acquisition, creation of representations, and extension to additional types of learning. It was a relief to find that these abilities were not claimed for SOAR, since their absence makes the claims made for the system much more believable.

However, one has to swallow hard when the components of the system are laid out. All problems are represented as goals within a problem space (defined as “a space with a set of operators that apply to a current state to yield a new state”), and the path through this space is always a heuristic search, guided by a subgoaling procedure whereby each successive step toward the goal or solution is made as simple as possible until it can be achieved by a single operation. The resulting strings of operations representing passage through the problem space are “chunked” into production rules: “SOAR learns continuously by automatically and permanently caching the results of its subgoals as productions.” This recalls the work of that other great CMU cognitive scientist, J. R. Anderson, who is duly cited. This methodology is related to past work in OPS5 by Newell’s students and others.
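The subgoaling-and-chunking loop described above can be caricatured in a few lines of Python. This is a toy sketch under invented names (`solve`, `chunks`, the integer problem space), not SOAR's actual production system: it searches a space of operators, recursing into a subgoal whenever the goal is not directly reachable, and caches each solved subgoal's operator sequence as a production keyed on the state, so a later run fires the learned production instead of searching.

```python
# Toy sketch of SOAR-style subgoaling and chunking (illustrative only):
# depth-first search through a problem space of operators, caching
# ("chunking") the operator sequence that solved each subgoal as a
# production keyed on the state it was solved from.

def solve(state, goal, operators, chunks, depth=0):
    """Return an operator-name sequence taking `state` to `goal`, or None."""
    if state == goal:
        return []
    if state in chunks:                  # a learned production fires
        return chunks[state]
    if depth > 10:                       # crude bound on subgoal nesting
        return None
    for name, op in operators.items():
        successor = op(state)
        # Subgoal: reach `goal` from the successor state.
        rest = solve(successor, goal, operators, chunks, depth + 1)
        if rest is not None:
            path = [name] + rest
            chunks[state] = path         # chunk: cache the subgoal's result
            return path
    return None

# Toy problem space over integers: reach 5 from 0.
ops = {"inc": lambda s: s + 1, "double": lambda s: 2 * s}
chunks = {}
first = solve(0, 5, ops, chunks)   # search builds chunks as it unwinds
second = solve(0, 5, ops, chunks)  # now the cached production for state 0 fires
```

The second call returns immediately from the chunk cache, which is the sense in which SOAR "learns continuously" from its own subgoal processing, though the real system encodes chunks as OPS5-style productions rather than a dictionary.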

The power of the SOAR system lies in the simplicity of its fundamental mode of operation and the subtlety of its metaknowledge about problem solving. The task generation, realization, and search control functions form the architecture referred to in the title. This is what allows SOAR to recover from failures, to diagnose impasses, and to achieve perhaps more convincing results than any other learning program. It combines many different representation techniques and search methods, which can be substituted until a useful one is found. There is a notable lack of any fixed representation scheme or even any overt method of abstraction, such as categorization of problem types. Instead, Newell relies on the simplicity of the fundamental representation combined with the chunking mechanism to subsume such inflexible tools of thought under a pragmatic generalization process. The chunking mechanism “summarizes the processing in a subgoal” by encoding as a rule the process by which a subgoal was terminated. When the goal is eventually reached, it has swallowed its own tail of subgoals, which can be disgorged when that goal is set on another occasion. This encoding of the experience of reaching a problem solution provides an elegant medium for such techniques as generalization and analogy. However, elegance and simplicity are bought at a price: one has to ask how much massaging of the data had to be done to fit them to the representation scheme and, more importantly, how easy it was to interpret the program’s results. This issue is not touched on here.

This paper provides food for thought as disproportionate to its size as the famous loaves and fishes. It is sure to remain a classic for some time to come. Even if one is not fully convinced of all of Newell’s claims for the system, it is undoubtedly a unique achievement and one that brings together several ideas and results of previous research in a way that well serves Newell’s goals of a model for “general intelligent behavior” as he describes it. It might almost convince a skeptic that there could be, one day, a unique foundation for psychological learning theory.

Reviewer: V. S. Begg. Review #: CR112116
1) Newell, A., and Simon, H. Human problem solving. Prentice-Hall, Englewood Cliffs, NJ, 1972.
Problem Solving, Control Methods, And Search (I.2.8)
Applications And Expert Systems (I.2.1)
Learning (I.2.6)
Other reviews under "Problem Solving, Control Methods, And Search":

The use of a commercial microcomputer database management system as the basis for bibliographic information retrieval
Armstrong C. Journal of Information Science 8(5): 197-201, 1984. Type: Article
Date reviewed: Jun 1 1985

Naive algorithm design techniques--a case study
Kant E., Newell A. (ed) Progress in artificial intelligence, Orsay, France, 1985. Type: Proceedings
Date reviewed: Mar 1 1986

Planning as search: a quantitative approach
Korf R. Artificial Intelligence 33(1): 65-68, 1987. Type: Article
Date reviewed: Aug 1 1989
