Computing Reviews
Mind over machine: the power of human intuition and expertise in the era of the computer
Dreyfus H. (ed), Dreyfus S., Athanasiou T., The Free Press, New York, NY, 1986. Type: Book (9789780029080603)
Date Reviewed: Sep 1 1987

The book under review represents an interesting and highly challenging account of the limitations of computer capabilities in the areas of cognition and artificial reasoning, written by two well-known critics of the artificial intelligence field. Hubert Dreyfus is a philosopher, and Stuart Dreyfus is an expert in computer science and operations research. Hubert Dreyfus wrote one of the first, much-criticized reviews of artificial intelligence efforts at the Rand Corporation [1], and this early paper was later expanded into a well-known book [2]. In the current volume, the authors join a number of other eminent critics of recent work in artificial intelligence in arguing that in many areas of human endeavor computers will never perform at the level of human experts (see, e.g., [3]). The current book will no doubt be attacked and rejected by the artificial intelligence community. It is, however, addressed to a much wider audience and should be read by anyone concerned with the role of computers in the modern world.

The authors first consider the long-standing differences between the rationalist tradition of philosophy, which maintains that the world is controlled by objective principles and rules, and the empiricist tradition, which denies the existence of objective criteria and argues that human beings gain expertise through perception, intuition, and experience, rather than by following rules based on accepted facts. The authors reject the rationalist views of Plato, Descartes, and Kant, and side with Pascal, Hume, and others in claiming that human perception cannot be explained by the application of precisely formulated rules.

To clarify the limits of rule-based thinking, the authors introduce five levels of skill acquisition, distinguishing the behavior patterns of novices, advanced beginners, competent individuals, proficient operators, and experts. This five-step model serves as the main line of argument in the remainder of the book:

  1. The novice uses context-free rules and components to perform a task. A typical context-free rule applicable to motor vehicle operation might state that “when the car reaches a speed of 20 miles per hour, then it must be shifted into third gear.”
  2. Advanced beginners start using situational components (such as the sound of the car engine in deciding when to shift) in addition to the context-free considerations.
  3. The competent individual has a goal in mind in performing a task and follows a chosen perspective using context-free, as well as situational, components.
  4. Proficiency is characterized by a rapid, fluid, involved kind of behavior, in which the detached reasoning often used by beginners for problem solving gives way to holistic similarity recognition methods distinguishing relevant from extraneous facts.
  5. Finally, experts use completely intuitive, instead of analytical, decision-making methods: “When things proceed normally, experts don’t solve problems and don’t make decisions; they do what normally works” (pp. 30–31).
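The novice's context-free rule in step 1 lends itself to a one-line program, which makes plain how little of the situation such a rule consults. The sketch below is a hypothetical illustration (the function and parameter names are invented, not from the book):

```python
# A hypothetical illustration of a stage-1 "context-free rule": it fires
# on a single measurable feature (speed), ignoring situational context
# such as engine sound, which the advanced beginner of stage 2 would use.

def novice_gear_rule(speed_mph: float, current_gear: int) -> int:
    """Return the gear a strictly rule-following novice would select."""
    if speed_mph >= 20 and current_gear < 3:
        return 3  # "at 20 miles per hour, shift into third gear"
    return current_gear

print(novice_gear_rule(22, 2))  # → 3: the rule fires regardless of context
```

The authors' point is precisely that expert driving cannot be decomposed into a longer list of such conditionals.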

The authors contend that except in certain artificial, structured problem areas, where computers sometimes outperform human beings (such as the solution of puzzles and of certain mathematical problems), computers cannot reach the level of accomplishment of the proficient or expert human individual. That is, although computers can be programmed to use hundreds of rules relating hundreds of features, the rule-based approach will normally get stuck at the advanced beginner level because proficient individuals do not follow rule-based strategies in attaining their goal. On the other hand, because initial successes in relatively structured environments and artificially constrained “microworlds” have been easy to come by, the artificial intelligence experts have been caught time and again in what the authors call the “fallacy of the successful first step,” characterized by a relatively easy initial accomplishment, followed by great optimism, and followed in turn by failure when confronted with more intuitive forms of understanding. Typical of the optimism that may be produced by a successful initial step is the following 30-year-old quotation: “in a visible future . . . the range of problems (computers) can handle will be coextensive with the range to which the human mind has been applied” [4].

In assessing the accomplishments of the computer in artificially constrained microworlds, the authors claim that the subworld approach represents a dead end because, contrary to the initial hope, different subworlds cannot be assembled separately to form a larger world:

. . . different subworlds are not related like isolable physical systems to larger systems they compose, but are rather, local elaborations of a whole, which they presuppose . . . and there is no way they can be combined and extended to arrive at the world of everyday life. (p. 76)

The problems that face us in dealing with the real world are considered by the authors under three main headings: the common sense understanding problem, the problem of temporal change or of changing relevance, and the knowledge engineering problem. Each of these problems appears to be unsolvable using currently available methods of artificial intelligence. Concerning first the common sense understanding problem, the authors contend that computers cannot be made to store and access the mass of beliefs of human beings about their world. Too many available facts exist, and the rules that relate these facts are too complex to be clearly stated:

. . . the sort of rules human beings are able to articulate always contain “ceteris paribus” (everything else being equal) conditions. What “everything else” and “equal” mean in any specific situation, however, can never be fully spelled out, since one can only give further rules with further “ceteris paribus” conditions. Moreover, there is not just one exception to each rule but several, and all the rules for dealing with the exceptions are also “ceteris paribus” rules. (p. 80)

To use an example from this reviewer’s area of work (text analysis and text storage and retrieval), it does seem inconceivable that the contents and world understanding inherent in the text of even a single document (let alone of a complete document collection) could be incorporated into a structured knowledge base of the type currently usable in practical artificial intelligence programs. Hence it appears highly unlikely that knowledge-based processing approaches can become useful for unrestricted text retrieval in the foreseeable future.

The authors point out that the common sense understanding problem is made harder by the fact that in most situations we are not concerned with static, non-situated, discrete pieces of knowledge, but rather with temporal, situated, and continuously changing know-how. They foresee that the task of representing such continuously varying situations (known as the “frame problem”) becomes so hopelessly complicated as to be unmanageable:

. . . the shifting relevance of aspects of ordinary changing situations would certainly be grounds for despair in AI . . . were it not for the fact that human beings get along fine in a changing world . . . and so these (AI) researchers assume human beings must somehow be able to update their formal representations. . . . We however hold that it can more plausibly be read as showing that human, skillful know-how is not represented as a mass of facts organized in frames and scripts specifying in a stepwise sequence which parts must be taken account of as the state of affairs evolves. (pp. 87–88)

The currently popular knowledge engineering, or expert system, approach appears attractive because the difficulties of knowledge base construction and updating can apparently be bypassed by asking experts to collect large numbers of facts and to suggest rules of thumb that operate in given restricted domains. In information retrieval, for example, proposals have been made to ask for expert help in constructing the knowledge base and in controlling the search effort [5]. However, the authors argue that the enthusiasm about the expert system approach is misplaced:

. . . given our five stages of skill acquisition, we can understand why knowledge engineers . . . have had such trouble in getting the expert to articulate the rules he is using. The expert is simply not following any rules. He is . . . (instead) recognizing thousands of special cases. . . . That in turn explains why expert systems are never as good as experts. (p. 108)

Indeed, the authors argue that forcing an expert to enunciate rules will cause him to regress to the level of a beginner and to state a set of rules that he himself ceased to use long ago. The authors' overall conclusion in the expert system area is that expert systems will be superior only for highly combinatorial problems where even experienced specialists have failed to develop complete understanding:

. . . the common sense problem and the expertise problem are just different manifestations of the same problem, that computers cannot capture skills by using rules and features. (p. 119)

Following the general discussion of intelligent systems, the authors turn to two special applications, namely, the use of computers in education and in management science. In the former area, a distinction is made between the use of the computer as a tool (for example, to display information to the learner), as a replacement for a human teacher, and finally as a tutee, where the human operator learns by teaching new facts to the machine. The authors are happy with the role of the computer as a tool, but, using the skill acquisition model, they reject the use of machines as teachers and tutees:

. . . in attempting to replace teachers by computers, one makes the assumption that the teacher’s understanding of both the subject being taught and of the profession of teaching consists in knowing facts and rules . . . but since understanding does not consist of facts and rules, the hope that the computer will eventually replace the teacher is fundamentally misguided. (p. 132)

The outlook is no better when the computer functions as tutee because programming a machine contributes to procedural thinking, and thinking like a computer “will retard passage to the high levels of proficiency and expertise” (p. 147).

The argument used for computers in education also carries over to the management science area because computer models of business operations and decisions are necessarily based on facts and rules, whereas many important business decisions involve a deeper type of understanding:

. . . when the future can accurately be predicted using a model . . . decision support systems have much to offer. They offer nothing but regressive thinking . . . when the decomposed analytic model . . . represents an understanding inferior to an expert’s holistic involved intuition. (p. 190)

In conclusion, we may ask where one is left upon reaching the end of this book. Without attempting to settle the philosophical questions that arise between rationalists and empiricists, it seems to this reviewer that the authors make a very good case for claiming that real experts do not follow conventional “if-then” rules. The same is true for the contention that a complete rule set cannot be generated in unstructured environments and that rule-based problem solving will therefore find only limited application. In any case, after many years of unreasonably optimistic predictions in the knowledge processing area, the burden of proof rests with the artificial intelligence experts rather than with the authors.

There are, however, open questions that are not treated in this book and that deserve to be raised. The most important relates to the assumption that if a conventional rule-based approach does not lead to a solution, then any automatic process must be inadequate. In fact, it is conceivable that a computer solution might be obtained by methods different from those used by human experts; this possibility is never seriously considered in the current text. One approach consists in replacing the missing rule-base by a very large store of snapshot pictures representing records or conditions that may arise during problem solution. By using fast, global search methods covering the whole database and appropriate connections between memory elements, it may be possible to extract from the computer store a few records exhibiting substantial similarities with particular query conditions. These relevant records may then in turn provide information for query refinement and for additional directed search operations. It has been suggested that parallel processing machines lend themselves to this type of “memory-based reasoning” [6]. Global matching operations on large storage spaces providing ranked output of records in decreasing order of presumed query-record similarity have actually been used in information retrieval for many years [7]. More generally, exact matches with precisely formulated condition statements might be replaced by approximate comparisons with patterns of tentative rules to simulate the qualitative reasoning that may be characteristic of human thought processes. It is too early to tell whether interesting alternatives to static rule-based systems can be devised that lead to useful problem solutions, but the possibility must at least be considered.
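The “memory-based reasoning” alternative described above can be made concrete in a few lines of code. The following is a minimal, hypothetical sketch (invented names and toy data): stored records are ranked by cosine similarity to a query condition, and the closest matches are returned, with no rules fired at any point:

```python
# A minimal sketch of memory-based reasoning: rank a store of records by
# global similarity to a query instead of applying if-then rules.
# All record labels and feature vectors here are illustrative toy data.

from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_matches(query, store, k=2):
    """Return labels of the k stored records most similar to the query."""
    ranked = sorted(store, key=lambda rec: cosine(query, rec["vector"]),
                    reverse=True)
    return [rec["label"] for rec in ranked[:k]]

store = [
    {"label": "case A", "vector": [1.0, 0.0, 1.0]},
    {"label": "case B", "vector": [0.0, 1.0, 0.0]},
    {"label": "case C", "vector": [1.0, 1.0, 1.0]},
]
print(best_matches([1.0, 0.0, 0.9], store))  # → ['case A', 'case C']
```

On a parallel machine of the kind suggested in [6], the similarity computations over the whole store would proceed simultaneously rather than in this sequential loop.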

Another avenue of progress might consist in using rule modification or rule learning approaches. The authors say that useful rules are too complex to formulate and cannot be generated by asking human experts. It is, however, conceivable that a program might begin with simple rules that are inadequate by themselves but can be modified in the course of the operations as experience is gained in particular problem areas. Some feedback information might become available that gives indications about the relevance or importance of certain rules, and abstraction or generalization steps might lead to the construction of improved rules from simpler initial statements. In information retrieval, the well-known “relevance feedback” steps for query modification invariably lead to the retrieval of improved output [8].
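The query-modification step mentioned above can be sketched in the Rocchio style familiar from information retrieval [8]: the query vector is moved toward documents the user judged relevant and away from those judged non-relevant. The function name and the weight choices below are illustrative assumptions, not prescriptions from the book:

```python
# A hedged sketch of Rocchio-style relevance feedback: modify a query
# vector using user judgments of retrieved documents. The weights alpha,
# beta, and gamma are conventional illustrative values, not fixed by theory.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return a modified query vector incorporating relevance feedback."""
    dim = len(query)
    new_q = [alpha * q for q in query]
    for doc in relevant:          # move toward relevant documents
        for i in range(dim):
            new_q[i] += beta * doc[i] / len(relevant)
    for doc in nonrelevant:       # move away from non-relevant documents
        for i in range(dim):
            new_q[i] -= gamma * doc[i] / len(nonrelevant)
    return [max(0.0, w) for w in new_q]  # clip negative term weights

q = [1.0, 0.0, 0.5]
print(rocchio(q, relevant=[[1.0, 1.0, 0.0]], nonrelevant=[[0.0, 0.0, 1.0]]))
```

The analogy to rule learning is that the initial query, like an initial crude rule, is inadequate by itself but improves as feedback accumulates.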

Two additional, somewhat restrictive assumptions relate to (1) the authors’ five-step skill acquisition model, and (2) the difference between the so-called structured problem areas whose solution depends on a small number of facts and limited relationships, and the large mass of open-ended problems that require common sense knowledge and human expertise for solution. The proposed skill acquisition model may well be useful for the description of rule-based behavior, but there may be other kinds of learnable human behavior not based on rule formulations. The question of what constitutes a structured problem area accessible to computer solution and what lies outside is also not adequately treated. The reader is left with the uncomfortable impression of a moving definition by which anything solvable by computer is ipso facto structured, and everything else is termed unstructured and hence inaccessible.

Even though the artificial intelligence question is certainly not settled in this volume, the authors must be congratulated for not being intimidated by the current euphoria regarding knowledge-based processing and expert system design, and for submitting a provocative and interesting text. One hopes that the book will gain a wide readership; it will provide a challenge for everyone, and it may even convince some readers of its merits.

Reviewer: Gerard Salton. Review #: CR111454
1) Dreyfus, H. L. Alchemy and artificial intelligence, Rand Corporation Paper P-3244, The Rand Corporation, Santa Monica, CA, Dec. 1965.
2) Dreyfus, H. L. What computers can’t do: a critique of artificial reason, Harper & Row, New York, 1972. See CR 13, 7 (July 1972), Rev. 23,463.
3) Winograd, T.; and Flores, F. Understanding computers and cognition: a new foundation for design, Ablex, Norwood, NJ, 1986.
4) Simon, H. A.; and Newell, A. Heuristic problem solving: the next advance in operations research, Oper. Res. 6 (1958), 6.
5) Croft, W. B. User-specified domain knowledge for document retrieval, in Proc. of the 1986 ACM conference on research and development in information retrieval (Pisa, Italy, Sept. 8–10, 1986), F. Rabitti (Ed.), ACM, New York, 1986, 201–206.
6) Waltz, D. L. Applications of the connection machine, Computer 20 (1987), 85–97.
7) Salton, G.; Wong, A.; and Yang, C. S. A vector space model for automatic indexing, Commun. ACM 18 (1975), 613–620. See CR 17, 3 (March 1976), Rev. 29,658.
8) Salton, G.; and McGill, M. J. Introduction to modern information retrieval, McGraw-Hill, New York, 1983.
General (K.4.0), Applications And Expert Systems (I.2.1), Deduction And Theorem Proving (I.2.3), General (I.2.0), General (I.6.0), General (K.3.0)
Other reviews under "General":

The stuff of bits
Dourish P. (ed), MIT Press, Cambridge, MA, 2017. Type: Book (9780262036207)
Sep 26 2017

What algorithms want: imagination in the age of computing
Finn E., MIT Press, Cambridge, MA, 2017. Type: Book (9780262035927)
Aug 9 2017

Smart service portfolios: Do the cities follow standards?
Anthopoulos L., Janssen M., Weerakkody V. WWW 2016 Companion (Proceedings of the 25th International World Wide Web Conference Companion, Montréal, Québec, Canada, Apr 11-15, 2016) 357-362, 2016. Type: Proceedings
Jul 19 2017
