The author sees AI applications moving from the laboratory into the real world (and perhaps into space as well), producing a demand for higher computational throughput at lower cost. The article discusses hardware support for making AI systems more efficient.
Since most AI software is written in LISP, some people regard hardware that makes LISP faster as the key problem. The author analyzes LISP's requirements with respect to instruction set design and identifies several important instructions operating on tagged values that would speed up conventional microprocessor implementations of LISP by a factor of two. He sees greater potential, however, in well-designed compilable languages that admit a high degree of optimization.
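To illustrate the overhead those tagged-value instructions would remove, the following sketch (hypothetical, not taken from the reviewed article) models a tagged value as a (tag, payload) pair. On a conventional processor, every primitive operation must first branch on the operands' tags in software; hardware tag checking performs these tests in parallel with the operation itself.

```python
# Hypothetical sketch: tagged values and software tag dispatch.
# Tag names and helper functions are illustrative assumptions.

FIXNUM = "fixnum"
CONS = "cons"

def tagged(tag, payload):
    """Represent a tagged value as a (tag, payload) pair."""
    return (tag, payload)

def add(a, b):
    # The explicit tag tests below are the per-operation overhead a
    # conventional processor pays; tagged-architecture instructions
    # would perform them in parallel with the addition.
    if a[0] == FIXNUM and b[0] == FIXNUM:
        return tagged(FIXNUM, a[1] + b[1])
    raise TypeError("add expects two fixnums")

print(add(tagged(FIXNUM, 2), tagged(FIXNUM, 3)))  # ('fixnum', 5)
```

In hardware, the two tag comparisons and the integer addition can proceed simultaneously, with a trap taken only on a tag mismatch; in software, they cost extra instructions on every arithmetic operation.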
Deering claims that hardware support for several common AI operations would be profitable, including "unification as an instance of the matching problem, associative memory to support fast search, and signal processing hardware to speed up the conversion of raw sensory data to symbolic representation." In addition, AI (like many other areas) will benefit from packet-switching hardware for fast communication in multiprocessors. The author argues strongly against parallel architectures of any kind that communicate through a common memory.
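For readers unfamiliar with unification, the matching problem named above can be sketched as follows. This is an illustrative Python implementation over simple terms, not code from the reviewed article: variables are strings beginning with "?", and compound terms are tuples.

```python
# Hypothetical sketch of unification, the matching operation that
# Deering suggests supporting in hardware. Term representation and
# function names are illustrative assumptions.

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith("?"):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("f", "?x", "b"), ("f", "a", "?y")))
# {'?x': 'a', '?y': 'b'}
```

Even this small sketch shows why hardware support is attractive: unification is dominated by tag tests, pointer chasing, and structure traversal, exactly the fine-grained operations a specialized unit can overlap.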