Computing Reviews
Cognitive memory
Widrow B., Aragon J. Neural Networks 41: 3-14, 2013. Type: Article
Date Reviewed: Feb 25 2014

If you have basic knowledge of neural networks and are interested in building a new kind of computer memory that uses pattern recognition to return not only the recognized item but all related items, then this paper is for you.

The authors propose a new kind of memory architecture modeled after human memory (or at least how we think human memory works!). Computer memory can be viewed as an array, where every element of that array has an address. What you store in that address remains there until you read it later. However, this basic perspective lacks information about the relationships among the stored data. These relationships are what differentiate human memory from computer memory. The brain receives a continuous stream of patterns. When an excitatory pattern is received, the brain returns all the related patterns previously stored for that prompt.

The authors propose cognitive memory, a new computer memory architecture that stores not only the data, but the relationships among the data elements as well. To accomplish this, inputs concerning a single object or subject are stored together in file folders. These inputs can be in the form of sound, video, text, images, or any other type of data. As long as they are related to a single object or subject, they are stored in the same file folder.

To understand this proposed architecture, we need to understand two things: how this memory stores data and how it retrieves it. According to the paper, “a folder stores all of the sensory input patterns taken over a finite time interval [ ... and] the patterns could be stored redundantly in more than one folder.”
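To make the folder idea concrete, here is a minimal sketch (my own, not the authors') of how a folder might be represented in code; the class name, fields, and example data are purely illustrative:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Folder:
    """Hypothetical folder: all sensory patterns about one object or subject."""
    subject: str
    patterns: List[np.ndarray] = field(default_factory=list)

    def add(self, pattern) -> None:
        # Patterns may be images, audio frames, text encodings, etc.,
        # all flattened here to vectors for simplicity.
        self.patterns.append(np.asarray(pattern, dtype=float).ravel())

# The same pattern may be stored redundantly in more than one folder.
cat_photo = np.random.rand(15, 15)        # stand-in for one sensory input
pets = Folder("my pets"); pets.add(cat_photo)
cats = Folder("cats");    cats.add(cat_photo)
```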

In the proposed architecture, several file folders form a memory segment. However, the folders within a particular segment do not necessarily have any relationship to each other beyond belonging to that segment. This allows different segments to operate both in parallel and independently, increasing retrieval speed. Each segment has its own neural network, which is trained on the patterns in that segment's folders. The patterns serve as both the input to the neural network and its expected output; such a network is called an autoassociative neural network.
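As a minimal sketch of the autoassociative idea, assume a single-hidden-layer network trained by plain gradient descent so that its output reproduces its input; the authors' actual network and training procedure may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoassociative(patterns, hidden=32, lr=0.01, epochs=500):
    """Train a one-hidden-layer network whose target output is its own input."""
    X = np.asarray(patterns, dtype=float)          # shape: (n_patterns, dim)
    dim = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (dim, hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, dim))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                        # hidden layer
        Y = H @ W2                                 # reconstruction of the input
        err = (Y - X) / len(X)                     # autoassociation: output should equal input
        W2 -= lr * H.T @ err
        W1 -= lr * X.T @ ((err @ W2.T) * (1.0 - H ** 2))
    return W1, W2

def reconstruct(x, W1, W2):
    return np.tanh(x @ W1) @ W2

# Train one segment's network on the patterns stored in that segment's folders.
segment_patterns = np.random.rand(10, 225)         # e.g., ten flattened 15x15 patterns
W1, W2 = train_autoassociative(segment_patterns)
```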

When a prompt pattern is presented, data retrieval is initiated and completed in two steps. The prompt pattern is presented to all of the neural networks of all the segments. Each neural network generates an output. If the difference between the input pattern and the output is within a predetermined threshold, there is a primary hit in the corresponding segment. The authors don’t clearly discuss the possibility of more than one segment having a hit. We will assume that either one segment has a hit, or no segments have any hits. If there is a primary hit, we move to the second step. The hit “pattern is compared, by exhaustive search, with all the patterns in all the folders of the corresponding segment.” When a match is found, we have a secondary hit. Since there was a primary hit, we are guaranteed to have this secondary hit. In this case, the entire content “of the match folder [is] delivered as the memory output in response to the prompt.”
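Given such trained networks, the two-step retrieval could look roughly like the following; the segment dictionary layout, the mean-squared-error distance, and the threshold value are placeholders of mine, not the paper's exact scheme:

```python
import numpy as np

def retrieve(prompt, segments, threshold=0.05):
    """Step 1: present the prompt to every segment's autoassociative network;
    a reconstruction close enough to the prompt is a primary hit.
    Step 2: exhaustively search that segment's folders for the best-matching
    pattern (the secondary hit) and return the whole matching folder."""
    prompt = np.asarray(prompt, dtype=float).ravel()
    for seg in segments:                       # seg = {"net": callable, "folders": [...]}
        output = seg["net"](prompt)
        if np.mean((output - prompt) ** 2) < threshold:        # primary hit
            best_folder, best_dist = None, float("inf")
            for folder in seg["folders"]:                       # exhaustive search
                for pattern in folder["patterns"]:
                    d = np.mean((np.asarray(pattern, dtype=float) - prompt) ** 2)
                    if d < best_dist:
                        best_folder, best_dist = folder, d
            return best_folder                                  # entire folder is the output
    return None                                                 # no primary hit in any segment
```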

This architecture can be used in many domains that involve pattern recognition. The authors present a detailed application of cognitive memory in aircraft navigation and another involving face recognition.

In the first application, aircraft navigation, the authors obtained an aerial photo of an area of the earth’s surface from the Internet. They divided this photo into 15x15-pixel square images. Each of these square images is stored in a folder along with its X (north-south) and Y (east-west) coordinates. All of these images, along with their rotated versions, are used to train the autoassociative memory of their corresponding segments. As the authors state in the paper, “a telescope with an attached video camera mounted on the fuselage of the airplane” supplies a picture. This picture is the prompt pattern used to retrieve the coordinates. The authors modified this scheme to develop an aircraft identification system. They obtained a 25x25-pixel image of each airplane type and translated each picture to generate several versions. Each group of images, corresponding to an airplane type, is placed in the same folder and used to train the corresponding neural network; there is still one neural network for the whole segment. The authors then obtained, from the Internet, a satellite picture of a US military base containing airplanes, and divided it into 25x25-pixel images used to identify the airplane types.
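The data preparation step for the navigation application could be sketched as follows; the 15x15 patch size comes from the paper, while the helper name, the use of 90-degree rotations, and the dictionary layout are assumptions of mine:

```python
import numpy as np

def make_navigation_folders(aerial_photo, patch=15):
    """Cut an aerial photo into patch x patch squares and pair each square
    with its grid coordinates; rotated copies go into the same folder."""
    folders = []
    h, w = aerial_photo.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = aerial_photo[y:y + patch, x:x + patch]
            versions = [np.rot90(tile, k).ravel() for k in range(4)]  # illustrative rotations
            folders.append({"coords": (x, y), "patterns": versions})
    return folders

photo = np.random.rand(150, 150)        # stand-in for the downloaded aerial image
folders = make_navigation_folders(photo)
```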

Another application that the authors tried is face recognition. This is done in two steps: detecting that there is a face, and then recognizing the person’s identity. For the first step, they obtained a 20x20-pixel image of a person’s face. It is difficult to recognize a person’s identity at such a low resolution, but it is good enough to recognize that there is a face. The picture is rotated to generate several versions, which are saved in a folder and used to train the neural network associated with the segment containing that folder. If the system recognizes that there is a face in a picture that contains several objects, the second step starts. This second step uses a 50x50-pixel picture of a person (again with several versions obtained by rotation) to train another neural network, which later performs the recognition.
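Conceptually, the two stages form a simple gate; in this rough sketch, detect_face and identify_person stand for the 20x20 and 50x50 cognitive memory lookups, and are hypothetical placeholders rather than anything in the paper:

```python
def recognize(patch_20x20, patch_50x50, detect_face, identify_person):
    """Stage 1: cheap, low-resolution check that a face is present at all.
    Stage 2: only if stage 1 fires, identify the person at higher resolution."""
    if not detect_face(patch_20x20):        # 20x20 lookup: is there a face?
        return None
    return identify_person(patch_50x50)     # 50x50 lookup: whose face is it?
```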

Cognitive memory is an interesting idea. However, it needs considerable help from technology. It is obvious from the above description that this architecture requires a huge amount of storage. Solid-state drives (SSDs) are fast enough, but not yet large enough; traditional disks can be huge, yet are much slower. Parallelism is also needed to reduce the time required for both the autoassociative neural search and the exhaustive search.

Both multicore processors and graphics processing units (GPUs) are possible solutions. As the number of segments and folders increases, GPUs become more attractive because of their higher degree of parallelism. Also, the single-instruction, multiple-thread (SIMT) execution model of GPUs is very well suited to cognitive memory.
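As a toy illustration of the kind of parallelism involved (not the authors' implementation), the segment networks can be evaluated concurrently; on a GPU each evaluation would map to a kernel launch rather than a CPU thread:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def segment_error(args):
    """Reconstruction error of one segment's (toy, linear) autoassociative net on the prompt."""
    prompt, W = args
    return float(np.mean((prompt @ W - prompt) ** 2))

dim = 225                                     # e.g., a flattened 15x15 patch
prompt = np.random.rand(dim)
segments = [np.random.rand(dim, dim) * 0.01 for _ in range(8)]   # eight toy segment networks

# Evaluate all segments in parallel and pick the most promising one.
with ThreadPoolExecutor() as pool:
    errors = list(pool.map(segment_error, [(prompt, W) for W in segments]))
best = int(np.argmin(errors))                 # segment most likely to hold a primary hit
```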

The last section of this paper is interesting. The authors propose a very bold hypothesis: they argue that neural networks in the human brain are not used for storage, but only as a retrieval method. Their main argument is that neural networks in the brain undergo many changes (neurons die, new connections are made, others fade, and so on), making them unreliable for storage. They contend that these networks are continuously trained with patterns stored elsewhere in the brain (although the authors do not specify where). This is an imaginative hypothesis, but the authors do not back it with experimental evidence, nor do they discuss other hypotheses in the memory literature, such as the work of Eric Kandel and John Lisman.

Overall, I recommend this paper for its interesting applications and bold hypothesis.

Reviewer: Mohamed Zahran | Review #: CR142031 (1405-0380)
Categories: Cognitive Simulation (I.2.0), Neural Nets (I.5.1), Memory Structures (B.3)
Other reviews under "Cognitive Simulation":
Foundations of cognitive science. Posner M., MIT Press, Cambridge, MA, 1989. Type: Book (9780262161121). Date: Jul 1 1992
Language (vol. 1). Osherson D., Lasnik H., MIT Press, Cambridge, MA, 1990. Type: Book (9780262650335). Date: Jul 1 1992
Visual cognition and action (vol. 2). Osherson D., Kosslyn S., Hollerbach J., MIT Press, Cambridge, MA, 1990. Type: Book (9780262650342). Date: Jul 1 1992
