The bidirectional associative memory (BAM) model was developed by Bart Kosko with optical implementation in mind. It extends the Hopfield autoassociative memory from a one-layer unidirectional network to a two-layer bidirectional structure. The BAM is also similar to Stephen Grossberg's adaptive resonance theory.
The conventional BAM network has one input layer, one output layer, and two central layers fully connected to each other. The weights of these central layers store (correlate) associated pairs of vectors. A BAM network may use a Hebbian learning rule that defines an energy function in which the example associations correspond to local minima. A BAM with Hebbian correlation encoding is identical to a closed-loop feedback system composed of two matched filter banks [1]. When a noisy input X is presented to a trained BAM network, the BAM layers oscillate until a stable equilibrium state is reached, corresponding to the closest learned association (that is, the library vector pair {X_u, Y_u}), provided that X is close enough to the stored pattern X_u. The recall process can be visualized as skiing down the energy slope to a local minimum.
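The Hebbian correlation encoding and bidirectional recall described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the weight matrix is the sum of outer products of the bipolar training pairs, and recall alternates forward and backward threshold passes until the two layers stop changing (a stable equilibrium). The function names and the tie-breaking choice of mapping zero activations to +1 are assumptions made here for brevity.

```python
import numpy as np

def bipolar_sign(v):
    # Threshold to bipolar {-1, +1}; zero maps to +1 (a simplifying
    # assumption -- implementations often keep the previous state instead).
    return np.where(v >= 0, 1, -1)

def train_bam(pairs):
    # Hebbian correlation encoding: W = sum over u of outer(X_u, Y_u).
    return sum(np.outer(x, y) for x, y in pairs)

def recall(W, x, max_iters=100):
    # Bounce activations between the two layers until both stabilize,
    # i.e. until the network settles into a local energy minimum.
    y = bipolar_sign(W.T @ x)
    for _ in range(max_iters):
        x_new = bipolar_sign(W @ y)
        y_new = bipolar_sign(W.T @ x_new)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Store two (orthogonal) bipolar pairs, then recall from a noisy X.
pairs = [
    (np.array([1, 1, 1, 1, -1, -1]), np.array([1, -1, 1, -1])),
    (np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, -1, -1])),
]
W = train_bam(pairs)
noisy = np.array([-1, 1, 1, 1, -1, -1])  # first pattern with one bit flipped
x_rec, y_rec = recall(W, noisy)          # converges to the stored pair
```

With well-separated training pairs, the flipped bit is corrected and the network settles on the stored pair {X_1, Y_1}, mirroring the "skiing down the energy slope" picture in the text.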
A BAM with more than one pair of input/output patterns is called a generalized bidirectional associative memory, or BAMg. A BAMg may be used in pattern recognition applications where the appearance of two patterns (such as infrared images or radar signals) results in the retrieval of a third pattern (such as a television image). Kulkarni and Yazdanpanahi briefly describe several candidate BAMg architectures and have developed a simple software implementation of one of them. However, the example they provide is so trivial that the reader is likely to be unconvinced that a practical application can be developed from the approach. Numerous typographical, spelling, and grammatical errors detract from the paper's readability.