The cerebellar model articulation controller (CMAC) was proposed in the 1970s by Albus [1] as a flexible design for controllers of devices such as articulated robotic arms. The flexibility comes mainly from the learning capability of the controller. The term CMAC algorithm (or learning algorithm) refers to the procedure, implemented within the device, that takes a series of inputs and desired outputs and computes appropriate parameters or weights. In recent years, CMACs have found a variety of applications, from pattern recognition to time series analysis. A CMAC is usually described theoretically by a linear model that leads to an underdetermined system of linear equations presented one equation at a time.
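To make the linear model concrete, the following minimal sketch (not from the paper under review; the one-dimensional layout, names, and sizes are illustrative assumptions) shows how each CMAC input activates a small set of overlapping weight cells, so that a batch of training examples forms a short, wide, underdetermined system whose minimum-norm solution exists:

```python
import numpy as np

def cmac_features(x, n_weights=32, n_active=4):
    """Map a scalar input to a sparse binary feature vector.
    Each input activates n_active overlapping cells (a hypothetical
    1-D layout; real CMACs use multi-dimensional, often hashed tilings).
    """
    base = int(x) % n_weights
    a = np.zeros(n_weights)
    for j in range(n_active):
        a[(base + j) % n_weights] = 1.0
    return a

# Each training example contributes one equation a.w = d, so a handful
# of examples yields an underdetermined system A w = d (3 equations,
# 32 unknowns here).
A = np.vstack([cmac_features(x) for x in [3, 7, 12]])
d = np.array([1.0, 2.0, 0.5])
w, *_ = np.linalg.lstsq(A, d, rcond=None)  # minimum-norm solution
print(np.allclose(A @ w, d))  # consistent system: solved exactly
```

Because the system is underdetermined, many weight vectors fit the data; `lstsq` returns the minimum-length one, which is the solution the CMAC algorithms discussed below track.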
This paper continues the authors’ previous work [2]. The CMAC algorithm described here, CMAC-QRLS, maintains an approximation of the triangular factor of an LQ decomposition of the current matrix, as well as the minimum length solution of the current system. As each equation is presented, the triangular factor is updated and the solution is recomputed. The authors present MATLAB code that simulates the CMAC computation, and they explain how they take advantage of sparsity in the matrix and solution. The CMAC-QRLS algorithm requires half as many operations as the recursive least squares (RLS) algorithm described in the authors’ previous paper [2]. This efficiency gain is confirmed in their MATLAB simulations. According to the authors, the hardware implementation can be realized with a pipelined systolic array, and this could lead to an even greater speed advantage over RLS.
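The general idea of folding one equation at a time into a triangular factor can be sketched with Givens rotations. The following is a generic QR-based recursive least squares step, not the authors' CMAC-QRLS code (which additionally exploits sparsity); all names and sizes are illustrative:

```python
import numpy as np

def qr_update_row(R, z, a, d):
    """Fold one new equation a.w = d into the upper triangular factor R
    and the transformed right-hand side z, using Givens rotations to
    zero out the incoming row one entry at a time.
    """
    n = len(a)
    a = a.copy()
    d = float(d)
    for i in range(n):
        r, x = R[i, i], a[i]
        h = np.hypot(r, x)
        if h == 0.0:
            continue
        c, s = r / h, x / h
        # rotate row i of R against the incoming row to annihilate a[i]
        Ri, ai = R[i, i:].copy(), a[i:].copy()
        R[i, i:] = c * Ri + s * ai
        a[i:] = -s * Ri + c * ai
        z[i], d = c * z[i] + s * d, -s * z[i] + c * d
    return R, z

# Accumulate a few consistent equations, then back-substitute.
rng = np.random.default_rng(0)
n = 4
R, z = np.zeros((n, n)), np.zeros(n)
w_true = rng.standard_normal(n)
for _ in range(8):
    a = rng.standard_normal(n)
    R, z = qr_update_row(R, z, a, a @ w_true)
w = np.linalg.solve(R, z)  # triangular solve recovers the solution
print(np.allclose(w, w_true))
```

Only the triangular factor and the rotated right-hand side are stored between updates, which is what makes this style of update attractive for a pipelined systolic-array implementation.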
Unfortunately, the authors present little theoretical analysis. They state that QRLS is stable, but they do not say whether this is with respect to approximation error or round-off. As for convergence, they refer to another paper [3], which deals with the original Albus algorithm, but they do not explain how those results might apply to QRLS. If they plan to work on stability and convergence results in the future, it might be useful to relate this and other CMAC algorithms to other iterative methods for linear equations, such as conjugate gradient. Work on sparse update methods for orthogonal factorizations [4] might also apply.