Big data problems have exposed the limitations of traditional computer architectures based on the von Neumann computational paradigm, in which data and programs are stored separately from the computational elements and must be transported between memory and cache locations for every computational task. This data movement accounts for 70 to 90 percent of the total energy consumed by a computing unit, and while data is in transit, both the computing elements and the memory remain idle. Sequencing tasks through a stored program requires additional energy and time as well. The authors argue that the overheads of data and instruction fetching, instruction decoding, and the resulting data transport and storage have prompted a shift away from computation-centric and toward data-centric designs, yielding half-way alternatives such as processor-in-memory, memory-in-processor, and in-memory computing/databases.
The authors propose a new architectural concept called computation-in-memory (CIM). A CIM architecture is developed for a specific data-intensive application: storage and computation occupy the same physical location through crossbars of computational elements and data stores, which could be implemented using complementary metal-oxide semiconductor (CMOS) and memristor technology. CIM supports massive parallelism, has zero leakage, and offers significant performance improvement at lower energy consumption and smaller (wafer) area.
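To give a flavor of why crossbars allow storage and computation to coincide (a minimal sketch, not the authors' design): a memristor crossbar can perform an analog matrix-vector multiplication in place, with conductances at the crosspoints storing the matrix, input voltages on the rows encoding the vector, and each column current summing the products by Ohm's and Kirchhoff's laws. The function and values below are hypothetical, chosen only for illustration.

```python
# Illustrative sketch of the in-place matrix-vector product a memristor
# crossbar computes (hypothetical example; not taken from the paper).
# Conductances G (in siemens) stored at the crosspoints form the matrix;
# row voltages V (in volts) form the input vector; each column current is
# I_j = sum_i V_i * G[i][j] (Ohm's law plus Kirchhoff's current law).

def crossbar_mvm(G, V):
    """Column currents of a crossbar with conductance matrix G (rows x cols)
    driven by row voltages V -- every column is computed in parallel in
    hardware, with no data moved to a separate processing unit."""
    cols = len(G[0])
    rows = len(G)
    return [sum(V[i] * G[i][j] for i in range(rows)) for j in range(cols)]

# Example: a 2x3 crossbar
G = [[1e-3, 2e-3, 0.0],
     [5e-4, 0.0, 1e-3]]
V = [0.5, 1.0]
I = crossbar_mvm(G, V)  # column currents in amperes
```

In a physical crossbar all column currents appear simultaneously, which is the source of the massive parallelism the review mentions: the "computation" is a side effect of reading the stored conductances.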
The paper briefly discusses two CIM examples, one from healthcare and one from mathematics, and explains how the potential efficiency gains could be obtained. It does not, however, offer a design procedure for realizing CIM concepts or describe an actual implementation.
This well-written, easy-to-read paper has 95 references and will interest readers looking for innovative ideas in big data and alternative computing architecture paradigms.