Computing Reviews
Neuromorphic computing gets ready for the (really) big time
Monroe D. Communications of the ACM 57(6): 13-15, 2014. Type: Article
Date Reviewed: Aug 7 2014

This article highlights several global initiatives to build biologically inspired computers. The term “neuromorphic” refers both to biologically inspired computing as a whole and, more specifically, to analog circuits that implement neural networks.

The article first highlights two US initiatives, the Defense Advanced Research Projects Agency (DARPA)-funded Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program and the Neurogrid project at Stanford University, before turning to a commercial effort by Qualcomm. The research goals range from implementing a few hundred neurons to billions, and from digital designs to fully analog neuromorphic systems. Beyond the physical implementation, software remains the most critical component for the success of these systems: programming models and algorithms must be rethought to fit the new architectures. Researchers are therefore collaborating on a neural compiler that will let developers move their proven algorithms onto this hardware. On the industry side, the article reports on Qualcomm’s neural core processor, Zeroth, which aims “to bring the sort of intelligence that people usually associate with the cloud down to the handset.”

Two projects from the European Union’s billion-euro Human Brain Project (HBP) are also highlighted. The first is SpiNNaker (Spiking Neural Network Architecture) from the University of Manchester, a completely digital design that connects ARM cores in a massively parallel network, with the ultimate aim of linking a million cores for brain research. The second, a neuromorphic project at the University of Heidelberg, uses mixed-signal neurons “to reduce the need to drive off-chip interconnections.”
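To give a sense of what architectures like SpiNNaker simulate at scale, the sketch below models a single leaky integrate-and-fire (LIF) neuron, the simplest common spiking neuron model. This is purely illustrative: the function name and all parameter values are assumptions for the example and are not taken from any of the projects the article describes.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking neuron.
# Parameter values (tau, threshold, input current) are illustrative
# assumptions, not figures from SpiNNaker or any project in the article.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # while being driven by the input current.
        v += (dt / tau) * (v_rest - v + i_in)
        if v >= v_threshold:      # threshold crossing -> emit a spike
            spikes.append(step)
            v = v_reset           # reset the membrane after firing
        trace.append(v)
    return trace, spikes

# A constant suprathreshold input drives regular, periodic spiking.
trace, spikes = simulate_lif([1.5] * 500)
print(len(spikes))
```

Event-driven hardware like SpiNNaker exploits the fact that such neurons communicate only via discrete spikes, so only threshold crossings need to be routed between cores.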

Can these projects achieve more success than their predecessors, “the artificial neural networks of a quarter century ago”? The article both raises the question and offers an indirect answer.

Reviewer:  Mohammed Ziaur Rahman Review #: CR142597 (1411-0990)
Learning (I.2.6)
Parallel Architectures (C.1.4)
Other reviews under "Learning": Date
Learning in parallel networks: simulating learning in a probabilistic system
Hinton G. (ed) BYTE 10(4): 265-273, 1985. Type: Article
Nov 1 1985
Macro-operators: a weak method for learning
Korf R. Artificial Intelligence 26(1): 35-77, 1985. Type: Article
Feb 1 1986
Inferring (mal) rules from pupils’ protocols
Sleeman D. Progress in artificial intelligence (Orsay, France), 1985. Type: Proceedings
Dec 1 1985
