Computing Reviews
From neuron to cognition via computational neuroscience
Arbib M., Bonaiuto J., The MIT Press, 2016. 808 pp. Type: Book (978-0-262-03496-8)
Date Reviewed: Nov 15 2017

An old aphorism says that, if our brains were simple enough to be understood, we wouldn’t be smart enough to understand them. Fortunately, many scientists are not so pessimistic, and they are unrelentingly trying to decipher the brain. As 3D bodies, we find it much easier to comprehend the properties of lines and surfaces, which we look at “from the outside,” than the properties of the 3D space we live in [1]. We can easily understand what a curved line or a curved surface is. With a little more effort, we can also grasp the notion of a curved 3D space or even a 4D space: space-time. Understanding the human brain is a much more challenging endeavor, and much work still remains to be done.

From neuron to cognition via computational neuroscience is an edited collection of chapters that survey insightful ideas, proposed models, experimental results, and connections among a range of specialties within computational neuroscience. Researchers in computational intelligence build models, loosely inspired by the human brain, to create artificial neural networks that solve engineering problems such as speech recognition or machine translation, nowadays under the banner of deep learning [2]. Computational neuroscientists, on the other hand, build models of the functions of one or more interconnected regions of the actual human brain. Their main challenge is dealing with its seemingly insurmountable complexity and many unknowns, including unknown unknowns yet to be discovered. Hence, their humbler approach: choosing models simple enough to allow the best possible insights, yet not missing the crucial parameters that lead to the behavior they wish to explain.

Computational neuroscientists have devised a multitude of models, both top-down and bottom-up, to uncover many different facets of the brain. The book patiently and thoroughly dissects them in 25 chapters written by 43 different contributors. The editors warn us: “the maps are not the territory.” We should not miss the forest for the trees. The answers to big questions could be built upon the models and insights obtained by the specialized studies that are described in this two-column, 800-page volume.

The writing style is academic, as you might expect. Some chapters are a delight to read, fast-paced and enlightening, whereas others are denser and somewhat more tedious, yet always informative. For instance, you will learn that dopamine controls the magnitude of the reward prediction error, serotonin controls the amount of reward discounting (how much a reward’s value decreases when it is expected to arrive late), noradrenaline controls the randomness of the policy (that is, the function mapping from the current state to the next action), and acetylcholine controls the overall learning rate at which synapses are modified. That is how the brain seems to implement a well-known machine learning technique: trial-and-error learning, better known as reinforcement learning.
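
To make that mapping concrete, here is a minimal temporal-difference (Q-learning) sketch in Python. The correspondence between hyperparameters and neuromodulators follows the review's summary; the function names and numeric values are illustrative assumptions, not anything taken from the book.

```python
import math
import random

# Illustrative mapping, following the review's summary:
#   alpha (learning rate)      ~ acetylcholine
#   gamma (reward discounting) ~ serotonin
#   tau (policy randomness)    ~ noradrenaline
#   delta (prediction error)   ~ signaled by dopamine

def softmax_action(q_values, tau):
    """Pick an action; the temperature tau controls policy randomness."""
    weights = [math.exp(q / tau) for q in q_values]
    r = random.random() * sum(weights)
    for action, w in enumerate(weights):
        r -= w
        if r <= 0:
            return action
    return len(weights) - 1

def td_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One trial-and-error step: nudge Q toward the discounted target."""
    delta = reward + gamma * max(q[next_state]) - q[state][action]
    q[state][action] += alpha * delta  # synaptic change scaled by alpha
    return delta
```

The same three knobs (learning rate, discount factor, exploration temperature) appear in virtually every reinforcement learning implementation, which is what makes the neuromodulator analogy so appealing.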

A few of the book chapters cover enabling techniques in computational neuroscience; most focus on specific behavioral traits and brain regions.

You will learn how individual neurons can be modeled (from McCulloch–Pitts to Hodgkin–Huxley), how dynamical systems can help us characterize transitions between states in neural models (for example, when we perceive ambiguous visual scenes with different interpretations), how schema theory follows a divide-and-conquer strategy to provide a top-down approach to computational neuroscience, and how synthetic brain imaging combines low-level animal data with human brain imaging data, given that most neuron-level methods cannot be applied in living humans.
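
As a taste of the simplest of those neuron models, a McCulloch–Pitts unit fits in a few lines of Python. The AND-gate example below is a standard textbook illustration, not code from the book.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (output 1) if and only if the weighted
    sum of binary inputs reaches the threshold; otherwise stay silent (0)."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# A single unit computing logical AND: both inputs must be active.
and_gate = lambda x1, x2: mcculloch_pitts([x1, x2], [1, 1], threshold=2)
```

The Hodgkin–Huxley model sits at the opposite end of the spectrum: a system of coupled differential equations over ion-channel conductances, far beyond a one-liner, which is precisely the modeling trade-off the book keeps returning to.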

Some chapters delve into the action-perception cycle and action-oriented perception, that is, the interaction of brain, body, and environment. Apart from the study of neural rhythms, observed in electroencephalogram (EEG) recordings since 1924, which reminded me of the device models used by Simulation Program with Integrated Circuit Emphasis (SPICE) simulators, you will also find a specific chapter on motor pattern generation. The chapter on motor control will remind those with a human-computer interaction (HCI) background of Fitts’ law, whereas the chapter on neurorobotics will certainly appeal to hardware specialists.
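
For readers who want the HCI connection spelled out, Fitts' law predicts the time needed to acquire a target from its distance and width. The Shannon formulation below is the standard one; the constants in the example are made-up illustrative values, since in practice they are fitted per device and user.

```python
import math

def fitts_time(a, b, distance, width):
    """Fitts' law (Shannon formulation): predicted movement time to acquire
    a target. a and b are device-specific constants fitted experimentally."""
    index_of_difficulty = math.log2(distance / width + 1.0)  # in bits
    return a + b * index_of_difficulty

# Illustrative constants only: closer or wider targets are faster to hit.
# fitts_time(0.1, 0.2, distance=7.0, width=1.0) predicts about 0.7 seconds.
```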

For those fond of artificial intelligence (AI), many chapters discuss interesting ideas, from the aforementioned reinforcement learning to Hebbian learning, plasticity, memory, and vision. Hopfield nets, Kohonen’s self-organized maps (SOMs), simple recurrent networks (SRNs), reservoir computing, and convolutional neural networks (CNNs) might already be familiar to them. They will discover how neuroscientists use these models and how they resort to many others, some of which might spark new ideas for solving AI problems.
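
A toy Hopfield network shows how Hebbian learning ("neurons that fire together wire together") stores patterns as attractors that can be recalled from corrupted cues. This sketch is a generic textbook construction, not code from the book; patterns use +/-1 units.

```python
def hebbian_store(patterns):
    """Build a Hopfield weight matrix by the Hebb rule: each stored
    pattern strengthens connections between co-active units."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def hopfield_recall(w, state, steps=5):
    """Repeatedly update all units; the state falls into a stored attractor."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s
```

Starting from a pattern with one flipped unit, the recall dynamics typically restore the stored pattern, which is the associative-memory behavior that made Hopfield nets a bridge between physics, neuroscience, and AI.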

Some chapters are focused on specific brain regions: the architecture of the visual cortex, the hippocampus, the corticostriatal system, and the cerebellum, which contains 80 percent of the neurons in just 10 percent of the brain’s volume. Yet another chapter addresses brain diseases from a computational neuroscience perspective, namely, epilepsy, Parkinson’s, strokes, and schizophrenia.

The final two chapters explore the increasingly brain-constrained models of language and cognitive processing, a relevant topic for natural language processing (NLP). NLP practitioners will learn about construction grammars, which blur the division between lexicon and grammar. Someday, they might be useful for solving the connectionist variable binding problem, that is, the neural representation of structured and relational data.

The book contains a wealth of information, enough to whet the appetite of anyone interested in the operation of the brain and human intelligence. And it will keep them busy for months or even years, just by tracing the hundreds of bibliographic references that are provided at the end of each chapter. As David Eagleman says, “minds and biology are connected, but not in a manner that we’ll have any hope of understanding with a purely reductionist approach ... The extreme reductionist idea that we are no more than the cells of which we are composed is a nonstarter for anyone trying to understand human behavior.” Even so, building these computational models is a necessary first step toward the ultimate goal of computational neuroscience, a complete understanding of the brain, just as the Human Genome Project set the stage for further progress in molecular biology.

Reviewer:  Fernando Berzal Review #: CR145658 (1802-0054)
1) Gamow, G. One, two, three... infinity: facts and speculations of science. Viking Press, New York, NY, 1947.
2) Goodfellow, I.; Bengio, Y.; Courville, A. Deep learning. MIT Press, Cambridge, MA, 2016.
Applications And Expert Systems (I.2.1 )
Reference (A.2 )
