Computing Reviews

Music, Cognition, and Computerized Sound
Cook P., MIT Press, Cambridge, MA, 1999. Type: Book
Date Reviewed: 07/01/99

For decades, undergraduate music majors in most colleges and universities have had either the option or the requirement to take a course in the physics of sound, usually taught in the physics department. Stanford University has been a leader in expanding this traditional course into a more comprehensive study, adding faculty from psychology, biology, and computer science. The authors' experience in this interdisciplinary effort has now produced an excellent textbook that can form the basis for similar undergraduate or graduate courses at other institutions, assuming they can bring together the necessary expertise from faculty in different disciplines.

Seven authors, each an outstanding scholar in his field, have contributed 23 chapters covering a wide range of material. Max Mathews, whose pioneering work in computer music at Bell Laboratories in the 1950s is widely known, contributes chapters on “The Ear and How It Works”; “The Auditory Brain”; “What Is Loudness?”; and “Introduction to Timbre.” Roger Shepard, of the Stanford psychology department, writes on “Cognitive Psychology and Music”; “Stream Segregation and Ambiguity in Music”; “Pitch Perception and Measurement”; and “Tonal Structure and Scales.” John Pierce, whose many achievements as a Bell Labs engineer are well known (he also coined the word “transistor”), writes on “Sound Waves and Sine Waves”; “Hearing in Time and Space”; “Consonance and Scales”; “Passive Nonlinearities in Acoustics”; and “Storage and Reproduction of Music.”

Cook, the editor of the volume, now holds a joint appointment in the computer science and music departments at Princeton, after several years of service and leadership in the Stanford program. His chapters cover “Voice Physics and Neurology”; “Formant Peaks and Spectral Valleys”; “Articulation in Speech and Sound”; and “Pitch, Periodicity, and Noise in the Voice.” Brent Gillespie, a postdoctoral fellow at the Laboratory for Intelligent Mechanisms at Northwestern University, contributes “Haptics” and “Haptics in Manipulation.” The subject of haptics covers both taction (touch) and the sense of body position and motion (kinesthesia). Daniel J. Levitin, a cognitive psychologist at Interval Research Corporation and a visiting scholar in the Stanford program, contributes “Memory for Musical Attributes” and “Experimental Design in Psychoacoustic Research.” Finally, John Chowning, the founding director of the Stanford program, inventor of FM sound synthesis, and well-known composer of computer music, contributes “Perceptual Fusion and Auditory Perspective.” The book concludes with two useful appendices: “Suggested Lab Exercises” and “Questions and Thought Problems.”

A valuable and informative compact disc accompanies the printed book. It presents some 80 tracks that serve as aural illustrations of terms and concepts discussed by the authors. Following the audio examples, the disc includes a data segment containing ANSI C code for generating many of the sound examples, along with a number of MIDI level 0 files for performing experiments or demonstrations in real time.
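To give a sense of what that data segment looks like, the following is a minimal ANSI C sketch in the same spirit; it is not taken from the CD, and the sampling rate, pitch, amplitude, and output file name are assumptions made for this illustration. It writes one second of a 440 Hz sine tone as headerless 16-bit PCM.

/* sinegen.c -- a minimal sketch, not code from the CD: one second of a
   440 Hz sine tone written as headerless 16-bit signed PCM.  The sampling
   rate, pitch, amplitude, and output file name are assumptions chosen for
   the illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI          3.14159265358979323846
#define SAMPLE_RATE 44100   /* samples per second (CD quality) */
#define FREQUENCY   440.0   /* concert A, in Hz */
#define DURATION    1.0     /* length of the tone, in seconds */
#define AMPLITUDE   0.8     /* fraction of full scale, to avoid clipping */

int main(void)
{
    FILE *out;
    long nsamples, i;

    out = fopen("sine440.raw", "wb");
    if (out == NULL) {
        perror("sine440.raw");
        return EXIT_FAILURE;
    }
    nsamples = (long)(SAMPLE_RATE * DURATION);
    for (i = 0; i < nsamples; i++) {
        double t = (double)i / SAMPLE_RATE;                /* time in seconds */
        double x = AMPLITUDE * sin(2.0 * PI * FREQUENCY * t);
        short s = (short)(x * 32767.0);                    /* scale to 16 bits */
        fwrite(&s, sizeof s, 1, out);
    }
    fclose(out);
    return EXIT_SUCCESS;
}

The resulting file has no header, so it can be auditioned with any tool that accepts raw 16-bit, 44.1 kHz mono PCM, such as a raw-data import in an audio editor.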

The writing is clear and concise throughout. The presentation is appropriate to the mathematical background of the typical music major, undergraduate or graduate: the book contains only a handful of algebraic equations. That is not to say that the concepts are simplified; much of the material is intellectually demanding, and most students will welcome the clarification and expansion provided by a good lecturer. Many music departments will be inspired by this book to expand traditional physics of sound courses into the broader coverage envisioned here.

Reviewer: Harry B. Lincoln
Review #: CR124813 (9907-0508)
