Paralinguistics is concerned with almost everything a 20th century theoretical linguist would NOT have been interested in (and, I suspect, would in fact have perceived as a nuisance, a distraction from her or his own basic research goals). When it comes to speech, no two people speak the same language alike, yet they understand each other perfectly. The answer to this apparent paradox is deceptively simple: mutual understanding rests on invariant aspects of language performance. From this perspective, personal emphasis, speed, pitch, cadence, and the like are, according to received wisdom, “nonlinguistic” facts, superadded useless signal complications that are nonetheless inseparable from speech.
Paralinguistics can, as it were, separate all these complications from the “ideal” linguistic signal, handing over “pure” language data for analysis. Paralinguistics is then “prelinguistic,” acting as a filter that delivers cleaner acoustic data for processing and boosting one of the most successful language-based technologies: automatic speech recognition.
If this were all paralinguistics is about, however, this book would cover a comparatively boring knowledge domain. In fact, noise is not simply in the message; it is part of the message. Upon hearing a speaker’s voice, we get a lot of information about her sex, age, bodily features, cultural level, social status, personality, and even ethnicity. The meaning of the utterance “Nice weather, isn’t it?”, pronounced while a storm is raging, lies in the ironic undertones modulated onto it. Real communication would fail in many cases if interlocutors could only exchange literal meanings. To get at what someone really thinks, one often has to go beyond what she is saying and look at how she is conveying her verbal message. Superadded complications do make content.
All this has put paralinguistics center stage in the contemporary study of human communication. The secret of its recent success story, however, would hardly be understandable if we ignored the adjective “computational” in Schuller and Batliner’s book title. Given the mostly empirical, data-driven nature of this approach to language, paralinguistics requires powerful hardware and software technologies that can deal with huge databases of acoustic and written text material. The advent of such technologies has made an enormous impact on scientific and technological progress, paving the way to successful, marketable applications. Suppose you have a software kit installed on your tablet that can assess the perceived charisma of a political speech on the basis of the machine processing of a few acoustic and lexical parameters. You would then be in a position to advise a politician to consciously control those parameters so as to improve her or his charisma rating, or to warn voters to protect themselves against deceptive persuasion techniques. An adaptation of the same technology could be used to tap the general public’s attitude toward a commercial product, or to assess the risk of a predisposition to schizophrenia.
Over the past few decades, computational approaches to language study have changed the way we conceive of verbal communication as an object of scientific inquiry. The 20th century focus on language competence as a combinatorial system of meaningful building blocks, whose formal properties are studied independently of their use in real communication contexts, has been giving way to the mounting awareness that language is about conveying information and that little can be understood about it when, to use Wittgenstein’s phrase, language goes on holiday. This timely, invaluably informative, and highly technical book reminds us that there is a huge amount of nonverbal information conveyed by language that is still waiting to be mined. This information is now within our grasp.
Finally, a cautionary note. The impact of information technologies on language inquiry has spawned a myriad of methodological approaches and successful “intelligent” applications, ranging from speech recognition and machine translation to information retrieval and sentiment analysis. However, it has also stirred a centrifugal drift toward decomposing language complexity into a fragmentary constellation of small subproblems that are easier to tackle and implement in isolation. There is a neo-behavioristic risk in this trend: the mistaken underlying assumption that a full understanding of language communication will eventually be attained through the integration of ad hoc software solutions and bottom-up data mining. This book warns us against this risk. Only if we are able to tackle language in its full complexity, by putting it into the wider cognitive context of human communication and by mounting a huge interdisciplinary effort whereby each specialist takes advantage of the insights of scientists from both neighboring and apparently remote disciplines (for example, electrical and electronic engineering and computer science, human physiology, neurophysiology, genetics, psychology, anthropology, cognitive science and, last but not least, linguistics), can we hope to arrive at a full understanding of the roots of our being “speaking animals.”