The original French edition of this work was published in 1986. The title is unusual for a book on information theory, and its cryptic quality is symptomatic of the problems that will make the contents obscure for most readers. Oswald believes that many roadblocks to our understanding of information theory have been brought on by the linguistic imprecision that he feels has burdened the subject over its 40-year history. To rectify this problem, he has renamed most of the well-known standard items: entropy becomes diacritical density, information bits become Shannons (in natural units, logons), and raw channel errors become faults, to cite a few. This attempt at precision generally backfires, forcing the reader to make frequent visits to the glossary to translate this unconventional language.
Many important topics are not covered in this book. These include the Blahut-Arimoto algorithm for computing channel capacity, the Lempel-Ziv algorithm for source coding, and the asymptotic equipartition property, which is one of the true foundation stones of information theory. Although the book is intended for engineers, little here will be of use to the practitioner. The section on convolutional codes, for example, is misleading and obscures the advantages that convolutional codes have over block codes.
The main publishing event in information theory in 1991 was the appearance of the book by Cover and Thomas [1], which treats the subject with greater depth, breadth, and clarity than Oswald’s book. That is the book that I would recommend for serious students of this subject at any level.