Computing Reviews
Willful ignorance: the mismeasure of uncertainty
Weisberg H., Wiley Publishing, Hoboken, NJ, 2014. 452 pp. Type: Book (978-0-470-89044-8)
Date Reviewed: Mar 24 2015

No empirical assertion is absolutely certain, and scientific progress has always required assessing the degree of uncertainty associated with a claimed result. To most researchers trained in the last 50 years, this assessment takes the form of attaining a significance level of p < 0.05, where p is the probability, assuming the null hypothesis is true, of observing data at least as extreme as those actually obtained. The mathematics that motivates this standard has become second nature to every trained scientist, and it is widely assumed that the certainty of a result is properly measured as, and completely captured by, its statistical significance.
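As a concrete illustration of this convention (my own example, not drawn from the book), the following Python sketch computes an exact two-sided p-value for a hypothetical experiment: 60 heads in 100 flips of a coin presumed fair.

    from math import comb

    # Two-sided binomial test by direct enumeration: is a coin fair,
    # given 60 heads in 100 flips? Null hypothesis: P(heads) = 0.5.
    n, k = 100, 60
    p_null = 0.5

    # Probability of exactly i heads under the null hypothesis.
    def pmf(i):
        return comb(n, i) * p_null**i * (1 - p_null)**(n - i)

    # p-value: total probability of outcomes at least as far from the
    # expected 50 heads as the observed 60, in either direction.
    p_value = sum(pmf(i) for i in range(n + 1)
                  if abs(i - n * p_null) >= abs(k - n * p_null))

    print(f"p = {p_value:.4f}")  # ~0.0569, just missing the p < 0.05 bar

By the conventional standard this result would be reported as "not significant"; note how much framing (a fair coin, a two-sided test, this particular run of 100 flips as the reference) is settled before the number is ever computed.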

Weisberg, a credentialed (Harvard PhD) and practicing (http://www.correlation.com) statistician, challenges this characterization by calling attention to two very different components of uncertainty and tracing the history that has led to the widespread neglect of one of them in modern scientific thought.

Weisberg’s central insight is that uncertainty has two components: doubt and ambiguity. Doubt measures the degree of belief that I have in a proposition. Ambiguity has to do with my understanding of the proposition. Doubt is generated by influences in the world that are either essentially stochastic (as in quantum theory), chaotic (such as an iterated nonlinear process), or unobservable (as in the cards held by another player in poker). Ambiguity comes from lack of clarity about the characterization of a situation, the appropriate categories used to describe it, and their relation to one another (such as the rules of the card game being played, or the nonlinear function being iterated). Doubt lends itself to quantification (and in fact statistical Bayesians assert that a probability is nothing more than the uniquely appropriate quantification of degree of belief). Ambiguity is essentially qualitative.
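To make the quantification of doubt concrete, here is a minimal sketch (my own illustration, not Weisberg's) of a Bayesian update over a hand-picked set of hypotheses about a coin's bias. The resulting numbers are doubt, quantified; the prior decision about which hypotheses to entertain at all is ambiguity, settled qualitatively before the arithmetic begins.

    # Bayesian quantification of doubt: belief about a coin's bias,
    # updated after observing evidence. The hypothesis space itself
    # (three candidate biases, equally weighted) must be framed by hand
    # first; that framing is the ambiguity the calculus cannot touch.
    hypotheses = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}  # bias -> prior belief

    def update(beliefs, heads, flips):
        """Posterior over hypotheses after observing `heads` in `flips`."""
        likelihood = {h: h**heads * (1 - h)**(flips - heads) for h in beliefs}
        evidence = sum(likelihood[h] * p for h, p in beliefs.items())
        return {h: likelihood[h] * p / evidence for h, p in beliefs.items()}

    posterior = update(hypotheses, heads=7, flips=10)
    for h, p in posterior.items():
        print(f"P(bias = {h}) = {p:.3f}")  # degrees of belief in [0, 1]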

This book’s survey of the history of statistics shows that the scholarly world has moved from an almost complete emphasis on ambiguity to a neglect of ambiguity and a concentration on doubt. Concurrent with this shift, and arguably because of it, a rift has developed between researchers (focused on mathematical computations that model the world as an idealized lottery) and practitioners (who are forced to engage individual differences and ambiguity at every turn). The book’s title refers to probability’s deliberate neglect of individual distinctions in defining a reference population, and to the temptation that neglect introduces to overlook issues of ambiguity in framing problems. A responsible account of uncertainty must embrace both components.

Historically, before the Pascal-Fermat correspondence in the 1650s, the concept of “probability” was almost entirely qualitative, referring to the strength of logical support for a statement, or in other words (as the Latin etymology of the word suggests) its “prove-ability.” This older sense of probability rested on multiple inputs, including the opinions of experts, testimony of witnesses, and the strength and elegance of a rhetorical argument. While one statement might be viewed as more probable than another, no one ever thought of probability as a number, let alone a number between zero and one satisfying a small set of mathematical axioms. After posing this problem in chapter 1 and surveying the entire book in chapter 2, Weisberg offers in chapters 3 through 7 a carefully documented history of how this older sense of “probability” (which he sometimes distinguishes with a Gothic font) morphed into the modern quantitative sense of the word. Chapters 8 to 11 synthesize and discuss the results of the history, showing that even into the 20th century there was considerable disagreement as to just what these modern numbers really mean. The culmination of this discussion engages the recent reversals in a large number of supposedly secure scientific results [1], and shows how this instability is a direct result of over-reliance on statistics to the neglect of ambiguity. Chapter 12 urges the reader to pay attention to the qualitative issues of ambiguity when exploiting the powerful quantitative tools of modern probability theory, and offers hope of a rapprochement between practitioners and researchers. An appendix summarizes the correspondence between Pascal and Fermat in 1654, to which Weisberg traces the birth of the modern sense of probability.

This volume is an outstanding demonstration of the need to keep our scientific methods in context, and of the value of careful historical research in providing that context. It should be a required part of the statistical training of every scientist.


Reviewer: H. Van Dyke Parunak. Review #: CR143267 (1506-0451)
1) Ioannidis, J. P. A. Why most published research findings are false. PLOS Medicine 2 (2005), 696–701.
Probability And Statistics (G.3)
Life And Medical Sciences (J.3)
Physical Sciences And Engineering (J.2)
Social And Behavioral Sciences (J.4)
Other reviews under "Probability And Statistics":

Probabilities from fuzzy observations. Yager R. (ed), Information Sciences 32(1): 1-31, 1984. Type: Article. Reviewed: Mar 1 1985
Randomness conservation inequalities; information and independence in mathematical theories. Levin L., Information and Control 61(1): 15-37, 1984. Type: Article. Reviewed: Sep 1 1985
The valuing of management information. Part I: the Bayesian approach. Carter M., Journal of Information Science 10(1): 1-9, 1985. Type: Article. Reviewed: May 1 1986
