Computing Reviews
Programming productivity
Jones C., MCGRAW-HILL, INC., New York, NY, 1986. 280 pp. Type: Book (9780070328112)
Date Reviewed: Oct 1 1986

Capers Jones has been one of the most provocative students of programming productivity, and in this book he summarizes the conclusions reached during his 15 years of studying and measuring software development. He first critiques the current practice of software measurement and then discusses the factors that he believes exert the greatest leverage on productivity and quality. His book is the most important in this area since the publication of Boehm’s Software engineering economics [1].

Jones’s book differs from others with a similar title by not presenting a litany of standard software engineering practices that, if applied, are “guaranteed” to boost productivity and quality. Rather, Jones dissects the primary productivity and quality factors (many of which are not practices) to show how they affect software development. Jones discusses practices only if he has seen them exert leverage on project outcomes.

The first two chapters argue that progress in software engineering and management requires a scientific foundation in measurement. The first chapter elaborates Jones’s classic description of the paradoxes created by the most popular productivity and quality measures [2]. An excellent example of his analysis is that the “lines-of-code measures penalize high level languages and often move in the wrong direction as productivity improves” (p.5).

Computing productivity as lines of code per person-year is sensitive only to factors that help programmers write lines of code at a faster rate. Higher-level languages, however, improve productivity by reducing the number of lines to be written. Software development includes many overhead activities, such as writing documentation and preparing test plans, that are not affected by higher-level languages but are included in total project effort. If the rate at which lines can be written in a high-level language is identical to that in a lower-level language, the higher-level language can appear less productive. This paradox occurs because the time spent on non-coding project functions (a constant across languages) inflates the divisor (person-years) relative to the numerator (lines of code) faster for higher-level languages.
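The arithmetic behind this paradox can be made concrete with a toy calculation. All figures below are hypothetical (a 4:1 expansion ratio, identical coding rates, and a fixed two person-years of non-coding overhead), chosen only to illustrate the effect:

```python
# Toy illustration of the lines-of-code paradox. Assumptions (hypothetical):
# both languages are coded at the same rate, and non-coding overhead
# (documentation, test plans, management) is a fixed cost per project.

def loc_per_person_year(loc, coding_rate_loc_per_year, overhead_person_years):
    """Apparent productivity in LOC per person-year, overhead included."""
    coding_effort = loc / coding_rate_loc_per_year
    total_effort = coding_effort + overhead_person_years
    return loc / total_effort

# Same functionality: 10,000 lines of a low-level language vs. 2,500 lines
# of a high-level language (a hypothetical 4:1 expansion ratio).
low_level = loc_per_person_year(10_000, 5_000, 2.0)   # 10000 / (2.0 + 2.0)
high_level = loc_per_person_year(2_500, 5_000, 2.0)   # 2500 / (0.5 + 2.0)

# The high-level project takes less total effort (2.5 vs. 4.0 person-years)
# yet scores lower on LOC per person-year: 1000 vs. 2500.
```

The fixed overhead dominates the high-level project's total effort, so the metric moves in the wrong direction even though the project is cheaper.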

Jones presents a similar argument for why “cost-per-defect measures penalize high quality programs and always move in the wrong direction as quality is improved” (p. 5). He criticizes much of the available productivity and quality data because of serious measurement problems, such as inconsistent definitions of what constitutes a line of code, the range of activities that may or may not be included in project effort, and the failure to distinguish between economic and coding productivity. This chapter should be required reading for any manager, engineer, or scientist who uses programming measurements. Jones’s arguments are provocative and will frustrate those who want to do simple arithmetic on easily obtained but poorly conceived numbers. The underlying theme is that while programming measurements are crucial, they are readily misinterpreted unless the limitations of current metrics are understood.
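The cost-per-defect paradox follows the same arithmetic. A hedged sketch, with hypothetical figures, assuming defect removal carries a fixed cost (writing and running test cases) plus a small variable cost per defect repaired:

```python
# Toy illustration of the cost-per-defect paradox. The dollar figures are
# hypothetical: testing has a fixed cost regardless of how many defects
# exist, plus a variable cost for each defect found and fixed.

def cost_per_defect(fixed_test_cost, variable_cost_per_defect, defects_found):
    """Apparent cost per defect removed, fixed costs included."""
    total_cost = fixed_test_cost + variable_cost_per_defect * defects_found
    return total_cost / defects_found

low_quality  = cost_per_defect(50_000, 100, 500)  # 100,000 / 500 = 200
high_quality = cost_per_defect(50_000, 100, 50)   #  55,000 /  50 = 1100

# The high-quality program costs less to test in total ($55,000 vs.
# $100,000), yet its cost per defect is more than five times higher,
# because the fixed cost is spread over far fewer defects.
```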

The second chapter focuses on how a science of measuring software is evolving. Jones provides an excellent review of how lines-of-code measures grew with advances in programming languages. He develops a creative category scheme to describe how languages are evolving from those that are strictly textual toward those that include a significant graphics component. These categories are not points along a single conceptual dimension. Rather, they demonstrate that lines-of-code measures become useless with many of the interactive graphics languages currently under development. Languages for representing programs and interacting with computers require multidimensional measurements. Jones argues that the study of programming measurement should gravitate toward semiotics (which he calls “conceptual symbology”).

Several software metrics that provide alternatives to lines of code are reviewed. In particular, Jones makes a strong case for function points for applications to which their current definition is appropriate. He ends his critique of programming measurement with a discussion of how economic productivity should be factored into studies of programming. Economic measures focus on crucial issues underlying tough management decisions. Nevertheless, as is true with most critiques of software measurement, Jones does not present the foundation for a new measurement discipline. Rather, he challenges the field of software metrics to expand its definition of what is measurable.
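For readers unfamiliar with function points, a minimal sketch of Albrecht-style unadjusted function-point counting may help. The weights below are the standard average-complexity weights; the sample counts are hypothetical:

```python
# Minimal sketch of unadjusted function-point (UFP) counting in the
# Albrecht style. The five function types and their average-complexity
# weights are standard; the sample application counts are hypothetical.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    """Weighted sum of function-type counts."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

sample_application = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 6,
    "external_interface_files": 4,
}
ufp = unadjusted_function_points(sample_application)
# 20*4 + 15*5 + 10*4 + 6*10 + 4*7 = 283 unadjusted function points
```

Because function points count delivered functionality rather than written lines, the same application scores the same regardless of implementation language, which is exactly the property the lines-of-code paradox lacks.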

Astute software metricians will notice the irony that Jones castigates current software measurement practices, but liberally uses such numbers throughout the book in describing software phenomena. Jones does not present data in a way to which research scientists are accustomed. That is, he does not present distributional statistics, but he often presents his numbers as absolutes. Although he warns readers about the tentativeness of some of his numbers (such as expansion ratios for recently developed languages), their use may bother the more quantitatively critical reader.

Nevertheless, the quality of the software measurements data available across companies and application areas is so low (lack of common definition, validation, etc.) that distributions based on these data are almost meaningless. Jones is emphasizing certain points using numbers that seem reasonable from his experience. Quibbling with his numbers and percentages is inviting, but it causes the reader to miss the forest for the trees. His purpose in presenting numbers is to represent trends, and his goal is to sensitize readers to the forces that, often invisibly, control project outcomes. He confronts readers with results that current measures often sweep under the carpet. Jones does not invent measures to resolve the problems he raises, but he has provided one of the clearest explanations of what these problems are.

The third chapter takes up over half of the book. This chapter presents the central analysis of programming productivity, where Jones discusses what he considers to be the 20 most important factors affecting software development. Among these factors are programmer experience, reusability, multisite decomposition, documentation, defect removal techniques, and code size. He selected these factors from studying quantitative data from projects in many environments. However, the historical data from which these factors were drawn are not presented. Rather, each factor is discussed in some detail, and a quantitative example is developed which shows how the variable affects project size, schedules, development and maintenance effort, and costs. There are some jewels in this chapter; for example, there is an exhaustive list of support software tools which includes many (like bidding, communications, and reference tools) that are too rarely discussed as integral parts of a programming environment.

Each quantitative example used to articulate the effect of a programming factor consists of several projects, each exhibiting a different level of the factor under consideration. Jones describes his assumptions about the projects through three sets of variables: project and personnel, environment, and data and code structure. The numbers in these examples were generated with a project estimating program that Jones has developed called SPQR. The algorithms used in SPQR are proprietary, but the inputs are listed in an appendix. Unlike many presentations of productivity data, these examples indicate the specific phases where the factor has its impact. Unfortunately, the first printing contains typographical errors in some of the many columns of numbers used in the examples. These errors have purportedly been corrected in the second printing.

The tough challenge which Jones faces in these examples is how to tease apart the separate contributions of these 20 factors. Almost no publicly available programming datasets are sufficiently accurate or complete to support this analysis statistically at his level of detail. The examples have the appearance of controlled experiments whose conduct is beyond any current technique but simulation. Jones bases his analyses on his decade and a half of studying programming measurements data, most notably in IBM and ITT. Thus, his assessments are drawn from studying projects in many different environments, rather than from a single environment, as was typical of the classical programming productivity studies.

Serious readers should study the examples carefully and determine how well the assumptions fit their own environment before interpreting the data presented. Jones uses these examples as vehicles for describing how different factors affect projects, and readers should not miss the conceptual points because they become engrossed in the numbers. Readers who wish to cover the chapter more quickly can do so by skimming over the examples. However, they will miss an opportunity to see how these various project factors exert their effects in the project data.

The final chapter discusses 25 factors that Jones believes are important, but whose impact he is unable to quantify. These are factors such as attrition, staff training, office facilities, and program restructuring. After briefly describing each of these factors, the book ends with a short Conclusion. Jones does not offer a prescription for implementing a productivity improvement program, although he does describe what factors seem to be important for corporations of different sizes. He assesses potential impacts and leaves technology planning to the reader.

Jones presents the initial choices from which a productivity improvement program can be built. The value of this book is in enabling professionals to understand the limits of software measurement, and in using measurements to explore how productivity and quality factors exert their influence. The book will be extremely beneficial to project managers and software professionals interested in productivity at the project and organizational level. Specialists in software measurement should be required to read the first two chapters, although they must relax their methodological guard in order to receive the real messages behind Jones’s productivity analyses. I recommend the book as a tour de force into what really affects software development projects.

Reviewer: Bill Curtis
Review #: CR110272
1) Boehm, B. W. Software engineering economics. Prentice-Hall, Englewood Cliffs, NJ, 1981. See CR 23, 7 (July 1982), Rev. 39,485.
2) Jones, T. C. Measuring programming quality and productivity. IBM Syst. J. 17 (1978), 39–63.
Categories: Productivity (D.2.9); Life Cycle (K.6.1); Software Development (K.6.3); Statistics (K.1); Design Tools and Techniques (D.2.2); General (D.3.0)
Other reviews under "Productivity":

Productivity sand traps and tar pits. Walsh M., Dorset House Publ. Co., Inc., New York, NY, 1991. Type: Book (9780932633217). Reviewed Jun 1 1992.
Effects of individual characteristics, organizational factors and task characteristics on computer programmer productivity and job satisfaction. Cheney P. Information and Management 7(4): 209-214, 1984. Type: Article. Reviewed Jan 1 1986.
Managing programming productivity. Jeffery D. (ed), Lawrence M. Journal of Systems and Software 5(1): 49-58, 1985. Type: Article. Reviewed Sep 1 1985.