Computing Reviews
A practical guide to controlled experiments of software engineering tools with human participants
Ko A., LaToza T., Burnett M. Empirical Software Engineering 20(1): 110-141, 2015. Type: Article
Date Reviewed: May 21 2015

Software engineering (SE) is, by its nature, empirical and interdisciplinary. A brief reflection on the software engineering life cycle identifies numerous activities, having varied interactions, that require challenging choices: for example, requirements, design, build, and evaluation activities; hardware, software, and human interactions; and management, human resources, team, process, technology, and tool choices. In this paper, Ko, LaToza, and Burnett focus on the evaluation of software engineering tools with human participants.

The authors reviewed a sample of over 1700 SE papers from 2001 to 2011 from four major sources, and found only 44 experiments involving human use of a tool. The authors make a case for the value of controlled experiments in growing SE knowledge and propose practical guidance to increase their application in SE research.

To overcome obstacles to wider use, this paper provides guidance for conducting controlled experiments in SE tool evaluations. It presents a ten-step generic model for experiment design, along with guidance on specific issues that arise when the tool under study is an SE tool. The paper makes an important contribution to SE empirical research by identifying these issues and alternative choices for addressing them. The paper is of interest to SE empirical researchers; because the model is generic, and because it shows how implementing the steps can raise domain-specific issues, it will also interest researchers in other disciplines.

The paper is an excellent beginning of a compendium of practical guidance. There are, however, many additional implementation issues that pertain to other aspects of the SE life cycle. For example, the formalization of SE process improvement practices in the late 1980s included quantitative process performance models used to improve many of the life cycle processes. These models often included aspects of a controlled experiment. Hopefully, the guidance begun by the authors will encourage more use of controlled experiments in SE, motivate work to capture alternatives from SE practice, and result in a compendium of alternatives for challenges of controlled experiments over the SE life cycle.

Reviewer: J. M. Perry
Review #: CR143459 (1508-0708)
Categories:
Testing And Debugging (D.2.5)
Tools (D.2.1 ...)
Other reviews under "Testing And Debugging":
Software defect removal. Dunn R., McGraw-Hill, Inc., New York, NY, 1984. Type: Book (9789780070183131). Reviewed: Mar 1 1985
On the optimum checkpoint selection problem. Toueg S., Babaoglu O. SIAM Journal on Computing 13(3): 630-649, 1984. Type: Article. Reviewed: Mar 1 1985
Software testing management. Royer T., Prentice-Hall, Inc., Upper Saddle River, NJ, 1993. Type: Book (9780135329870). Reviewed: Mar 1 1994
