Agile software development is well established in industry (and in academia). Agile processes are varied: they span from coding practices to project and product management, and they have been widely studied in the literature since the 1990s. However, few results measure how agile processes perform with respect to traditional processes (such as the waterfall model).
The objective of this paper is to evaluate the results of an empirical, systematic analysis carried out in a software development company to compare the performance of an agile process (a combination of the Unified Process, Extreme Programming, and Scrum) with the traditional waterfall model. The company under examination is a midsize telecommunications software vendor. The aim is to measure the performance of two different teams, one using the agile process and the other the traditional process described above. The teams were working on two different projects, both part of the same product family and both with the same number of requested new features. The evaluation frameworks used are goal, question, metric (GQM) and Capability Maturity Model Integration (CMMI). Using these, the authors show that, in the context under analysis, agile performed better on one set of objectives and similarly on another, but never worse.
The paper first introduces the two frameworks used, GQM and CMMI, and then describes how they were tailored for this research. The steps and specific goals of the measurement are introduced, and the results are combined using specific formulas.
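To make the GQM structure concrete, the sketch below models a minimal goal → question → metric hierarchy in Python. The goal, question, and metric names are invented for illustration and are not the paper's actual instrument; GQM itself prescribes only this refinement from measurement goals through questions down to metrics.

```python
# Minimal sketch of a GQM (goal, question, metric) hierarchy.
# All names and values below are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float

@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list[Question] = field(default_factory=list)

goal = Goal(
    purpose="Evaluate the testing effectiveness of the development process",
    questions=[
        Question(
            text="How many defects does testing reveal?",
            metrics=[Metric("test defect density", 0.8)],
        )
    ],
)

# Walk the tree: each goal is refined into questions, each question
# is answered by one or more metrics.
for q in goal.questions:
    for m in q.metrics:
        print(f"{goal.purpose} / {q.text} -> {m.name} = {m.value}")
```

In the paper's setting, the metric values collected for each team would then be combined by the authors' formulas into per-goal scores; the tree above only shows the shape of the refinement.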
However, the results are not unexpected. As an example, the “test defect density” specific goal reveals that agile performs much better than the waterfall model because of its test-driven development (TDD) technique: tests are executed more regularly, whereas in the waterfall model testing happens at a later stage. For a similar reason, the specific goal “test execution verification and validation (V&V) effectiveness” shows that agile performs well in the evaluation of testing.
The paper concludes with a section on threats to the validity of the results: not all of agile’s best practices are taken into account in the evaluation. For example, the teams’ differing skill levels are not considered.
The paper is worth reading for a couple of reasons: it shows a useful application of the GQM and CMMI techniques to a real-world case, and it attempts to provide a method that industry can reuse to evaluate the effectiveness of an adopted agile process.