Review: Cost, Quality, and User Satisfaction of Software Products: An Empirical Analysis, by M. S. Krishnan

Today I'm reviewing a paper from 1993 about software quality. The paper tries to relate quality to cost and to user satisfaction. Overall, it's a thumbs-up from me.

M. S. Krishnan. 1993. Cost, quality and user satisfaction of software products: An empirical analysis. In Proceedings of the 1993 Conference of the Centre for Advanced Studies on Collaborative Research: Software Engineering - Volume 1 (CASCON '93). IBM Press, 400-411.

The study offers modest support for several findings:

  1. Make better software; it decreases the total cost.
  2. Bigger projects benefit more from quality.
  3. Invest more than average on up-front design to increase quality and decrease total cost.
  4. Returns on quality diminish: past some point, fewer defects no longer decrease total cost.
  5. Users value capability and usability over other factors such as reliability and documentation.
  6. Secondary factors might differ based on software type. For example, for network software, reliability was an important factor in user satisfaction.

Methods

The researcher selected twenty-seven projects developed in the three years before the study. Three models of quality were fitted to the actual data. The variables measured were:

  • Lines of code
  • Total cost of the project
  • Number of defects reported by users

Two variables were derived:

  • High/low complexity (a binary variable: whether the project was larger or smaller than average)
  • High/low up-front spending (a binary variable: whether the pre-development stages cost more than average)
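
The paper doesn't spell out its model specification here, and it publishes no raw numbers, so the following is only a sketch of the kind of fit described: an ordinary least squares regression of total cost on defects, size, and the two derived binary variables, run on invented stand-in data. Every number below is made up.

```python
import numpy as np

# Stand-in data: the paper publishes no raw numbers, so all 27 rows
# below are fabricated purely to show the shape of the model.
rng = np.random.default_rng(0)
n = 27
kloc = rng.uniform(10, 500, n)                # project size, thousands of lines
defects = rng.poisson(kloc * 0.4)             # user-reported defects
high_complexity = (kloc > kloc.mean()).astype(float)
high_upfront = rng.integers(0, 2, n).astype(float)
cost = 5 * kloc + 2 * defects + rng.normal(0, 50, n)  # total cost, $K

# Ordinary least squares: cost ~ defects + size + the two binary indicators.
X = np.column_stack([np.ones(n), defects, kloc, high_complexity, high_upfront])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
for name, b in zip(["intercept", "defects", "kloc",
                    "high_complexity", "high_upfront"], beta):
    print(f"{name}: {b:.2f}")
```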

Some caveats:

  • The total cost of software is measured as dollars spent by everyone on the project, across all waterfall phases. It would be nice to see how spending broke down by role over the course of development.
  • The complexity of the project is measured in lines of code. Not a great measure, and we're not told whether the same languages were used across projects.
  • In fact, no actual numbers are given at all; the only figures are hand-drawn, unlabelled plots, shown purely for illustration.
  • Twenty-seven is quite a small sample; I'm doubtful it's enough to rise above the noise. And without the data, we can't tell whether any outliers skew the aggregates.

Some further questions:

  • Do the numbers change now that the web makes delivering and upgrading software easy?
  • What does the cost-vs-quality curve look like when plotted next to customer growth? That is, for a startup deferring debugging costs to gain development speed, revenue growth might outpace support and debugging costs. Should you focus on features over bug fixes?
  • Does the elbow in the curve suggest an optimum point? (See the sketch after this list.)
  • The number of user-reported bugs is quite a lagging indicator. What might be a leading indicator of software quality?
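
On the elbow question: if total cost really is up-front development cost (rising steeply with quality) plus post-release support cost (falling with quality), then yes, the elbow sits where their sum is minimized. A toy illustration with entirely made-up curves:

```python
import numpy as np

# A toy decomposition, not the paper's model: suppose development cost
# rises steeply as quality q approaches perfection, while post-release
# support cost falls linearly. The elbow is then the minimum of the sum.
# All functional forms and constants are invented for illustration.
q = np.linspace(0.01, 0.99, 99)       # quality: fraction of defects removed
dev_cost = 100 * q / (1 - q)          # the last few defects cost the most to remove
support_cost = 400 * (1 - q)          # each escaped defect adds support cost
total = dev_cost + support_cost

q_opt = q[np.argmin(total)]           # quality level minimizing total cost
print(f"toy optimum near q = {q_opt:.2f}")
```

In this toy model the optimum shifts with relative prices: cheaper support pushes it toward lower quality, which connects back to the startup question above.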