Billions of dollars are wasted each year on IT software projects that are either delivered late or never used. In light of recent large-scale failures, the methods, tools, and practices used for software development have become the subject of significant study and analysis. One qualitative method of analysis is software assessment, which explores the methodologies businesses use for software development. Another method of analysis is software benchmarking, which collects quantitative data on topics such as schedules and costs.
Renowned author Capers Jones draws on his extensive experience in economic analysis to present Software Assessments, Benchmarks, and Best Practices, a useful combination of qualitative and quantitative approaches to software development analysis. When assessment data and benchmarking data are analyzed jointly, it is possible to show how specific tools and practices affect the effectiveness of an organization's development efforts. The result is a clearer, bigger picture: a roadmap that allows an organization to identify areas for improvement in its development efforts.
Review By: Mark L. Johns, 01/10/2002

This is a well-written book that combines benchmarking and software assessments to help identify “opportunities” for process improvement. In addition to discussing techniques, the author includes individual benchmark and best-practice chapters for six different types of software.
The book begins with a general overview of software assessments, data collection, and the need for client confidentiality. This is followed by a short history of the origins of software assessment and a cursory comparison of the SPR (Software Productivity Research) and SEI (Software Engineering Institute) models of assessment. Jones also discusses benchmarks and baselines, looking at problems inherent in benchmarking that stem from the size metrics used (LOC, function points, etc.). He also suggests thirty-six factors that, if recorded by all software assessment entities, would allow direct comparison of projects and aggregation across all projects. These thirty-six factors are divided into six categories: Classification, Project-Specific, Technology, Sociological, Ergonomic, and finally International factors. The author finishes the first half of the book with discussions of software practices and how to distinguish between best and worst practices based upon empirical data. The final chapter before the software-specific chapters deals with software process improvement and builds upon concepts discussed earlier in the book. Jones also outlines a six-step process for taking the data from the assessment and benchmarking and actually implementing process improvement.
The final six chapters are dedicated to individual software project types: MIS, outsource, systems and embedded, commercial, military, and end-user software development.
For each of these areas, the author defines the domain, special considerations for the type, demographics, benchmarks, success and failure factors, and finally the best practices related to the specific software project type.
The appendix contains the SPR Questionnaire for Assessments, Benchmarks, and Baselines.
This document is the one used in face-to-face meetings with personnel at the company being assessed. In addition, an extensive glossary is provided.
I found this book to be a very interesting and enjoyable read. The author was the founder of Software Productivity Research and Artemis Management Systems, which perform, among other things, software assessments and benchmarking research. Despite this, the author lays out techniques and generic principles that are not SPR-specific and that will help process improvement specialists start to formulate their own assessment process.
This book is a “should” read for anyone interested in continuous software process improvement. While it is not a guidebook per se, it contains a wealth of valuable data and introduces principles that could save a time- or resource-strapped organization a lot of work. The success and failure factors, combined with the best practices and the plethora of other information available by project type, will save readers considerable time and aggravation in assembling it on their own.
The combination of benchmarking and assessment outlined in the text allows the reader to see the big picture more easily, enabling a more concentrated effort toward software process improvement.