
Conference Presentations

Deception and Estimation: How We Fool Ourselves

Cognitive scientists tell us that we are hardwired for deception. It seems we are overly optimistic, and, in fact, we wouldn't have survived without this trait. With this built-in bias as a starting point, it's almost impossible for us to estimate accurately. That doesn't mean all is lost. We must simply accept that our estimates are best guesses and continually re-evaluate as we go, which is, of course, the agile approach to managing change. Linda Rising has been part of many plan-driven development projects where sincere, honest people with integrity wanted to make the best estimates possible and used many "scientific" approaches to make it happen, all for naught.

Linda Rising, Independent Consultant
The Uncertainty Surrounding the Cone of Uncertainty

Barry Boehm first defined the "Cone of Uncertainty" of software estimation more than twenty-five years ago. The fundamental aspect of the cone is quite intuitive: project uncertainty decreases as you discover more during the project. Todd Little takes an in-depth look into some of the dynamics of software estimation and questions some of the more common interpretations of the meaning of the "cone." Todd presents surprising data from more than one hundred "for market" software projects developed by a market-leading software company. He compares this data with other published industry data. Discover the patterns of software estimation accuracy Todd found, some of which go against common industry beliefs. Understanding the bounds of uncertainty and the patterns from past projects helps us plan for and manage the uncertainties we are sure to encounter.

Todd Little, Landmark Graphics Corporation
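
As a rough illustration of the kind of analysis behind cone-of-uncertainty studies, here is a minimal Python sketch, using hypothetical numbers rather than Todd's data, that computes the spread of actual-to-estimate ratios at successive project stages; if the cone holds, the span should narrow from stage to stage:

    # Hypothetical (stage, estimated weeks, actual weeks) records --
    # illustrative numbers only, not data from the talk.
    import statistics

    records = [
        ("initial concept", 10, 32), ("initial concept", 20,  9),
        ("initial concept",  8, 19), ("requirements",    12, 20),
        ("requirements",    18, 12), ("requirements",    15, 21),
        ("detailed design", 22, 26), ("detailed design", 14, 12),
        ("detailed design", 20, 23),
    ]

    for stage in ("initial concept", "requirements", "detailed design"):
        ratios = [actual / est for s, est, actual in records if s == stage]
        print(f"{stage:16s} actual/estimate ranges "
              f"{min(ratios):.2f}x to {max(ratios):.2f}x "
              f"(median {statistics.median(ratios):.2f}x)")
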
Using Source Code Metrics to Guide Testing

Source code metrics are frequently used to evaluate software quality and identify risky code that requires focused testing. Paul Anderson surveys common source code metrics including Cyclomatic Complexity, Halstead Complexity, and additional metrics aimed at improving security. Using a NASA project as well as data from several recent studies, Paul explores the question of how effective these metrics are at identifying the portions of the software that are the most error prone. He presents new metrics targeted at identifying integration problems. While most metrics to date have focused on calculating properties of individual procedures, newer metrics look at relationships between procedures or components to provide added guidance. Learn about newer metrics that employ data mining techniques implemented with open source machine-learning packages.

  • Common code metrics and what they mean
Paul Anderson, GrammaTech, Inc.
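
For readers unfamiliar with the first metric above, here is a minimal sketch, not Paul's tooling, that approximates McCabe cyclomatic complexity by counting decision points in a Python function's syntax tree; production tools handle many more constructs:

    import ast

    # Node types that add a decision point (an independent path).
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        """Complexity = decision points + 1, for a single-function module."""
        tree = ast.parse(source)
        decisions = sum(isinstance(node, DECISION_NODES)
                        for node in ast.walk(tree))
        return decisions + 1

    example = """
    def classify(x):
        if x < 0:
            return "negative"
        for i in range(x):
            if i % 2 == 0 and i > 2:
                print(i)
        return "done"
    """
    # if + for + if + and = 4 decision points, so complexity is 5.
    print(cyclomatic_complexity(example))
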
Measure Quality on the Way In - Not on the Way Out

If you have been a test manager for longer than a week, you have probably experienced pressure from management to offshore some test activities to save money. However, most test professionals are unaware of the financial details surrounding offshoring and are only anecdotally aware of factors that should be considered before outsourcing. Jim Olsen shares his experiences and details about the total cost structures of offshoring test activities. He describes how to evaluate the maturity of your own test process and compute the true costs and potential savings of offshore testing. Learn what is needed to coordinate test practices at home with common offshore practices, how to measure and report progress, and when to escalate problems. Jim shares the practices Dell uses for staffing and retention, including assessing cultural nuances and understanding foreign educational systems.

Jim Olsen, Dell
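
The cost comparison described above can be sketched as a simple model. All figures and overhead rates below are hypothetical placeholders, not Dell's numbers; the point is that rate savings must be weighed against coordination overhead and one-time knowledge-transfer costs:

    def total_cost(hours, rate, overhead_fraction=0.0, one_time=0.0):
        """Loaded cost = labor + proportional overhead + fixed startup costs."""
        labor = hours * rate
        return labor * (1 + overhead_fraction) + one_time

    onshore = total_cost(hours=2000, rate=75)
    # Illustrative offshore figures: lower rate, but extra hours for
    # handoffs, 25% coordination overhead, and a one-time
    # knowledge-transfer cost.
    offshore = total_cost(hours=2400, rate=30, overhead_fraction=0.25,
                          one_time=40_000)
    print(f"onshore ${onshore:,.0f} vs offshore ${offshore:,.0f}; "
          f"savings {100 * (1 - offshore / onshore):.0f}%")
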
Measuring the End Game of a Software Project - Part Deux

The schedule shows only a few weeks before product delivery. How do you know whether you are ready to ship? Test managers have dealt with this question for years, often without supporting data. Mike Ennis has identified six key metrics that will significantly reduce the guesswork. These metrics are percentage of tests complete, percentage of tests passed, number of open defects, defect arrival rate, code churn, and code coverage. These six metrics, taken together, provide a clear picture of your product's status. Working with the project team, the test manager determines acceptable ranges for these metrics. Displaying them on a spider chart and observing how they change from build to build enables a more accurate assessment of the product's readiness. Learn how you can use this process to quantify your project's "end game".

  • Decide what and how to measure
  • Build commitment from others on your project
Mike Ennis, Savant Technology
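
As an illustration of the spider-chart technique, this sketch plots the six metrics for two hypothetical builds with matplotlib, normalizing each metric so that 1.0 means the team's agreed acceptable range is met:

    import numpy as np
    import matplotlib.pyplot as plt

    metrics = ["tests complete", "tests passed", "open defects",
               "defect arrival", "code churn", "code coverage"]
    # Normalized scores for two builds (1.0 = acceptable range met);
    # the values are invented for illustration.
    build_7 = [0.70, 0.60, 0.40, 0.50, 0.55, 0.65]
    build_9 = [0.90, 0.85, 0.75, 0.80, 0.85, 0.80]

    # One spoke per metric; close each polygon by repeating its first point.
    angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
    ax = plt.subplot(polar=True)
    for label, scores in (("build 7", build_7), ("build 9", build_9)):
        ax.plot(angles + angles[:1], scores + scores[:1], label=label)
    ax.set_xticks(angles)
    ax.set_xticklabels(metrics)
    ax.legend()
    plt.show()

Watching the polygon expand toward the outer ring from build to build gives the at-a-glance readiness picture the abstract describes.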
Measuring the "Good" in "Good Enough Testing"

The theory of "good enough" software requires determining the trade-off among delivery date (schedule), absence of defects (quality), and feature richness (functionality) to achieve a product that meets both the customer's needs and the organization's expectations. This may not be the best approach for pacemakers and commercial avionics software, but it is appropriate for many commercial products. But can we quantify these factors? Gregory Pope does. Using the COQUALMO model, Halstead metrics, and defect seeding to predict defect insertion and removal rates; the Musa/Everette model to predict reliability; and MATLAB for verifying functional equivalence testing, Greg evaluates both quality and functionality against schedule.

  • Review how to measure test coverage
  • Discover the use of models to predict quality
  • Learn what questions you should ask customers to determine "good enough"
Gregory Pope, Lawrence Livermore National Laboratory
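
One member of the Musa family of reliability models, the basic execution-time model, is simple enough to sketch: failure intensity decays exponentially with execution time. The parameters below are hypothetical, chosen only for illustration:

    import math

    def failure_intensity(tau, lambda0=10.0, nu0=100.0):
        """Failures per CPU-hour after tau CPU-hours of testing.

        Musa basic model: lambda(tau) = lambda0 * exp(-lambda0 * tau / nu0),
        where lambda0 is the initial failure intensity and nu0 is the
        total number of failures expected over the product's life.
        """
        return lambda0 * math.exp(-(lambda0 / nu0) * tau)

    for tau in (0, 10, 20, 40):
        print(f"after {tau:3d} CPU-hours: "
              f"{failure_intensity(tau):5.2f} failures/CPU-hour")
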
Peanuts and Crackerjacks: What Baseball Taught Me about Metrics

Because people can easily relate to a familiar paradigm, analogies are an excellent way to communicate complex data. Rob Sabourin uses baseball as an analogy to set up a series of status reports to manage test projects, share results with stakeholders, and measure test effectiveness. For test status, different audiences (test engineers, test leads and managers, development managers, customers, and senior management) need different information, different levels of detail, and different ways of looking at data. So, what "stats" would you put on the back of a Testing Bubble Gum Card?
Robert Sabourin, AmiBug.com Inc
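
For fun, here is what a few back-of-card "stats" might look like, computed in Python from hypothetical test-run totals:

    # Invented season totals for one test effort.
    planned, executed, passed = 250, 212, 180
    defects_verified = 31

    print(f"AVG  (pass rate)              {passed / executed:.3f}")
    print(f"AB   (executed vs. planned)   {executed / planned:.3f}")
    print(f"RBI  (defects verified fixed) {defects_verified}")
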
Achieving Meaningful Metrics from Your Test Automation Tools

In addition to the efficiency improvements you expect from automated testing tools, you can, and should, expect them to provide valuable metrics to help manage your testing effort. By exploiting the programmability of automation tools, you can support the measurement and reporting aspects of your department. Learn how Jack Frank employs these tools with minimal effort to create test execution status reports, coverage metrics, and other key management reports. Learn what measurement data your automation tool needs to log for later reporting. See examples of the operational reports his automation tools generate, including run/re-run/not run, pass/fail, percent complete, and percent of overall system tested. Take with you examples of senior management reports, including Jack's favorite, the "My Boss's Boss Test Status Report" (names will be changed to hide the guilty). Regardless of which automation tool you use, these measurement and reporting ideas apply.

Jack Frank, Mosaic Inc
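
Here is a minimal sketch of the logging-and-rollup idea: assuming the automation tool writes one record per test attempt, a short script can produce the run/re-run/not-run, pass/fail, and percent-executed figures mentioned above (test IDs and outcomes are invented):

    from collections import Counter

    # Hypothetical log rows the tool might write: (test_id, attempt, outcome).
    log = [("T1", 1, "pass"), ("T2", 1, "fail"),
           ("T2", 2, "pass"), ("T3", 1, "fail")]
    planned = {"T1", "T2", "T3", "T4", "T5"}

    latest = {}                          # keep only each test's latest attempt
    for test_id, attempt, outcome in log:
        if attempt >= latest.get(test_id, (0, ""))[0]:
            latest[test_id] = (attempt, outcome)

    outcomes = Counter(outcome for _, outcome in latest.values())
    outcomes["not run"] = len(planned - latest.keys())
    reruns = sum(1 for t in planned if sum(r[0] == t for r in log) > 1)
    print(dict(outcomes), f"| re-run: {reruns} "
          f"| {100 * len(latest) / len(planned):.0f}% executed")
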
Guerilla Software Metrics: Leaving the Developers Alone

This presentation describes an approach to initiating and conducting a metrics program that takes advantage of existing measurement/tracking infrastructure without adding significant extra tasks and reporting responsibilities. Scott Duncan identifies three areas where measurement data may already exist. Learn how to work with management and staff in these areas to make use of the data being collected.

Scott Duncan, SoftQual Consulting
A Metrics Dashboard for IT Project Reporting

Thomas Olenick describes the activities performed to design, develop, deploy, and maintain a Project Management Metrics Dashboard across the IT organization of a major Chicago-based securities firm. Learn how this metrics dashboard was used to facilitate project status tracking for IT management and to provide a basis for improving the efficiency of software development and estimation activities.

Thomas Olenick, Olenick & Associates
