
Conference Presentations

Ten Habits of Highly Effective Measurement Programs

Accurately measuring product quality and process capabilities is challenging in any software organization. Most organizations do not attempt any real measurement at all, and those that do often fail miserably; in fact, the industry success rate for software measurement programs is terribly low, with some putting it at less than 25 percent. Ian Brown presents ten keys to measurement success, including staff dedicated to measurement work rather than assigned to it part-time, a strong commitment from senior management, measurements directly related to articulated business goals, automated measurement collection tools, and measurement integrated into the process rather than tacked on. Based on the successes of Booz Allen Hamilton, learn how to start small and slowly grow your measurement program to build success on top of success.

Ian Brown, Booz Allen Hamilton
Managing by the Numbers

Metrics can play a vital role in software development and testing. We use metrics to track progress, assess situations, predict events, and more. However, measuring often creates "people issues" which, when ignored, become obstacles to success or may even kill a metrics program. People often feel threatened by the metrics gathered, and those performing and communicating the measurements may introduce distortions. When being measured, people can react with creative, sophisticated, and unexpected behaviors. Thus, our well-intentioned efforts may have a counterproductive effect on individuals and the organization as a whole. John Fodeh addresses some of the typical people issues and shows how cognitive science and social psychology can play important roles in the proper use of metrics.

John Fodeh, HP - Mercury
Measuring the Effectiveness of Testing Using DDP

Does your testing provide value to your organization? Are you asked questions like "How good is the testing anyway?" and "Is our testing any better this year?" How can you demonstrate the quality of the testing you perform, both to show when things are getting better and to show the effect of excessive deadline pressure? Defect Detection Percentage (DDP) is a simple measure that organizations have found very useful in answering these questions. It is easy to start: all you need is a record of defects found during testing and defects found afterwards (which you probably already have available). Join Dorothy Graham as she shows you what DDP is, how to calculate it, and how to use it to communicate the effectiveness of your testing. Dorothy addresses the most common stumbling blocks and answers the questions most frequently asked about this very useful metric.
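As a minimal sketch of the arithmetic behind the metric: DDP is the share of all known defects that testing caught before release. The figures below are hypothetical, used only to show the calculation.

```python
def ddp(found_in_testing: int, found_after_release: int) -> float:
    """Defect Detection Percentage: the share of all known defects
    that testing caught before the product shipped."""
    total_known = found_in_testing + found_after_release
    if total_known == 0:
        raise ValueError("no defects recorded yet; DDP is undefined")
    return 100.0 * found_in_testing / total_known

# Hypothetical release: 180 defects found in test, 20 reported afterwards.
print(f"DDP = {ddp(180, 20):.1f}%")  # DDP = 90.0%
```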

Dorothy Graham, Grove Consultants
Measuring the End Game of a Software Project - Part Deux

The schedule shows only a few weeks before product delivery. How do you know whether you are ready to ship? Test managers have dealt with this question for years, often without supporting data. Mike Ennis has identified six key metrics that significantly reduce the guesswork: percentage of tests complete, percentage of tests passed, number of open defects, defect arrival rate, code churn, and code coverage. Taken together, these six metrics provide a clear picture of your product's status. Working with the project team, the test manager determines acceptable ranges for each metric. Displaying them on a spider chart (sketched after the list below) and observing how they change from build to build enables a more accurate assessment of the product's readiness. Learn how you can use this process to quantify your project's "end game."

  • Decide what and how to measure
  • Build commitment from others on your project
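A rough illustration of the spider chart described above, using matplotlib. The metric values, the normalization (1.0 meaning "meets the team's agreed threshold"), and the inversion of "lower is better" metrics are all illustrative assumptions, not the speaker's actual ranges.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical build snapshot, each metric scaled so that 1.0 means
# "meets the release threshold the project team agreed on".
# "Lower is better" metrics (open defects, arrival rate, churn) are
# inverted so that larger always means closer to ready.
metrics = {
    "Tests complete": 0.95,
    "Tests passed": 0.88,
    "Open defects": 0.70,
    "Defect arrival rate": 0.80,
    "Code churn": 0.75,
    "Code coverage": 0.82,
}

labels = list(metrics)
values = list(metrics.values())
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]   # close the polygon
angles += angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0.0, 1.0)
ax.set_title("End-game readiness, build N (illustrative)")
plt.show()
```

Plotting each build's polygon on the same axes makes shrinking or growing gaps against the thresholds visible at a glance.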
Mike Ennis, Savant Technology
Measuring the "Good" in "Good Enough Testing"

The theory of "good enough" software requires determining the trade-off among delivery date (schedule), absence of defects (quality), and feature richness (functionality) to achieve a product that meets both the customer's needs and the organization's expectations. This may not be the best approach for pacemakers and commercial avionics software, but it is appropriate for many commercial products. But can we quantify these factors? Gregory Pope does. Using the COQUALMO II model, Halstead metrics, and defect seeding (a simple seeding estimator is sketched after the list below) to predict defect insertion and removal rates; the Musa/Everette model to predict reliability; and MATLAB for verifying functional equivalence testing, Greg evaluates both quality and functionality against schedule.

  • Review how to measure test coverage
  • Discover the use of models to predict quality
  • Learn what questions you should ask customers to determine "good enough"
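Of the models mentioned above, defect seeding is the easiest to sketch. The classic estimator below is a common textbook formulation, offered as an assumption about the general technique rather than the speaker's exact model.

```python
def estimate_indigenous_defects(seeded: int, seeded_found: int,
                                indigenous_found: int) -> float:
    """Classic defect-seeding estimate: if testing recovers
    seeded_found of the seeded defects, assume it finds real
    (indigenous) defects at the same rate and scale up."""
    if seeded_found == 0:
        raise ValueError("no seeded defects recovered; cannot estimate")
    detection_rate = seeded_found / seeded
    return indigenous_found / detection_rate

# Hypothetical: 20 defects seeded, 15 recovered, 45 real defects found
# -> estimated 60 real defects in total, so roughly 15 still latent.
print(estimate_indigenous_defects(seeded=20, seeded_found=15,
                                  indigenous_found=45))
```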
Gregory Pope, Lawrence Livermore National Laboratory
Software Security Testing: It's Not Just for Functions Anymore

What makes security testing different from classical software testing? Part of the answer lies in expertise, experience, and attitude. Security testing comes in two flavors: standard functional security testing (making sure that the security apparatus works as advertised) and risk-based testing (malicious testing that simulates attacks). Risk-based security testing should be driven by architectural risk analysis, abuse and misuse cases, and attack patterns. Unfortunately, first-generation "application security" testing misses the mark on all fronts. That's because canned black-box probes can, at best, show you that things are broken, but say very little about the total security posture. Join Gary McGraw to learn what software security testing should look like, what kinds of knowledge testers must have to carry it out, and what the results may say about security.

Gary McGraw, Cigital Inc
Software Metrics to Improve Release Management

In large organizations with multiple groups or multiple projects, developing consistent and useful metrics for release management is highly challenging. However, when targeted at specific release goals, metrics can help monitor the development schedule and provide both managers and developers with the data needed to improve quality. With nearly eighty products that must be released on the same date, MathWorks has developed a release metrics program with a consistent method to categorize and prioritize bugs based on severity and frequency. Learn how they track progress toward bug-fix targets for each category of bugs and monitor them consistently across their product line throughout the release cycle. See examples of metrics reports designed for management and daily use by teams, including historical trending analysis of overall and customer-reported bug counts.
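As a rough illustration of the kind of categorization the abstract describes, the sketch below buckets bugs by severity and frequency and checks fix progress against per-category targets. The category names, thresholds, targets, and sample data are all invented for the example.

```python
from collections import Counter

def categorize(severity: int, frequency: int) -> str:
    """Hypothetical scheme: severity 1 (worst) .. 4;
    frequency = customer reports per month."""
    if severity == 1 or (severity == 2 and frequency >= 10):
        return "must-fix"
    if severity <= 3 and frequency >= 3:
        return "should-fix"
    return "defer"

# Hypothetical per-category bug-fix targets for the release.
targets = {"must-fix": 1.00, "should-fix": 0.80, "defer": 0.25}

bugs = [  # (severity, frequency, fixed?) -- invented sample data
    (1, 2, True), (2, 12, True), (2, 1, False),
    (3, 5, False), (4, 1, False), (3, 8, True),
]

totals, fixed = Counter(), Counter()
for sev, freq, is_fixed in bugs:
    cat = categorize(sev, freq)
    totals[cat] += 1
    fixed[cat] += is_fixed

for cat, target in targets.items():
    rate = fixed[cat] / totals[cat] if totals[cat] else 1.0
    status = "on track" if rate >= target else "behind"
    print(f"{cat}: {rate:.0%} fixed (target {target:.0%}) -> {status}")
```

Running the same report build over build gives the historical trend the abstract mentions.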

Nirmala Ramarathnam, The MathWorks Inc
A Metrics Dashboard to Drive Goal Achievement

Some measurement programs with high aims fall short, languish, and eventually fail completely because few people regularly use the resulting metrics. Based on Cisco Systems' five years of experience in establishing an annual quality program employing a metrics dashboard, Wenje Lai describes their successes and challenges and demonstrates the dashboard in use today. He shows how the metrics dashboard offers an easy-to-access mechanism for individuals and organizations within Cisco Systems to understand the gap between their current standing and their goals. A mechanism within the dashboard allows users to drill down into the data behind each measurement to identify ownership of issues, root causes, and possible solutions (a minimal sketch follows the list below). Learn what programs they implemented to ensure that people use the metrics dashboard to help them in their day-to-day operations.

  • How to build an effective metrics dashboard to help achieve quality goals
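A minimal sketch of the two ideas in the abstract, gap-to-goal plus drill-down to owners. The goal, defect records, and team names are invented for illustration.

```python
from collections import Counter

# Invented dashboard data: a quality goal and the defect records
# behind the current number, each tagged with an owning team.
GOAL_MAX_OPEN_CUSTOMER_DEFECTS = 40
open_defects = (
    ["routing"] * 18 + ["security"] * 15 + ["platform"] * 12
)  # one owner tag per open customer-found defect

current = len(open_defects)
gap = current - GOAL_MAX_OPEN_CUSTOMER_DEFECTS
print(f"open customer defects: {current} "
      f"(goal <= {GOAL_MAX_OPEN_CUSTOMER_DEFECTS}, gap {gap:+d})")

# Drill-down: which teams own the defects behind the headline number?
for owner, count in Counter(open_defects).most_common():
    print(f"  {owner}: {count}")
```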
Wenje Lai, Cisco Systems Inc
The Complete Developer

With the global availability of talented developers, there is a growing trend toward the commoditization of software development. It is no longer enough to be a developer with knowledge of specific languages or algorithms to maintain your competitive edge in the marketplace. To compete, you must become a complete developer: someone who can, for example, write code in the morning and, in the afternoon, update the requirements wiki with the results of the latest customer review meeting with your marketing team. This talk explores what it takes to be a genuinely valuable complete developer in today's world of agile development, outsourcing, globalization, and an increasingly complex business environment.

Luke Hohmann, Enthiosys, Inc.
Test Metrics in a CMMI® Level 5 Organization

As a CMMI® Level 5 company, Motorola Global Software Group is heavily involved in software verification and validation activities. Shalini Aiyaroo, senior software engineer at Motorola, shows how tracking specific testing metrics can serve as key indicators of the health of testing and how these metrics can be used to improve it. To improve your testing practices, find out how to track and measure phase screening effectiveness, fault density, and test execution productivity. Shalini describes the group's use of Software Reliability Engineering (SRE) and fault prediction models to measure test effectiveness and take corrective action. By performing orthogonal defect classification (ODC) and escaped defect analysis, the group has found ways to improve test coverage.

CMMI® is a registered trademark of Carnegie Mellon University.
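Two of the indicators named above have widely used textbook definitions, sketched here under the assumption that they are defined the usual way; the numbers are hypothetical.

```python
def phase_screening_effectiveness(found_in_phase: int,
                                  escaped_to_later_phases: int) -> float:
    """Share of the defects present in a phase that the phase's own
    reviews and tests screened out before they could escape."""
    total = found_in_phase + escaped_to_later_phases
    if total == 0:
        raise ValueError("no defects attributed to this phase")
    return 100.0 * found_in_phase / total

def fault_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code."""
    return defects_found / kloc

# Hypothetical design phase: 40 defects caught in review, 10 escaped.
print(f"{phase_screening_effectiveness(40, 10):.0f}%")   # 80%
print(f"{fault_density(50, 120.0):.2f} defects/KLOC")    # 0.42
```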

Shalini Aiyaroo, Motorola Malaysia Sdn. Bhd
