Conference Presentations

A Holistic Way to Measure Quality

Have your executives ever asked you to measure product quality? Is there a definitive way to measure application quality in relation to customer satisfaction? Have you observed improving or excellent defect counts and, at the same time, heard from customers about software quality issues? If you are in a quandary about quality metrics, Jennifer Bonine may have what you are looking for. Join her to explore the Problems per User Month (PUM) and the Cost of Quality (CoQ) metrics that take a holistic approach to measuring quality and guiding product and process improvements. Learn what data is required to calculate these metrics, discover what they tell you about the quality trends in your products, and learn how to use that information to make strategic improvement decisions. By understanding both measures, you can present to your executives the information they need to answer their questions about quality.
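The abstract does not spell out the formula, but PUM is commonly computed as problems reported divided by user-months of usage. The sketch below is illustrative only; the exact definition used in the talk may differ.

```python
# Hedged sketch: one common way to compute Problems per User Month (PUM).
# Function and parameter names are illustrative, not from the talk.

def problems_per_user_month(problems_reported, active_users, months):
    """PUM = problems reported / (active users * months of usage)."""
    user_months = active_users * months
    if user_months <= 0:
        raise ValueError("user_months must be positive")
    return problems_reported / user_months

# Example: 120 problems reported by 2,000 users over 3 months
pum = problems_per_user_month(120, 2000, 3)
print(round(pum, 4))  # 0.02 problems per user-month
```

Tracking PUM release over release shows whether quality, as customers experience it, is actually improving even when internal defect counts look good.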

Jennifer Bonine, Up Ur Game Learning Solutions
Understanding and Using Code Metrics

Have you heard any of these from your development staff or said them yourself? "Our software and systems are too fragile." "Technical debt is killing us." "We need more time to refactor." Having quality code is great, but you should understand why it matters and what specifically is important in your situation. Joel Tosi begins by defining and discussing some common code metrics: code complexity, coverage, object distance, afferent/efferent coupling, and cohesion. From there, Joel takes you through an application with poor code metrics and shows how such an application would be difficult to enhance and extend in the future. Joel wraps up with a discussion of which metrics are applicable for specific situations such as legacy applications, prototypes, and startups. You'll come away from this class with a better understanding of code metrics and how to apply them pragmatically.
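To make one of these metrics concrete: cyclomatic complexity can be approximated by counting a function's decision points. The sketch below does this for Python source using the standard library's `ast` module; production tools handle many more cases, so treat this as an illustration, not a reference implementation.

```python
# Hedged sketch: approximating McCabe cyclomatic complexity by counting
# decision points in a Python AST. Real analyzers cover more node types.
import ast

# Node types treated as decision points in this simplified model
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Complexity = 1 + number of decision points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # 3: base of 1 plus two branches
```

A function scoring in the teens or higher is usually a candidate for refactoring; trends in this number across a codebase are one signal of accumulating technical debt.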

Joel Tosi, VersionOne, Inc.
Sleep Better at Night: A Release Confidence Metric

A project manager decides a product is good enough to release, judging that it will be successful in the marketplace or for the business. The manager is basing this judgment on confidence in the product. Confidence is a simple word, yet it is an extraordinarily intangible measure. Confidence drives a huge number of software releases each day. Can our confidence be quantified? Can it be measured? Terry Morrish thinks so and shares a formula for measuring release confidence by combining measures from the current development cycle with those of past releases and with client feedback. The Release Confidence metric can help predict the number of clients who will be affected by post-release problems and how much time and money will be spent on maintenance and rework. By employing this approach, project managers can have a quantitative picture of release risk, providing for a more informed decision process and a better night's sleep.
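Morrish's actual formula is not reproduced in this abstract. Purely as a hypothetical illustration of the idea, the sketch below combines the three kinds of signals he mentions, each normalized to the 0-1 range, into a weighted score; the inputs and weights are invented for the example.

```python
# Purely hypothetical sketch of a release-confidence score. This is NOT
# Terry Morrish's formula; the signals and weights below are illustrative
# assumptions showing how such a metric could be structured.

def release_confidence(test_pass_rate, past_release_success_rate,
                       client_satisfaction, weights=(0.5, 0.3, 0.2)):
    """Weighted average of three signals, each normalized to 0..1."""
    signals = (test_pass_rate, past_release_success_rate, client_satisfaction)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("each signal must be in [0, 1]")
    return sum(w * s for w, s in zip(weights, signals))

# Example: strong current cycle, decent release history, mixed feedback
print(release_confidence(0.95, 0.80, 0.70))  # 0.855
```

The value of making the combination explicit is that the release decision can be debated in terms of inputs and weights rather than gut feel.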

Terry Morrish, Synacor
Questioning Measurement

When we consciously measure something, we try to measure precisely and often assume that our measurements are accurate and useful. However, software development and testing activities are not subject to the same kinds of quantitative measurements and precise predictions we find in physics. Instead, our work is like the social sciences, in which complex interactions between people and systems make measurement difficult and precise prediction impossible. Michael Bolton argues that all is not lost. It is possible and surprisingly straightforward to measure development and testing accurately and usefully, even if not precisely. You can measure how much time is spent on test design and execution compared with time spent on interruptions, track coverage obtained for each product area, and more.

Michael Bolton, DevelopSense
The Net Promoter Score: Measure and Enhance Software Quality

Would you like to know, prior to release, how your customers will perceive product quality? Employing the Net Promoter Score (NPS) technique, Anu Kak shares a strategy he has successfully used to provide this information and, at the same time, help improve actual product quality. Today, many organizations use NPS for products in production to identify customers who are most likely to be either promoters or detractors. This measurement tool provides the information needed to prioritize product fixes and enhancements. Anu shares his experiences applying NPS within software product development to enhance quality before release. He explains the step-by-step implementation of NPS within software engineering. Learn how to read and analyze NPS feedback and implement an NPS-centric process to enhance product quality. Take back a road map to evangelize NPS adoption among the stakeholders in your organization.
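For readers unfamiliar with the mechanics, the standard NPS calculation is straightforward: respondents rate likelihood to recommend on a 0-10 scale, scores of 9-10 count as promoters, 0-6 as detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch:

```python
# Hedged sketch of the standard NPS calculation.
# 9-10 = promoter, 0-6 = detractor, 7-8 = passive (counted in the total only).
# NPS = %promoters - %detractors, ranging from -100 to +100.

def net_promoter_score(scores):
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100.0 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 6, 5, 10, 3, 9]
print(net_promoter_score(responses))  # 20.0
```

Applied before release, as Anu describes, the same arithmetic over beta-user or internal-stakeholder surveys gives an early read on how the shipped product is likely to be perceived.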

Anu Kak, PayPal, Inc.
Managing with Metrics

Many consider metrics a thorn in the side of software test and development efforts. However, when used properly, metrics offer critical insight into underlying issues within projects. In addition, metrics can provide vital real-time information for strategic and tactical project adjustments. Based on his experiences during a major acceptance test project within a lengthy ERP implementation, Shaun Bradshaw demonstrates how an optimal set of test metrics steered the effort toward success. Shaun shares the key progress and test coverage metrics that enabled the team's test and management decisions, and describes ways these same metrics can benefit your organization. Learn how to implement these valuable indicators and how to relay this information up the management chain in easily comprehensible forms. Take home a set of valuable metrics you can implement quickly to give yourself the upper hand in future testing efforts.

Shaun Bradshaw, Zenergy Technologies
The Test Manager's Dashboard: Making It Accurate and Relevant

Gathering and presenting clear information about quality, both product and process, may be the most important part of the test manager's job. Join Lloyd Roden as he challenges your current progress reports, which are probably full of difficult-to-understand numbers, and asks you to replace them with a custom Test Manager's Dashboard containing a series of graphs and charts with clear visual displays. Your dashboard needs to report quality and progress status that is accurate, useful, easily understood, predictive, and relevant. Learn about Lloyd's favorite dashboard graphs: test efficiency, risk progress, quality targets, and specific measures of the test team's well-being. Learn to correlate and interpret the various types of dashboard data to reveal the complete picture of the project and test progress.

Lloyd Roden, Grove Consultants
A Customer-driven Approach to Software Metrics

In their drive to delight customers, organizations initiate testing and quality improvement programs and define metrics to measure their success. In many cases, we see organizations declare success even though their customers do not see much improvement in either products or services. Wenje Lai and J.P. Chen share their approach of identifying quality improvement needs and defining the appropriate metrics that link improvement goals to customer experiences. As a result, the resources allocated to internal quality improvement efforts maximize the value to the business. Their approach is a simple three-step procedure that any test or development organization can easily adopt. It starts with using customer survey data to understand the organization’s customer pain points and ends with identifying the metrics that are linked to the customer experience and actionable by development and test teams inside the organization.

Wenje Lai, Cisco Systems
Eliminating Process Bottlenecks: The Theory of Constraints

Managers often fall into the trap of making sure that everyone is busy. It seems logical that we should keep all of our highly paid “resources” (ouch!) fully utilized. Surprisingly, optimizing for maximum utilization (busyness) actually creates less business value. Not surprisingly, it also can lead to quality problems, lowered job satisfaction, and even burnout. Join Chris Sims for this experiential session about the Theory of Constraints in which we explore better ways of optimizing how teams work. We will launch a fictitious aerospace company, build airplanes (albeit paper ones), and track our financial results. We'll apply the “Five Focusing Steps” from the Theory of Constraints: identify, exploit, subordinate, elevate, and repeat. We'll devise a process to evolve and improve our efficiency, our satisfaction in a job well done, and, ultimately, our profitability.

Chris Sims, Agile Learning Labs
Is It Good Enough To Ship? Predicting Software Quality

Software quality often gets lots of lip service and little else, until its absence triggers a disaster and stuff hits the wall. Don Beckett shares work he did to determine when the software for a satellite destined to orbit the Earth was sufficiently stable to risk being launched. Failure would have cost hundreds of millions of dollars. Don shows how he modeled this problem to answer the "launch/don't launch" question. Beginning with an analysis of the factors that determine acceptable quality and the issues that confront defect collection, Don explains how defect discovery follows a Rayleigh curve distribution that anyone can use for predicting defects remaining in a system. He shares a model of how staffing and scheduling trade-offs will almost certainly impact defect creation rates.
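The Rayleigh model referenced here (familiar from Putnam-style estimation) says defect discovery rises to a peak and then tails off, so the cumulative curve can be used to estimate defects still latent in the system. The sketch below is a minimal illustration under assumed parameters; the talk's exact model and calibration may differ.

```python
# Hedged sketch: Rayleigh-curve defect prediction. K (total_defects) is
# the total expected defect count and t_peak the time of peak discovery;
# both would be fitted from project data. Values here are illustrative.
import math

def rayleigh_defects_found_by(t, total_defects, t_peak):
    """Cumulative defects expected to be discovered by time t."""
    return total_defects * (1 - math.exp(-t**2 / (2 * t_peak**2)))

def rayleigh_defects_remaining(t, total_defects, t_peak):
    """Defects expected to remain latent in the system at time t."""
    return total_defects - rayleigh_defects_found_by(t, total_defects, t_peak)

# Example: 500 total expected defects, discovery peaking at month 4
for month in (2, 4, 8, 12):
    remaining = rayleigh_defects_remaining(month, 500, 4)
    print(f"month {month:2d}: ~{remaining:.0f} defects remaining")
```

When the predicted remaining-defect count drops below an agreed risk threshold, that becomes a quantitative input to the "launch/don't launch" decision rather than a judgment call alone.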

Donald Beckett, Quantitative Software Management
StickyMinds is a TechWell community.
