measurement

Articles

Use the Rejected Defect Ratio to Improve Bug Reporting

There are many metrics to measure the effectiveness of a testing team. One is the rejected defect ratio, or the number of rejected bug reports divided by the total submitted bug reports. You may think you want zero rejected bugs, but there are several reasons that’s not the case. Let's look at types of rejected bugs, see how they contribute to the rejected defect ratio, and explore the right ratio for your team.
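The ratio described above is straightforward to compute. As a minimal sketch (the function name and the 5% example figures are illustrative, not from the article):

```python
def rejected_defect_ratio(rejected: int, submitted: int) -> float:
    """Rejected defect ratio = rejected bug reports / total submitted reports."""
    if submitted == 0:
        raise ValueError("no bug reports submitted")
    return rejected / submitted

# e.g., 6 rejected reports out of 120 submitted -> 0.05 (5%)
print(rejected_defect_ratio(6, 120))  # 0.05
```

Tracking this ratio over time, rather than as a single snapshot, makes it easier to spot trends in bug-report quality.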

Michael Stahl
7 Simple Tips for Better Performance Engineering

Rigorous practices to reinforce performance and resilience, and testing continuously for these aspects, are great ways to catch a problem before it starts. And as with many aspects of testing, the quality of the performance practice is much more important than the quantity of tests being executed. Here are seven simple tips to drive an efficient performance and resilience engineering practice.

Franck Jabbari
A Better Way of Reporting Performance Test Results

Reporting the results of functional tests is relatively simple because these tests have a clear pass or fail outcome. Reporting the results of performance testing is much more nuanced, and there are many ways of displaying these values—but Michael Stahl felt none of these ways was particularly effective. He proposes a reporting method that makes performance test results easy to read at a glance.

Michael Stahl
Measuring the Performance of Your Operations Center

Many organizations have problems with consistently tracking and measuring system outages. Issues aren't logged, admins make changes to systems without going through change management, and a high number of issues turn out to be recurring problems. Implementing a performance measurement process calculates system reliability and can help you improve consistency.

Nels Hoenig

Better Software Magazine Articles

Building Highly Productive Teams Using a Commitment-to-Progress Ratio: Work Committed vs. Done

This article explains methods to build a team that will embrace "required work" and deliver robust software in a predictable fashion. It proposes a measure that helps calculate the throughput of an agile team by comparing work committed to work actually done.
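The article's exact formula isn't reproduced here, but a commitment-to-progress measure of this kind can be sketched as the ratio of work completed to work committed per iteration (the function name and story-point figures below are illustrative assumptions):

```python
def commitment_to_progress(committed_points: float, done_points: float) -> float:
    """Throughput measure: work actually done divided by work committed."""
    if committed_points <= 0:
        raise ValueError("committed work must be positive")
    return done_points / committed_points

# e.g., a team commits to 40 story points and completes 30 -> 0.75
print(commitment_to_progress(40, 30))  # 0.75
```

A ratio consistently near 1.0 suggests the team's commitments are predictable; values well below 1.0 point to over-commitment or unplanned work crowding out the plan.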

Aleksander Brancewicz
Measure the Measurable: Improving Software Quality Through Telemetry

Observing customers in a usability lab can be invaluable for improving product design. But, once your software leaves the lab, do you know what your customers are actually doing and whether or not your software meets their expectations? Learn how engineers on the Microsoft Office team apply a variety of software telemetry techniques to understand real-world usage, how the results drive product improvements, and how you can apply similar techniques.

Jamie Campbell
Issues about Metrics about Bugs

Managers often use metrics to help make decisions about the state of the product or the quality of the work done by the test group. Yet, measurements derived from bug counts can be highly misleading because a "bug" isn't a tangible, countable thing; it's a label for some aspect of some relationship between some person and some product, and it's influenced by when and how we count ... and who is doing the counting.

Michael Bolton
Learning from Experience: Software Testers Need More than Book Learning

People often point to requirements documents and process manuals as ways to guide a new tester. Research into knowledge transfer, as described in The Social Life of Information, suggests that there is much more to the process of learning. Michael Bolton describes his own experiences on a new project, noting how the documentation helped ... and didn't.

Michael Bolton

Interviews

How to Use Your Data in an Agile Environment: An Interview with Larry Maccherone
Podcast

In this interview, Larry Maccherone, the director of analytics and research at AgileCraft, explains how you can better use data within your software team. He digs into metrics and measurements within an agile environment and how to determine what data is valuable.

Josiah Renaudin
Test Measurements, Metrics, and Automation: An Interview with Mike Sowers
Video

In this interview, TechWell's Mike Sowers goes in depth on the measurements, metrics, and management side of testing. He discusses both the good and the bad on his test automation journey in large and small enterprises and communicates the real challenges.

Jennifer Bonine
Motivate and Inspire Software Quality Goals: An Interview with Annette Ash
Video

In this interview, Annette Ash, a coach and trainer with SolutionsIQ, talks about the dirty term in the room: quality metrics. She reveals whether tracking metrics is beneficial, what it accomplishes, and what should be tracked with regard to software quality.

Jennifer Bonine
Better Test Automation, Metrics, and Measurement: An Interview with Mike Sowers
Podcast

In this interview, TechWell CIO and consultant Mike Sowers details key metrics that test managers employ to determine software quality, how to know a piece of software's readiness, and guidelines for developing a successful test measurement program.

Josiah Renaudin

Conference Presentations

STARWEST 2018 What You Can't Measure, You Can't Improve: Measurements for a Continuous Delivery Organization
Slideshow

Ashwin Desai has faced the daunting challenge of using measurements and metrics to assess and improve product quality through process change. Join him as he shares what he learned on the journey to move the sports technology firm Hudl from a reactive approach to quality to quantitative, data-driven, proactive means of improving product quality. Just as Hudl itself gives coaches and teams the ability to analyze and improve their performance based on data, the company wanted the teams building Hudl to use the same approach to improve quality. Ashwin shares how they selected measurements, the work agile teams completed to get buy-in for the measurements, and how the data was normalized to provide an understanding of the quality of each initiative and the variance between them.

Ashwin Desai
Agile DevOps Setting and Measuring Individual Performance in Agile Teams
Slideshow

When software development teams work in waterfall environments, traditional performance management programs can help encourage personal development and innovation. However, Tina Rusnak says that when organizations move to agile, measuring performance takes on a new form that often causes...

Tina Rusnak
Video: The Mismeasure of Software: The Last Metrics Talk You'll Ever Need to Hear
Video

Lee Copeland claims that most organizations have some kind of metrics program—and almost all are ineffective. After explaining the concept of measurement, Lee describes two key reasons for these almost universal...

Lee Copeland, Software Quality Engineering
How Metrics Programs Can Destroy Your Soul
Slideshow

Testers are often evaluated by metrics that don’t really quantify the value of their work. Metrics such as tests planned, tests executed, coverage achieved, and defects reported all...

Scott Barber, SmartBear

StickyMinds is a TechWell community.

Through conferences, training, consulting, and online resources, TechWell helps you develop and deliver great software every day.