One key benefit of metrics is that they can be measured using a standard process; we can explain the numbers, and leadership can understand what they mean. The downside is that a metric is only a measurement, so issues can easily hide until they become problems, and great work can also go unrepresented. Sporting events are a great example: The final score tells you who won, but not the details of the game. We need to look deeper.
There are many metrics to measure the effectiveness of a testing team. One is the rejected defect ratio, or the number of rejected bug reports divided by the total submitted bug reports. You may think you want zero rejected bugs, but there are several reasons that’s not the case. Let's look at types of rejected bugs, see how they contribute to the rejected defect ratio, and explore the right ratio for your team.
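As a rough illustration, here is a minimal sketch of computing the rejected defect ratio from a list of bug reports; the `BugReport` class and its status values are assumptions for the example, not any particular tracker's schema:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str
    status: str  # hypothetical statuses: "fixed", "open", "rejected"

def rejected_defect_ratio(reports: list[BugReport]) -> float:
    """Rejected defect ratio = rejected bug reports / total submitted reports."""
    if not reports:
        return 0.0
    rejected = sum(1 for r in reports if r.status == "rejected")
    return rejected / len(reports)

# Example: 3 rejections out of 20 submitted reports gives a ratio of 0.15.
reports = [BugReport(f"BUG-{i}", "rejected" if i < 3 else "fixed") for i in range(20)]
print(f"Rejected defect ratio: {rejected_defect_ratio(reports):.2f}")
```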
A tester's job is to provide information about elements of the system that might make a user unhappy. But Jon Hagar finds that many testers implement only limited testing tours, even when they have otherwise robust test programs. He writes that when looking for bugs, testers need to look beyond the software to the system and the user scenarios, too.
In this first part of a two-part series, Mario Moreira writes that a reasonable application lifecycle management (ALM) product will have a common user interface for using the ALM functionality. It will also include a meta-model and process engine to parse and share information across the various functions within the ALM framework. These technical needs must be accompanied by a strong business case for delivering higher customer value and new approaches for seamless integration.
The Internet of Things (IoT) enables amazing software-powered devices designed to make our business and personal lives easier. Lev Lesokhin discusses four fundamental practices you'll need when developing sophisticated software for the IoT.
Common practice suggests that lower severity defects shouldn't hold up a product release. Jennifer Gosden believes that, just as broken windows in a home can invite crime, letting lower severity defects linger results in poor overall product quality.
What happens when defects go unnoticed until it is too late? Mayank provides an insightful view of the true cost of insufficient test coverage during the software development lifecycle. He also suggests some techniques to ensure that defects are identified and mitigated early.
Claire takes us on a nontraditional journey, showing how the design and implementation of testing approaches can be rapidly organized into a hierarchy of connected elements. Mind maps, used primarily for visual and conceptual thinking, may be just the answer for quality assurance professionals.
In this interview, Greg Paskal, a technology innovator in quality assurance, discusses an open source tool from the Elastic Stack that creates a “data lake” that can be mined to analyze the data coming from test automation far more effectively than simple pass/fail reporting.
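As a minimal sketch of the underlying idea (the field names here are illustrative assumptions, not the tool's actual schema): capture each automation result as a rich, queryable record rather than a bare pass/fail flag, then let a search engine index those records.

```python
import json
import time

# Hypothetical record for one automated test run; the point is to keep
# far more context than a pass/fail flag.
result = {
    "suite": "checkout-regression",
    "test": "test_apply_coupon",
    "status": "failed",
    "duration_ms": 8421,
    "browser": "chrome-118",
    "environment": "staging",
    "failure_step": "assert discount applied",
    "timestamp": time.time(),
}

# Append each result as one JSON line; tools such as Elasticsearch can
# then index these records for trend analysis across runs.
with open("test-results.ndjson", "a", encoding="utf-8") as sink:
    sink.write(json.dumps(result) + "\n")
```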
In this interview, Jennifer Scandariato, the director of test engineering and leader of the Women in Technology initiative at iCIMS, explains how you can alter the way you develop your software to avoid creating defects—through culture, continuous integration, and automation.
In this interview, Geoff Meyer, a test architect in the Dell EMC infrastructure solutions group, explains how test teams can succeed by emulating sports teams in how they collect and interpret data. Geoff explains how analytics can better prepare you for the changing nature of software.
David Oddis talks about the importance of having an effective defect analysis process and offers insight into managing testing across various SDLCs and the challenges this can present for teams. He also shares his opinions on today's hot topics.
Detection theory says: When trying to detect a certain event, a person can correctly report that it happened, miss it, report a false alarm, or correctly report that nothing happened. Under conditions of uncertainty, the decision to report an event is strongly influenced by how likely it...
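As a worked illustration with purely hypothetical counts (not data from the article), classical detection theory summarizes those four outcomes with a sensitivity measure d' and a response criterion:

```python
from statistics import NormalDist

# Four outcomes when watching for an event:
# hit (reported, happened), miss (not reported, happened),
# false alarm (reported, didn't happen), correct rejection (neither).
hits, misses = 45, 5
false_alarms, correct_rejections = 10, 40

hit_rate = hits / (hits + misses)                              # 0.90
fa_rate = false_alarms / (false_alarms + correct_rejections)   # 0.20

z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)            # sensitivity to the event
criterion = -(z(hit_rate) + z(fa_rate)) / 2   # willingness to report it

print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```

A negative criterion here would indicate a liberal observer, one inclined to report events even at the cost of more false alarms.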
As software increasingly becomes the face of the business, defects can lead to embarrassment, financial loss, and even business failure. Nevertheless, in response to today's demand for speed and “continuous everything,” the software delivery conveyor belt keeps moving faster and faster...
Can defect root cause analysis be made agile? Can we transform a multi-hour task from the classical world of software engineering into one that takes minutes and yields greater insights? Learn how Orthogonal Defect Classification (ODC) extracts semantics from defects and turns them into insights on the development process using analytics. After a quick overview of ODC, Ram Chillarege presents a case study to illustrate the method using real-world data from an agile project. His team used ODC triggers at the end of every sprint to measure test effectiveness and compare it against earlier sprints. This ODC process takes just minutes and brings its insights into the realm of agile development practices. Put a powerful analytical technique in your agile toolbox to increase the velocity of your agile project and find new ways to reduce defects while measuring the quality of testing.
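As a minimal sketch of the counting step behind such an analysis, assuming defects have already been classified by ODC trigger (the trigger names and sprint data below are illustrative, not the case study's data):

```python
from collections import Counter

# Each classified defect records the sprint it was found in and its
# ODC trigger (the condition that exposed it).
defects = [
    ("sprint-1", "coverage"), ("sprint-1", "coverage"), ("sprint-1", "variation"),
    ("sprint-2", "variation"), ("sprint-2", "interaction"), ("sprint-2", "workload"),
]

def trigger_distribution(defects, sprint):
    """Per-sprint trigger mix; a shift between sprints hints at how
    test effectiveness is evolving."""
    counts = Counter(trigger for s, trigger in defects if s == sprint)
    total = sum(counts.values())
    return {trigger: round(n / total, 2) for trigger, n in counts.items()}

for sprint in ("sprint-1", "sprint-2"):
    print(sprint, trigger_distribution(defects, sprint))
```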
Continuous integration (CI) has become a buzzword, with most engineering organizations claiming they've adopted the practice. However, the sad truth is that unreliable tests, long feedback loops, and poor configuration management block their efforts and minimize CI's potential benefits. Jesse Dowdle shares how AtTask radically redesigned its engineering pipeline and, through massive CI scaling, drove three days of testing down to just minutes. Learn the pros and cons of different CI systems and how to integrate them with the cloud. Watch a live demo of AtTask's internal test and CI systems, which they've designed to make "every commit a potential release candidate," meaning that every commit is an iteration. Arm yourself with the talking points to sell massive CI to executives.
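The core scaling move in such a pipeline is fanning one huge suite out across many parallel workers. Here is a minimal sketch of deterministic test sharding, with hypothetical test names (real CI systems typically ship their own splitting mechanisms):

```python
import hashlib

def shard_for(test_name: str, num_shards: int) -> int:
    """Deterministically assign a test to a shard so that every CI
    worker computes the same partition without coordination."""
    digest = hashlib.sha1(test_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

tests = [f"test_case_{i}" for i in range(10_000)]  # hypothetical suite
num_shards = 200          # e.g., 200 parallel CI workers
my_shard = 7              # each worker reads its index from the CI system

my_tests = [t for t in tests if shard_for(t, num_shards) == my_shard]
print(f"Worker {my_shard} runs {len(my_tests)} of {len(tests)} tests")
```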