Investing in a new product always involves risk. We may have targeted the wrong market segment, envisioned the wrong product or the wrong features, or the market may have changed by the time the product is launched.
The goal of software metrics is to have a rich collection of data and an easy way of mining that data to establish metrics for the measures deemed important to process, team, and product improvement. When you measure something and publish the measurement regularly, improvement happens, because attention is focused on the public results.
In "Practical Test Reporting," David Freeman wrote about how to start a basic test metrics program. In this follow-up article, he shows how to combine that historical information to predict how your future projects may track, rather like creating your own metric "magic eight ball."
Developing a usable and consumable test-metric-reporting system is a challenge for every testing organization. This article describes a system that both small and large organizations can apply to any test effort. By using existing tools, test teams can show current progress and predict future test efforts.
The problem with urging outside-the-box thinking is that many of us do a less-than-stellar job of thinking inside the box. We often fail to notice the options and opportunities that are plainly visible inside the box and that could dramatically improve our chances of success. In this column, Naomi Karten points out how we fall victim to familiar traps, such as doing things the same old (ineffective) way or discounting the ideas of colleagues and teammates. Thinking outside the box can generate innovative and ingenious ideas and outcomes, but the results will flop when teammates ignore the ideas inside the box.
When several different test automation vendors provide similar services, it is sometimes difficult to choose the right test automation software. Clinton Sprauve illustrates how to research various vendors, establish your testing needs, and create a solid plan of attack for the test tool selection process.
Here's a puzzle: If one defect has a severity rating of 3 and a priority rating of 2, and another defect has a severity rating of 2 and a priority rating of 3, which one do you fix first? In this column, Johanna Rothman tells why she thinks severity/priority combinations can be confusing, and she offers her own simpler, three-tiered rating system.
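The puzzle above can be sketched in code. This is a hypothetical illustration (the `Defect` records and sort keys are invented, not Rothman's actual rating system); it shows why combined severity/priority ratings are ambiguous: the fix order flips depending on which rating you sort by first.

```python
from typing import NamedTuple

class Defect(NamedTuple):
    name: str
    severity: int  # 1 = most severe
    priority: int  # 1 = most urgent

# The two defects from the puzzle.
defects = [
    Defect("A", severity=3, priority=2),
    Defect("B", severity=2, priority=3),
]

# Sorting severity-first and priority-first give opposite fix orders.
by_severity = sorted(defects, key=lambda d: (d.severity, d.priority))
by_priority = sorted(defects, key=lambda d: (d.priority, d.severity))

print([d.name for d in by_severity])  # ['B', 'A']
print([d.name for d in by_priority])  # ['A', 'B']
```

Neither ordering is obviously right, which is exactly the confusion the column addresses.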
Measuring activities are vital to the software test process. On this site, there are more than 200 items (articles, tools, templates, etc.) classified under the topic "measurement." But what good are all the bits and pieces of data that you collect? In this week's column, veteran software tester Rick Craig outlines some of the practical uses for metrics.
A challenge in implementing function point analysis (FPA) is making it understandable to developers, cost analysts, and customers alike. Because function points are based on functional user requirements (what the software does), irrespective of the physical implementation (how the software is implemented), users of the method must think in terms of the logical functional requirements. This article discusses difficulties that arise with developers and clarifies a number of terms that often cause confusion.
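As a concrete illustration of counting from logical requirements only, here is a minimal sketch of an unadjusted function point (UFP) tally using the standard IFPUG component weights; the example counts themselves are invented for illustration.

```python
# Standard IFPUG weights per component type and complexity level.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},   # External Inputs
    "EO":  {"low": 4, "avg": 5,  "high": 7},   # External Outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},   # External Inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "avg": 7,  "high": 10},  # External Interface Files
}

def unadjusted_fp(counts):
    """counts: {component: {complexity: number_of_items}} -> UFP total."""
    return sum(
        WEIGHTS[comp][cx] * n
        for comp, per_cx in counts.items()
        for cx, n in per_cx.items()
    )

# Illustrative tally: note it describes what the software does
# (inputs, outputs, inquiries, files), never how it is implemented.
example = {
    "EI":  {"low": 3, "avg": 2},
    "EO":  {"avg": 4},
    "EQ":  {"low": 2},
    "ILF": {"avg": 1},
}

print(unadjusted_fp(example))  # 3*3 + 4*2 + 5*4 + 3*2 + 10*1 = 53
```

Because every counted item is a logical user function, developers accustomed to thinking in screens, tables, or modules must first map those physical artifacts back to the functional requirements they serve.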