Test Design

Articles

The Difference between Structured and Unstructured Exploratory Testing

There are a lot of misunderstandings about exploratory testing. In some organizations exploratory testing is done unprofessionally and in an unstructured way: there's no preparation, no test strategy, and no test design or coverage techniques. This leads to blind spots in the testing, as well as regression issues. Here's how one company made its exploratory testing more structured.

Shifting Your Testing: When to Switch Gears

Shifting your testing either left or right can meet different needs and improve different aspects. How do you know whether to make a change? Let your test cycles be your guide. Just like when driving a car with a manual transmission, if the engine starts to whine or you’re afraid you’re about to stall out, switching gears may be just what you need.

Maximilian Bauer
When a Number Is Not a Number: Benefits of Random Test Generators

We like to hope that we will consider all possible situations when devising our tests, but it's all too easy to overlook the unusual cases. That's where random test generators help. We might feel comfortable after writing a few dozen test cases; these tools generate hundreds. With more inputs getting tossed at the wall, there is a greater likelihood that something interesting sticks.

Steve Poling
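
The article is about the idea rather than any particular tool, but a minimal Python sketch shows the principle. Everything here is hypothetical and not taken from the article: parse_amount stands in for a function under test, and random_amount_string produces inputs from a fixed seed so any failures are reproducible.

```python
import random

# Illustrative sketch of a tiny random test generator (hypothetical example).
def parse_amount(text):
    """Hypothetical function under test: converts a 'dollars.cents' string to cents."""
    dollars, _, cents = text.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def random_amount_string(rng):
    # Mix ordinary values with shapes a hand-written suite might forget,
    # such as a missing dollars part ("" produces inputs like ".99").
    dollars = rng.choice(["0", "1", "7", "999", str(10**9), ""])
    cents = rng.choice(["", "00", "05", "99"])
    return f"{dollars}.{cents}" if cents else dollars

def run_random_tests(seed=42, count=500):
    rng = random.Random(seed)              # fixed seed keeps failures reproducible
    failures = set()
    for _ in range(count):
        text = random_amount_string(rng)
        try:
            if parse_amount(text) < 0:     # simple invariant: amounts are never negative
                failures.add(text)
        except Exception as exc:           # crashes are the "interesting" finds
            failures.add(f"{text!r} -> {exc!r}")
    return sorted(failures)

if __name__ == "__main__":
    for failure in run_random_tests():
        print("unexpected behavior:", failure)
```

Because the generator mixes ordinary values with odd shapes, it tends to surface crashes that a hand-picked suite of a few dozen cases would likely miss.
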
Leveraging Machine Learning to Predict Test Coverage

Test coverage is an important metric within test management, and as technology evolves, we're able to leverage new trends to predict coverage. Weka, an open source suite of machine learning software, can take your test management beyond spreadsheets to the latest AI technologies, letting you predict your test coverage earlier with greater accuracy.

Bhavani Ramasubbu
Top 10 StickyMinds Articles of 2018

With the rise of technology like AI and practices like DevOps, teams everywhere are looking for ways to speed up testing without sacrificing quality. The articles in 2018 reflect that, with the most popular topics being shifting testing left, optimizing tests for continuous integration, and the future of software testing. If you're looking for cutting-edge testing techniques, check out this roundup.

Beth Romanik
Responsibly Reporting Performance Test Results: Trends, Noise, and Uncertainty

In order for performance test results to have value, you should report them in context. There are two main considerations: How do these compare to previous results? And how can we provide early reports on performance while emphasizing that these are preliminary results that may change significantly as we progress? Here are some ideas for responsible reporting.

Michael Stahl
7 Simple Tips for Better Performance Engineering

Rigorous practices to reinforce performance and resilience, and testing continuously for these aspects, are great ways to catch a problem before it starts. And as with many aspects of testing, the quality of the performance practice is much more important than the quantity of tests being executed. Here are seven simple tips to drive an efficient performance and resilience engineering practice.

Franck Jabbari
The Shift-Left Approach to Software Testing

The earlier you find out about problems in your code, the less impact they have and the less it costs to remediate them. Therefore, it's helpful to move testing activities earlier in the software development lifecycle, shifting them left in the process timeline. This article explores the shift-left methodology and how you can approach shifting left in your organization.

Arthur Hicken
Using Decision Tables for Clear, Well-Designed Testing

Decision tables are used to test the interactions between combinations of conditions. They provide a clear method to verify testing of all pertinent combinations to ensure that all possible conditions, relationships, and constraints are handled by the software under test. If you need to make sure your test cases cover all outcomes in a scenario, read on to learn how to use decision tables.

Josh Giller
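
As a concrete illustration (a sketch, not code from the article), the snippet below encodes a hypothetical discount rule as a decision table in Python and checks both that every combination of conditions appears in the table and that the implementation honors each rule. The names DECISION_TABLE and discount are invented for the example.

```python
from itertools import product

# Hypothetical decision table: two Boolean conditions mapped to an expected action.
# (is_member, order_over_100) -> expected discount percentage
DECISION_TABLE = {
    (True,  True):  15,
    (True,  False): 10,
    (False, True):  5,
    (False, False): 0,
}

def discount(is_member, order_over_100):
    """Hypothetical system under test."""
    if is_member:
        return 15 if order_over_100 else 10
    return 5 if order_over_100 else 0

def test_all_rules():
    # Every combination of the two conditions must appear in the table...
    assert set(DECISION_TABLE) == set(product([True, False], repeat=2))
    # ...and the implementation must match each rule.
    for (is_member, over_100), expected in DECISION_TABLE.items():
        assert discount(is_member, over_100) == expected, (is_member, over_100)

if __name__ == "__main__":
    test_all_rules()
    print("all decision table rules covered and passing")
```

Keeping the rules as data makes a missing combination easy to spot before a gap in coverage becomes a gap in the product.
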
A Better Way of Reporting Performance Test Results

Reporting the results of functional tests is relatively simple because these tests have a clear pass or fail outcome. Reporting the results of performance testing is much more nuanced, and there are many ways of displaying these values—but Michael Stahl felt none of these ways was particularly effective. He proposes a reporting method that makes performance test results easy to read at a glance.

Michael Stahl
