On Domain Testing

Matthew Heusser

I'd like to focus this blog in 2014 on "hard" tester skills, to get past vague generalizations and slogans. (If the ideal is the T-shaped tester, with broad, shallow general skills and one deep specialty in testing, well, how exactly do you get that deep line? What do testers do, and how can they do it well?)

To that end I made two posts: Learning to Learn (About Testing) brought out the concept of skill, while Practice Makes Progress offered a very specific skills problem: given a specific user interface, if you only had the opportunity to run five combinations of input and submit button, which five tests would you run, and why?

I got a few great answers: Justin Rohrman took a stab and found a functionality bug, as did Chris Trantor and Srinivas Kadiyala; Audrey Tang, the famed programmer at Socialtext, took the challenge and left comments on my Google Plus profile. I hope at least a few more took the challenge but didn't leave a comment. If you missed it, you can always check out the blog and have a go. It's free.

At the end of that blog post I asked the readers to name the skill involved. I had a very specific answer in mind: domain analysis, or perhaps domain testing, which consists of taking a particular testing problem, mapping out the set of possible tests (which is likely infinite), selecting a few of the most powerful tests to run in the time given, and, perhaps, figuring out what the results of those tests tell us about the software.

There are more skills to testing than domain analysis: understanding the customer's risk profile, figuring out when to be done, communicating risk to the customer, and helping to figure out which defects are worth fixing, for example.

Still, I think domain testing is huge and, most of the time, given short shrift. Compare the number of conference talks on how to automate this or that with the number of talks on how to figure out which tests to run and how to analyze what the results mean; the gap is more than a little shocking.

The core assumption seems to be that figuring out what tests to run is intuitive, or obvious, or simple.

Two weeks later, those same folks who thought it was simple will ask questions like "Why didn't QA find that bug?"

In hindsight, the defects look simple to find, sure. The actual work of figuring out what tests to run under time pressure is a little bit harder.

Enter domain analysis

Domain analysis, or domain testing, is the study of the software to see what tests make sense. That is very different from quick attacks or historical analysis, which focus on common ways the platform or software has failed in the past. To do a quick attack, you can push input that probably doesn't make sense and confirm that the software handles the nonsense gracefully. For domain testing, you need to know the business rules, to figure out whether the non-error result is valid.
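
To make the contrast concrete, here is a small sketch in Python. The shipping_cost function and its business rules (a flat $5 up to one kilogram, $2 per kilogram after that, nothing over 30 kilograms) are invented for this example; the point is the difference between the two tests, not the rules themselves.

    # Hypothetical function under test; the name and the business
    # rules are invented for illustration.
    def shipping_cost(weight_kg):
        if weight_kg <= 0 or weight_kg > 30:
            raise ValueError("weight out of range")
        extra_kg = max(0, weight_kg - 1)
        return 5 + 2 * extra_kg

    # Quick attack: push input that probably doesn't make sense and
    # confirm the software fails gracefully. No business knowledge needed.
    def test_quick_attack_rejects_garbage():
        try:
            shipping_cost(-999)
        except ValueError:
            pass  # a graceful rejection is the outcome we wanted
        else:
            raise AssertionError("garbage input was accepted")

    # Domain test: a valid input returns a non-error result, and only
    # the business rule tells us whether that result is correct.
    def test_domain_five_kilograms():
        assert shipping_cost(5) == 13  # $5 base + 4 extra kg at $2 each

Notice that the quick attack would work against almost any numeric field, while the domain test is worthless without knowing the pricing rule.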

Most people learn domain analysis through a combination of trial and error and "the literature," which, I have to admit, is more than a little shallow. Equivalence classes and boundary tests are two domain tools that are typically taught with simple examples that fit the concept perfectly. After the training, back at the office, we find that our problems don't fit neatly into the little buckets taught in class.
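
For readers who haven't met those two tools, here is roughly how they shrink an infinite input space down to a handful of tests. The age field and its 18-to-65 rule are hypothetical, picked only because they illustrate the technique cleanly.

    # Hypothetical rule: an "age" field accepts whole numbers from 18 to 65.
    def accepts_age(age):
        """Stand-in for the system under test."""
        return 18 <= age <= 65

    # Equivalence classes partition the input space into groups expected
    # to behave the same; one representative per class is enough.
    equivalence_classes = {
        "below range (reject)": -5,
        "in range (accept)": 40,
        "above range (reject)": 99,
    }

    # Boundary values probe the edges of each class, where off-by-one
    # mistakes (< versus <=) tend to hide.
    boundary_values = [17, 18, 65, 66]

    for label, value in equivalence_classes.items():
        print(label, value, accepts_age(value))
    for value in boundary_values:
        print("boundary", value, accepts_age(value))

The trouble, as noted above, is that real fields rarely have one clean numeric range; they have interacting rules, formats, and histories that don't fit the tidy classroom buckets.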

I don't blame the trainers; they are simply trying to fit the concept and examples into a class format.

The good news is that domain testing just got a fantastic new resource - Kaner, Hoffman, and Padmanaban’s Domain Testing Workbook, which weighs in at 460 full-size pages and several pounds. Not a book to be “just” read, the workbook contains exercises to work through.

This isn't the classic "hard" stuff of testing; it isn't code, or jQuery, or Selenium, or some performance testing tool. But it is still a real skill, and the workbook provides examples to work through over hours (and days, and weeks), along with reasoning about which tests might make sense under which conditions.

If you're not sure about the workbook, don't fret; I have permission to excerpt part of it and share it on the web.

Don't worry. There's plenty more to come.

User Comments

Jeremy Carey-Dressler

I really appreciate that you're going to start talking about this work in more depth. I have complained before that CDT practitioners tend to just answer with "it depends" (not an invalid response, just not a particularly useful one). I myself have struggled to explain how I test in depth, and I know it is difficult, particularly while keeping context in mind. I look forward to seeing more details from you on this topic in the coming months! - JCD

February 25, 2014 - 8:32pm