I’ve spent countless hours over the past year trying to understand exploratory testing. I’ve noticed how scripted and exploratory testing can be seen as opposites, and it’s true that they approach testing from different angles. But they can also support each other.
Applying Scripted Testing Doesn’t Shut Down Our Thinking
I thought for a long time that test cases with detailed test steps ensure that testing happens identically time after time. Since then I’ve realized that the way people interpret information inevitably leads to differences in how each test case, or even each step, is performed. If that isn’t enough, there are often configuration and system resource-related differences between testers’ machines.
Let’s go back to how we interpret information and think about a first step for an imaginary test case.
1. Start the application
On a Windows operating system, many questions arise in my mind when I see this. How do I start the application? Using the Start menu? Using a shortcut? Using File Explorer? With a mouse? With a keyboard? Can there be other applications open? Should the application I’m starting be freshly installed? Should the computer I’m testing with be rebooted just before I start the application? How much memory should be available? How about hard disk space?
Questions like these show the variations that result from the different ways the application can be started, and this is only the first step of a test case. Even more variation is introduced as people perform the other test steps. It’s also good to notice that we are assuming the tester is mentally engaged (e.g., not checking Facebook now and then, being interrupted by a colleague, or skimming the instructions instead of actually trying to understand them).
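To make the scale of that variation concrete, here is a rough sketch (my own illustration, not from the article) that enumerates just a few of the dimensions raised by the questions above. The values are examples, not an exhaustive list.

```python
# Count the hidden variation inside the single step "Start the application".
# Each dimension corresponds to one of the questions in the text.
from itertools import product

launch_method = ["Start menu", "shortcut", "File Explorer"]
input_device = ["mouse", "keyboard"]
other_apps_open = [True, False]
fresh_install = [True, False]
fresh_reboot = [True, False]

variations = list(product(launch_method, input_device,
                          other_apps_open, fresh_install, fresh_reboot))

# 3 * 2 * 2 * 2 * 2 = 48 distinct ways to execute step 1 alone,
# before memory and disk-space differences are even considered.
print(len(variations))  # 48
```

Even this toy enumeration yields dozens of distinct executions of a single step, which is why two testers "following the same script" are rarely running the same test.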
Considering the previous example, we can see how the results of the same test case can differ significantly between runs; when they do, the underlying reason might not be a regression at all.
Another point from the previous example is that a scripted testing approach includes exploration. If it didn’t, it would mean we are following instructions over which we have no control. We wouldn’t have any freedom to think. I don’t think that’s the case, because exploration is always there when humans are testing. In the end it boils down to this practical question (which I learned from James Bach):
What is controlling you when you perform a test?
Is it the instruction or a script? Or is it you and your thinking? Our everyday truth probably lies between the two extremes of not having any control versus having all the control.
Charters in session-based testing, such as “Explore message-sending functionality with different kinds of attachment file formats to discover whether the format affects the message being sent successfully,” are one example of how exploratory testing can be partly scripted. Similarly, in scripted testing, exploration can be increased by taking away the steps of a test case. Test cases could then include only the topic, such as “Sending message to queue 10 with PDF attachment.” This gives testers more freedom to think while testing the functionality.
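A charter like the one above fixes the mission but not the steps. As a minimal sketch of that idea, the loop below drives the mission (which formats to try) while leaving the “how” open; `send_message` is a hypothetical stand-in for the real system, not part of the article.

```python
# Hypothetical stand-in for the application under test. In a real session
# the tester would operate the actual application and judge success by
# observation, not by a return value.
def send_message(attachment_format: str) -> bool:
    return attachment_format in {"pdf", "txt", "png"}

# The charter supplies only the mission: try different attachment formats.
charter_formats = ["pdf", "txt", "png", "exe", "zip"]

for fmt in charter_formats:
    sent = send_message(fmt)
    print(f"{fmt}: {'sent' if sent else 'not sent'}")
```

The point is the shape, not the code: the scripted part (the list of formats) controls coverage, while everything inside the loop is left to the tester’s judgment.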
Forget Confrontation and Focus on Your Own Testing Approach
Knowledge about the relationship between scripted testing and exploratory testing has helped me move away from thinking about which approach I want to use. Instead, I focus more on this question: How exploratory or scripted do I want my testing to be in my current context? This doesn’t mean all testing has to be one or the other. Part of my testing can be highly scripted, highly exploratory, or anything in between these two extremes.
It is more important to think about what we will achieve with certain levels of scripting or exploration. If you define very detailed test cases with their test steps, do you realize how much time you will spend maintaining those test cases? Or how much time you will spend actually interacting with the software? Can you justify why you want to minimize the testers’ opportunity to think for themselves? Can you tell a credible story about your testing if your testing is highly exploratory? How quickly can you give feedback, like information about a bug, to the programmer when he or she makes a change to the code and compiles a new build?
So, ask yourself this practical question again:
What is controlling you when you perform a test?
Then think about the answer and how it helps you learn about the risks of your product.