Do You Need to Write Test Cases?

Summary:
Writing test cases can be a time-consuming activity, and approaches vary from comprehensive test plans to more casual and exploratory cases. What factors should influence your approach? We look at several of these factors to help you guide your project and team to success.

Testing software is a real challenge because test cases come in so many different shapes and sizes. The truth is, there is no “one size fits all” method for software QA testing. You have to take the time to stop and assess every project that hits your desk. In some instances, you'll want to create a comprehensive set of test cases as part of a detailed plan that documents your every step. Other projects will lend themselves to a more casual, exploratory approach, where agile test cases are helpful. The vast majority will fall somewhere in the middle.

Making the Case for a Test Case
When you are making the decision about whether to create a set of software test scripts, there are some influencing factors that you can’t ignore. The argument for documenting software test cases grows stronger when you have to conform to specific business rules. The project may have legal compliance requirements, or there may even be contractual obligations to provide documentary evidence of exactly what you tested. In that scenario, you obviously need detailed test cases.

Most applications present pros and cons when it comes to test cases. If it's an enterprise application that you expect to be in use for years, then a set of well-thought-out test cases will provide value for money because they'll be reused again and again. They may even form the basis for automated tests down the line. If it's a topical mobile app that needs to hit the market fast and isn't likely to be updated, then it may not be worth spending the time to draft test cases at all.
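To make that reuse concrete, here is a minimal sketch of how one written test case might later be turned into an automated check. The login feature, the login() helper, and the credentials are hypothetical stand-ins rather than anything from a real project; the point is only that the documented steps and expected results translate directly into assertions that can run on every build.

```python
# Minimal sketch (hypothetical feature): a documented test case such as
# "a registered user can log in with valid credentials" re-expressed as
# an automated pytest check. login() is a stand-in for real app code.

def login(username, password):
    # Hypothetical stand-in for the system under test.
    registered_users = {"alice": "s3cret"}
    return registered_users.get(username) == password

def test_registered_user_can_log_in_with_valid_credentials():
    # Step and expected result lifted straight from the written test case.
    assert login("alice", "s3cret") is True

def test_login_is_rejected_with_a_wrong_password():
    assert login("alice", "wrong-password") is False
```

Run with pytest, each case in the documented plan becomes one small, repeatable function that can join a regression suite later.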

You also have to consider how catastrophic a bug might be. Most software that caters to medical professionals simply cannot afford to go wrong. The consequences are too great. Software test scripts with detailed acceptance criteria must be written to prevent any possible human omission when verifying base system requirements. Other applications might not be that mission critical and so can afford to ship with some minor bugs.
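As an illustration of what "detailed acceptance criteria" can look like in practice, here is one possible way to capture a step-by-step test case as structured data. The dosage-calculation feature, the field names, and the checks are all invented for the example, not taken from the article.

```python
# Illustrative only: a detailed, step-by-step test case with explicit
# acceptance criteria captured as structured data. Every value here is
# hypothetical.

test_case = {
    "id": "TC-042",
    "title": "Dosage calculation rejects an out-of-range patient weight",
    "preconditions": ["Clinician is logged in", "Patient record is open"],
    "steps": [
        {"action": "Enter a patient weight of 0 kg",
         "expected": "The weight field is flagged as invalid"},
        {"action": "Attempt to calculate the dosage",
         "expected": "Calculation is blocked and an error message is shown"},
    ],
    "acceptance_criteria": [
        "No dosage is ever produced for a weight outside the valid range",
        "The error message identifies the invalid field",
    ],
}

# A reviewer (or a simple script) can confirm that every step has an
# explicit expected result before the case is accepted into the plan.
assert all("expected" in step for step in test_case["steps"])
```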

Testing in an Agile Environment
What if your development team is adopting an agile approach where features are designed on the fly and changes are continuously introduced? If there's no clear picture at the start of development of the application you're going to end up with, and if requirements are likely to change, then writing a lot of agile test cases is going to be pointless because you'll have to rewrite them completely within a few builds. You might opt for a checklist approach instead and combine exploratory testing with a simple list of compatibility requirements that don't need to be spelled out in full. It may even be possible for testers simply to refer to the original user stories that informed the design, or to talk directly to the customer, to find a basis for their testing.

Know Your Audience
Audience is another vital consideration, and it breaks down in two ways—the test team and the end-user. How skilled are your testers? What background knowledge do they possess, and how complex is the software they will be testing? An experienced test team with relevant background knowledge won’t need as much guidance. A rookie squad, on the other hand, will benefit from a set of test cases.

Software in a specific niche or for a specific profession will probably lend itself well to a tight test plan. You can predict how the end-user will interact with the product and test accordingly. Software that’s aimed at the mass market should be subjected to more exploratory testing. Intuitive and experienced testers have a feel for how the end-user will try to interact with a piece of software, and they can uncover important bugs if they are given the freedom to find them.


Empower Your Test Team
It’s worth considering the potentially demotivating effect that strict, step-by-step test cases can have on test team members. If their daily job becomes a familiar, tedious walkthrough of the same old test steps, then they may become jaded. Failing to employ their skills and encourage their thought processes because of overly strict test cases will often result in bugs slipping through. Talented testers will quickly expose any imperfections in the test cases if you give them the chance to share ideas.

So, empower your test teams by positioning testing as a creative challenge that provides opportunities to try different paths toward the common end goal of quality software. Test teams will always face constraints on time, staff, documentation, and other resources, and writing test cases can be a time-consuming activity. The choice of which detailed test cases to write will have a big impact on the effectiveness of the test team, and I hope the factors described in this article will help make your test case decisions much easier.


User Comments

Sanat Sharma

Hi Vu, I agree with your viewpoint that writing test cases sometimes becomes a tedious task for the test team. I have faced this problem myself in the past. Sometimes the test team writes test cases only because management wants them; in that case, the team writes whatever it can, regardless of whether those test cases are relevant to the software. Knowing the audience while writing the test cases is definitely important, but that can be done only by experienced testers. I have tried hard to encourage my team to write detailed test cases for future reference, but time constraints sometimes make it difficult to spend so much time writing them.

-Sanat Sharma

August 7, 2012 - 12:59am
Manendra Yadav

Hi,

In my view, detailed test cases should not be the accepted approach. They are made for testers who follow exact steps without any product or target-user knowledge. I would always go with use cases, as they give a much better understanding of the product as a whole. They also accommodate real-life scenarios and broaden a tester's thinking and approach to testing. Of course, there are a few compliance tests that must follow the norms exactly, and those must be detailed with the submission (call them a submission checklist).

Use cases work well for agile, iterative, and traditional approaches.

However, test cases are the best approach if:

1. We are not willing to spend time training testers to perform better.
2. We don't want to disclose the big picture to them and want to limit their scope to a minimum.
3. We don't want them to think creatively.
4. There is a high rate of resignation among testers.
5. There is no proper documentation other than test cases.

I hope companies expect much more than that from their QA teams.

August 22, 2012 - 4:43am
Tim Thompson

Detailed test plans do not preclude testers from looking left and right. As in rail stations and airports, if you see something, say something. I write incredibly detailed tests, and often enough all or some of the feature is already implemented. That allows for a first round of testing before the developers complete their work. While going through the detailed tests, new scenarios come up that neither the business analyst, the developer, nor anyone else thought about before. Those cases need to be addressed in the application, and those are the cases that exploratory testing is unlikely to find.
Once a feature or area in the application is mature, write a smaller set of test cases that can be used for regression and for automation. Run through those as often as possible (with automation, on every build). If there are changes or additions to the feature, amend and extend the detailed tests and run through all detailed test cases that cover the impacted areas.
I have tested features that had detailed test cases and features that did not have any, because the person originally in charge was more attuned to exploratory testing without formally reporting bugs (IM or email was used instead) or documenting test results. I had no idea what was originally covered and what had failed before. And since we are agile, and for many agile means "no documentation whatsoever," the requirements were limited to half a sentence. A customer once reported a bug in the feature that was light on any kind of documentation. Before I could address the issue, I first had to squeeze requirements out of the business analyst and then write a detailed test plan. Only then could I test properly and uncover numerous other issues. If we had done systematic testing with detailed test cases, that feature might have been released a bit later, but it would have shipped without the many flaws that annoyed customers.
In the end, the proof is in the pudding, and as long as an approach delivers the necessary results, it works out fine. As mentioned in the article, a quick-and-dirty app that gets tossed out in a year after it has served its purpose does not benefit from detailed test cases; the focus is on making the app do what it needs to do. Anything with a planned life span beyond that needs detailed test plans. And those are quickly pieced together when using a library of common test cases. Those test cases need only minor adjustments and will address the majority of things to consider, which leaves enough time to focus on the specific business logic.

November 1, 2014 - 12:46pm
