Scenario: I work for a product-based company where the schedule is very tight, and we usually do not get much time between releases to create proper test cases, update and manage the test suite, or revisit the existing ones. So even though we have a fairly large test-case repository, most of the testing ends up being ad hoc, and so are the defects we find, which renders those cases unfruitful. Following a traditional testing process consumes a lot of test execution time, which affects quality. Our process follows neither proper agile methodology nor waterfall. We do have timelines and estimates from Dev, QA, etc., but 99% of the time they slip due to last-minute change requests (decided by upper management). Overall, since we are not following any proper methodology, the traditional approach of writing ground-level test cases with steps, updating them in a test management tool, executing them, and then following defect management takes a toll. Is there an effective, swift way to do this better that fits our working style?
There are many ways to improve software testing, and if you want to improve yours, it's in your own hands. You can adopt practices such as agile methodology, DevOps, and automated regression testing. Below are a few tips you may follow.
Improve software testing through planning.
You should check out the work of James Bach and Michael Bolton. We don't need test cases to do great testing; I've never even used them. I believe that to be a great tester you need to seek to understand the whole system, from code to hardware to your users to business and client needs.

When you're testing a work item for a software release, think about what that item has really changed within your system model, and test the most obviously affected areas first. As you test, consider what other areas of the system might be affected, and branch out to try to catch unexpected failures or regressions in other functionality. The application itself, as well as your model, will give you clues. Are there different user types? Is there data that might not be accounted for? Is there a similar process or feature with shared logic that you can check out? Keep track with whatever notation format you like; I've tried lots of different methods, and I prefer nested lists.

I believe that if you test all work items in this manner, there's not much need for lengthy regression testing via test cases. Using this approach, I find I can make a reasonable decision that the data I've gathered in testing is sufficient to significantly reduce uncertainty about the product.

However, that's not the end of it. I like to send my assessment to a project manager or run it by a developer to see whether my approach was sound. Just as a developer can make a mistake, we can make a mistake and miss something that a dev or PM could have spotted right away if they had read a short summary of our test coverage. Hope that helps.
I think the best thing you can do to get past this hassle is to fold Agile practices into your existing testing model. If you can accommodate testing from the very first step of development, things will get easier to handle.
If that still feels like a problem, I would recommend working on development at full pace while outsourcing testing to a reliable quality assurance service provider. If you have the budget and enough business to justify it, I believe partnering with a QA company could save you a lot of money, resources, and time.
StickyMinds is a TechWell community.