Software development firms are adopting agile to deliver working software quickly in short sprints. The testing process must keep pace with development, and automated testing is an excellent way to save time and get accurate results.
When you are racing against the clock but still cannot afford to cut corners in testing, automation is a good solution. Here's how you can use automated testing to improve your testing outcomes.
Using Automated Testing to Overcome Delivery Challenges
As a QA consultant, I faced a challenge with one of my projects where software was being built quickly and we had an acute shortage of time for testing.
We were working with the agile methodology and had deployments every two weeks. After every deployment, testers faced time constraints, as they had to test the newly deployed features as well as perform regression testing.
There was a long backlog of test cases awaiting execution to catch regression bugs. It was stressful for the entire testing team because the release date was approaching and very little time was left. The major challenge was prioritizing test cases to find critical bugs. Our testers had manual testing experience, but running repetitive test cases by hand consumed a great deal of time.
The development team was focused on improving software quality and productivity, but they were also struggling to fix all the bugs before pushing to production.
I had previously worked with teams that were well versed in automated testing, so I knew how we could tackle our situation with automation. But one of my biggest challenges was that our current testing team had no prior experience with test automation.
Nevertheless, we decided to automate regression testing in our project. I explained the challenge to the project manager, who arranged help from an automated testing expert on another project.
We organized knowledge-sharing sessions with the expert to help us implement automated testing using Selenium with test scripts written in Java. We also used the AutoIt scripting language to create automation scripts for testing the desktop version of our software.
I selected the most experienced member of our testing team to lead the automation initiative, while others focused on their existing workload. The overall goal was to extend our capabilities to improve testing speed and quality of test results.
We automated functional testing and UI testing to check various aspects of our software, such as the login and signup processes. We also automated test cases to validate the user interface elements of our software.
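Our real scripts drove the browser through Selenium WebDriver, but the shape of a login check can be shown without the browser dependency. This is an illustrative sketch only: the user store, credentials, and validation rules below are hypothetical stand-ins for the application backend.

```java
import java.util.Map;

// Illustrative sketch only: our real tests drove a browser via Selenium
// WebDriver. This stand-in applies the same kind of login rules to an
// in-memory user store so the example stays self-contained.
public class LoginCheck {
    // Hypothetical user store standing in for the application backend.
    private static final Map<String, String> USERS =
            Map.of("alice", "s3cret", "bob", "hunter2");

    // Returns true when the credentials would pass the login form.
    public static boolean login(String user, String password) {
        if (user == null || user.isBlank() || password == null || password.isBlank()) {
            return false; // the UI rejects empty fields before submitting
        }
        return password.equals(USERS.get(user));
    }

    public static void main(String[] args) {
        // The same assertions the automated tests made against the live page.
        System.out.println("valid login:   " + login("alice", "s3cret"));
        System.out.println("wrong password: " + login("alice", "wrong"));
        System.out.println("empty username: " + login("", "s3cret"));
    }
}
```

In the Selenium version, `login` is replaced by driving the real form: locate the username and password fields, type the values, click submit, and assert on the resulting page state.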
How Automated Testing Benefited Our Project
We experienced a decrease in overall testing time. Automated testing also increased our team's productivity and efficiency, as it reduced the effort spent on repetitive tasks.
Because we received timely feedback throughout automated testing, we significantly reduced the number of defects in the software. Automated testing helped our project delivery because we could execute the same test case a hundred times without a single glitch, avoiding the mistakes that typically happen during manual testing.
Now that we could handle regression bugs more efficiently, we saved time for exploratory testing. Earlier, testers had to cover new functionality and regression bugs under stringent timelines; now we had more time to manually test new scenarios and follow up on issues.
The best part was that there was no team burnout due to heavy workload, which helped us deliver a reliable and effective product.
Another benefit was that we could run scripts in parallel across different environments, which helped us fix all the bugs that impacted the functionality and performance of our software.
We faced no challenges in setting up or installing the third-party tool. The entire framework was easy to use and saved us ramp-up and maintenance time.
Automated testing also let us execute test cases in batches through a web interface, with the scheduler triggering batch files at specific times. Our automated tests could inspect the product's memory contents, data tables, and file contents to check that the program behaved as expected. We also sent status updates directly to our project stakeholders.
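Our real setup triggered batch files through the OS scheduler; the same pattern can be sketched in Java with a `ScheduledExecutorService`. The class and method names below are illustrative, and the batch body is a placeholder for invoking real test cases.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of batch-style scheduled execution. In our project the scheduler
// launched batch files at fixed times; here the delay is shortened so the
// example runs instantly.
public class BatchRunner {
    // Runs one batch of test "cases" and returns the number executed.
    static int runBatch(int size) {
        int executed = 0;
        for (int i = 0; i < size; i++) {
            executed++; // placeholder for invoking one automated test case
        }
        return executed;
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Schedule the batch a moment from now, like a nightly trigger.
        ScheduledFuture<Integer> run =
                scheduler.schedule(() -> runBatch(500), 100, TimeUnit.MILLISECONDS);
        System.out.println("executed " + run.get() + " test cases"); // blocks until done
        scheduler.shutdown();
    }
}
```

In production you would schedule at a fixed time of day (or keep using the OS scheduler) and publish the run's results to stakeholders instead of printing them.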
The overall project duration was four months, and our first working prototype was due after the first month of development. That left only two and a half months for testing, as the final working software would be available just 15 days before release.
With manual testing, we were not getting time to do regression testing with every new feature deployment. With automated testing, we were able to do regression testing in every sprint, and we could easily test the software in the last two weeks before release.
Steps to Implementing Automated Testing
We started our automation process by identifying the test cases that could and should be automated.
Test cases with more than two steps that had to be run repeatedly were selected for automation. We primarily chose test cases covering flows where a large number of users were working in the application or making transactions.
There were some test cases we had not been executing after every deployment because they required huge effort. When we were testing only manually, we had decided to run those test cases every three sprints. With automated testing, we could execute them after every deployment.
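The selection criteria above (more than two steps, repetitive, with preference for high-traffic flows) amount to a simple filter over the test inventory. This is a sketch; the record fields and sample test cases are hypothetical.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative filter for the automation-candidate criteria described above.
public class AutomationCandidates {
    // Hypothetical test-case record; fields mirror the criteria in the text.
    record TestCase(String name, int steps, boolean repetitive, boolean highTraffic) {}

    // Keep repetitive multi-step cases, listing high-traffic flows first.
    static List<TestCase> select(List<TestCase> all) {
        return all.stream()
                .filter(tc -> tc.steps() > 2 && tc.repetitive())
                .sorted(Comparator.comparing(TestCase::highTraffic).reversed())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<TestCase> suite = List.of(
                new TestCase("login", 5, true, true),
                new TestCase("one-off report", 4, false, false),
                new TestCase("tooltip text", 1, true, false));
        System.out.println(select(suite)); // only "login" qualifies
    }
}
```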
After shortlisting the test cases, we started writing scripts to automate them. Automation condenses a long manual test case into a single scripted run, which saves time.
We also checked the scripts for stability, to ensure quick deployment and automation for each build. We discovered that some test cases failing under automation were due to script issues or code errors. We revised the scripts to remove the script errors, using test data based on test case results.
After running our automated tests, all the bugs found were imported into our bug-tracking tool. We then assigned the bugs to developers, and if there was still an automation script issue, the testers worked to fix it.
Key Value Propositions of Test Automation
Here are three scenarios in our project where we derived value from automation.
Situation 1: All Manual Testing
Number of test cases to execute: more than 1,000
Time required to set up the environment for the build: 10 minutes
Time required to test one scenario: 10 minutes
Without automation, the effort required was 10 + (1,000 × 10)/5 = 2,010 minutes (setup plus execution split across five testers), equivalent to more than four workdays.
Situation 2: Manual and Automation Testing
Number of test cases to execute: 500 automated and 500 manual
Time required to run the 500 automated test cases: 3 hours
One single build tested in two parts:
Manual: 10 + (500 × 10)/5 = 1,010 minutes
Automated: 180 minutes
Automating this 50% scenario covered half the test cases in just three hours, saving both time and cost.
Situation 3: Predominantly Automation Testing
Number of test cases to execute: 600 automated and 200 manual
Time required to run all test cases: 30 minutes
With 80% of test cases automated, our entire testing run was completed in half an hour; before, running everything manually would have consumed seven hours.
The picture below displays the number of passed, failed, and skipped test cases, along with a categorized view of test case results after running scripts in a test environment.
Automation reports helped us understand how many test cases passed and failed. Based on the scenarios above, we chose a mix of manual and automated testing.
We could also easily view the categories and test results. For example, these results are for testing our software's login page; each test case is listed with its status, "Pass" or "Fail", displayed on the right side.
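The pass/fail summary such a report shows is essentially a tally over individual results. A minimal sketch, with hypothetical result records standing in for the report data:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the pass/fail summary an automation report presents.
public class ResultSummary {
    // Hypothetical result record; status is "Pass" or "Fail".
    record Result(String testCase, String status) {}

    // Counts results per status, e.g. {Pass=2, Fail=1}.
    static Map<String, Long> tally(List<Result> results) {
        return results.stream()
                .collect(Collectors.groupingBy(Result::status, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Result> loginPage = List.of(
                new Result("valid credentials", "Pass"),
                new Result("wrong password", "Pass"),
                new Result("empty username", "Fail"));
        System.out.println(tally(loginPage)); // counts of Pass and Fail
    }
}
```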
Automated testing positively impacted the quality and delivery of the product. We released high-quality software within a short span of time, detected more errors and bugs, and aligned our testing with agile development.