My team had everything in place for continuous quality improvement: behavior-driven development, unit tests, and integration tests. Developers typically run these tests as they write code to make sure it performs as expected. Yet when the code gets to me in QA, I still find myself running the same tests and finding issues.
Because the automated checks don't catch the issues I find manually, a new build could introduce new problems, which means retesting functionality. We realized we had to raise the quality of the product as it was being developed.
Introducing Quality-Driven Development
My organization implemented the idea of quality-driven development (QDD) with the promise of eliminating this duplicated testing effort. The process includes some new measures:
- Automated tests created by QA run in the development environment.
- We fix issues on the fly. If a tester finds a problem, she can explain it to a programmer and get it fixed. That means if a tester finds a bug in a story, the story is not done. There is no need to file a ticket and get it assigned to someone later; just fix it.
- We run manual test charters only once.
- No story is marked done until the automation itself can demo the feature, driving the browser and showing the product owner that the intended use case can actually happen.
It sounds wonderful—at least on paper. Now let’s get real.
The Results of Our Experiment
Initially, I had lots of questions about this approach. How feasible is it to run QA-implemented automation in the development environment? Would it be more efficient to just have automation set up after the development effort? This approach seemed time-consuming, especially when the developer was unable to call a task done until the automated tests ran successfully. It was clear to me from the outset that the development and QA teams would need to completely buy into the new process.
Our organization also introduced a new automation tool to automate browser interactions and our own quality management software. This imposed a learning curve for me: I not only had to automate new features, but also had to learn the tool and its capabilities along the way. As a result, my effort estimates during sprint planning were much higher than before. Initial automation scripts and testing took a while, but in some instances I finished earlier than planned, which got me questioning why some tasks went smoothly while others did not.
Still, I was confident that our efforts would all eventually pay off. Any new process will have its struggles. The challenge lies in how we handle these hurdles and progress toward the goal.
First, I needed to learn the automation tool. I reached out to a colleague with several years of experience who guided me through the process and ironed out a solid way of using the tool effectively. With his help, I was able to streamline the automation tasks.
At first, it was a struggle to keep up with the automation and have it complete when coding was complete. It also became clear very quickly that if unplanned changes came about, the existing automation framework would not function optimally. I really had to change the way I wrote the automated test scripts, because the rework was disrupting the development cycle. I decided to build the framework so that any change in scope could be easily accommodated without much rework; all the hooks were in place to allow for extensions if need be. I tried to create reusable objects where possible to keep the structure stable. I tested this approach a couple of times and did dry runs to make sure we had the framework just right. This gradually built a strong foundation that made the automation ready as soon as coding was complete.
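To illustrate the reusable-object idea, here is a minimal sketch in the style of the page object pattern. The `StubDriver` class is a hypothetical stand-in for a real browser driver (such as Selenium WebDriver), used here only so the structure can be exercised without any tooling installed; the page classes and locators are likewise illustrative, not our actual framework.

```python
class StubDriver:
    """Hypothetical stand-in for a real browser driver, for illustration."""
    def __init__(self):
        self.visited = []   # URLs the test navigated to
        self.fields = {}    # values typed into form fields

    def goto(self, url):
        self.visited.append(url)

    def fill(self, locator, value):
        self.fields[locator] = value


class BasePage:
    """Shared behavior for every page object; new pages extend this hook."""
    url = "/"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.goto(self.url)
        return self  # return self to allow method chaining


class LoginPage(BasePage):
    """One reusable page object; a change in scope touches only this class."""
    url = "/login"

    def login(self, user, password):
        self.driver.fill("#user", user)
        self.driver.fill("#password", password)
        return self


driver = StubDriver()
LoginPage(driver).open().login("tester", "secret")
print(driver.visited)
```

Because each page's locators and actions live in one class, an unplanned scope change means editing that class rather than every test script that touches the page.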
Second, the team had to get used to this new way of collaborating. Communication was key. I needed to talk to the developers on a daily basis to understand what was being coded. It took several rounds of feedback before a level of comfort was reached and a realization set in that the changes brought by QDD were worth it.
Typically, the norm had been for me to write test cases, wait until the build was available, perform manual testing, and report issues, if any. Eventually, these test cases would be automated and included in the regression suite. Now, instead, I write the automated test framework well before developers begin coding. For instance, because our tool supports BDD, I write scenarios based on the acceptance criteria. Once the feature is testable, I implement and execute the automation on a developer's environment. All possible issues are fixed right there, even before the build is "officially" available. This considerably reduces the turnaround time while increasing collaboration among team members.
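As a sketch of what writing a scenario before coding can look like, consider a hypothetical acceptance criterion, "a registered user can reset her password." In BDD style, each Given/When/Then maps to a small step function; the steps below use a plain in-memory dictionary as a placeholder, and their bodies would be filled in against the real application once the feature is testable. The names and data here are illustrative, not from our actual suite.

```python
# Hypothetical scenario: a registered user can reset her password.

def given_a_registered_user(state):
    # Arrange: seed a user with a known password.
    state["users"] = {"alice": "old-pass"}

def when_the_user_resets_her_password(state, user, new_password):
    # Act: perform the password reset.
    assert user in state["users"], "precondition: user must exist"
    state["users"][user] = new_password

def then_the_user_can_log_in_with(state, user, password):
    # Assert: the new credentials work.
    assert state["users"][user] == password

state = {}
given_a_registered_user(state)
when_the_user_resets_her_password(state, "alice", "new-pass")
then_the_user_can_log_in_with(state, "alice", "new-pass")
print("scenario passed")
```

Because the scenario exists before the code does, it doubles as a precise statement of the acceptance criteria that both the developer and I work against.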
A Better Use of the Team’s Time
The development team gradually began to appreciate the value of having a prebuilt quality-assured automation framework to support the code. They realized that their time was being more effectively spent focusing on the advanced nuances of the customer scope. Testing starts at the same time as development, and all the tests are already run during the development cycle.
The QA team also likes the new process. When a build is available on the QA environment, it affords us the opportunity to explore hitherto untested but very pertinent ad hoc scenarios. What used to be a predominantly mundane and time-consuming manual set of testing tasks is now all of a sudden a more dynamic, interesting sequence of ad hoc tasks.
With the automation tool integrated with Jira, execution results are pushed to the corresponding story, so I no longer need to write test cases for scenarios that are already automated. The results in Jira speak for themselves. QA practitioners' time on the team is now utilized more effectively than ever before on critical sprint tasks.
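In our case the tool handles the Jira integration itself, but the idea can be sketched in a few lines. The function below builds a comment payload summarizing a test run; the story key, counts, and the commented-out endpoint call are illustrative (the call follows Jira's REST comment format but is not executed here, since it would need a live Jira URL and credentials).

```python
import json

def build_result_comment(story_key, passed, failed):
    """Build the JSON body for a Jira comment summarizing a test run.
    The story key and counts are hypothetical example values."""
    return {
        "body": f"Automation results for {story_key}: "
                f"{passed} passed, {failed} failed."
    }

payload = build_result_comment("PROJ-123", passed=12, failed=0)
print(json.dumps(payload))

# A real integration would then push the comment, roughly:
# requests.post(f"{JIRA_URL}/rest/api/2/issue/PROJ-123/comment",
#               json=payload, auth=(user, api_token))
```

Once results land on the story automatically, the story itself becomes the test report, which is what removes the need to write and update separate test-case documents.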
I am still learning, improving, and working with the team to make the QDD process more stable and solid. As the automation grew, we recognized the opportunity to integrate the entire suite with Jenkins and run it as part of the build process. This meant no more running regression tests the traditional way; the automated tests run when needed throughout the sprint, ensuring consistent quality. These suites can now be tailored to the needs of specific environments (validation, beta, and production). This means a faster release of the product with increased quality.
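Tailoring suites to specific environments can be as simple as tagging each test with the environments it targets and letting the build job select the subset. The sketch below is illustrative, with hypothetical test names and a `TARGET_ENV` variable standing in for whatever parameter the Jenkins job passes; our actual suites and tags differ.

```python
import os

# Hypothetical registry mapping each test to the environments it targets.
SUITE = [
    ("test_login",        {"validation", "beta", "production"}),
    ("test_new_checkout", {"validation", "beta"}),   # not yet released
    ("test_smoke_only",   {"production"}),
]

def select_tests(environment):
    """Return the subset of the suite tailored to one environment."""
    return [name for name, envs in SUITE if environment in envs]

# A Jenkins job would set TARGET_ENV before invoking the runner.
env = os.environ.get("TARGET_ENV", "validation")
selected = select_tests(env)
print(env, selected)
```

The same mechanism lets one suite serve validation, beta, and production without maintaining three copies of the tests.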
Thanks to QDD, the time I would otherwise have spent running regression tests can now be used to continuously expand my knowledge on the technologies our organization employs to develop cloud-based applications. In addition, I am able to communicate with developers and architects during technical discussions. It gives me a sense of pride that I not only know what a feature does, but also how it is implemented.
Quality Driven by QA All the Way
When QDD was first introduced to our team, there was a willingness to accept a big learning curve. I realized pretty soon that this learning curve was not that steep, and I started to see the benefits while honing the process with every sprint.
QDD bridged the gap between QA and development teams because we have to work hand in hand throughout the development cycle. This helped me be more upfront with developers, and they are now more willing to provide suggestions or call out certain nuances, which helps us build a better test strategy.
The QDD process has certainly elevated me and my organization as a whole. It has streamlined our testing process, made us more agile than ever before, raised the quality of our products, and given us an increased awareness of customer satisfaction throughout.