My team had everything in place for continuous quality improvement: behavior-driven development, unit tests, and integration tests. These tests are typically run by developers while they write the code to make sure it performs as expected. Yet when the code gets to me in QA, I still find myself running the same tests and finding issues.
Because the automated checks don’t catch these issues, a new build could introduce new problems—which creates a need to retest functionality with every release. We realized we had to raise the quality of the product as it is being developed.
Introducing Quality-Driven Development
My organization implemented the idea of quality-driven development (QDD) with the promise of eliminating the test duplication effort. This process includes some new measures:
- Automated tests created by QA run in the development environment.
- We fix issues on the fly. If a tester finds a problem, she can explain it to a programmer and get it fixed right away. If a tester finds a bug in a story, the story is not done; there is no need to file a ticket and get it assigned to someone later. Just fix it.
- We run manual test charters only once.
- No story is marked done until the automation itself can demo the feature, driving the browser and showing the product owner that the intended use case can actually happen.
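To make that last measure concrete, here is a minimal sketch of what an "automation as demo" acceptance check might look like. The page object, login story, and expected messages are all hypothetical examples; in a real setup the page object would wrap a browser driver such as Selenium or Playwright, but a fake in-memory page is used here so the structure stands on its own.

```python
class FakeLoginPage:
    """Stand-in for a browser page object; a real one would wrap a WebDriver
    and actually fill form fields and click buttons."""

    def __init__(self):
        # Hypothetical seeded user for the demo scenario.
        self.users = {"alice": "s3cret"}
        self.message = ""

    def submit_login(self, username, password):
        # Mimics filling the login form and clicking "Sign in".
        if self.users.get(username) == password:
            self.message = f"Welcome, {username}!"
        else:
            self.message = "Invalid credentials"
        return self.message


def demo_login_story(page):
    """The 'demo' the story must pass before it can be marked done:
    the intended use case happens, and the failure path behaves sanely."""
    assert page.submit_login("alice", "s3cret") == "Welcome, alice!"
    assert page.submit_login("alice", "wrong") == "Invalid credentials"
    return "story demo passed"


if __name__ == "__main__":
    print(demo_login_story(FakeLoginPage()))
```

The point is not the fake page itself but the shape of the check: the same script that gates "done" can be replayed in front of the product owner as the demo.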
It sounds wonderful—at least on paper. Now let’s get real.
The Results of Our Experiment
Initially, I had lots of questions about this approach. How feasible is it to run QA-implemented automation in the development environment? Would it be more efficient to just have automation set up after the development effort? This approach seemed time-consuming, especially when the developer was unable to call a task done until the automated tests ran successfully. It was clear to me from the outset that the development and QA teams would need to completely buy into the new process.
Our organization also introduced a new tool to automate browser interactions, along with our own quality management software. This imposed a learning curve for me: I not only had to automate new features, but also had to learn the tool and its capabilities along the way. As a result, my effort estimates during sprint planning were much higher. Initial automation scripts and testing took a while, but in some instances I finished earlier than planned. This got me questioning whether I was doing things right in some cases versus others.
Still, I was confident that our efforts would all eventually pay off. Any new process will have its struggles. The challenge lies in how we handle these hurdles and progress toward the goal.
First, I needed to learn the automation tool. I reached out to a colleague with several years of experience who guided me through the process and ironed out a solid way of using the tool effectively. With his help, I was able to streamline the automation tasks.
At first, it was a struggle to keep up with the automation and complete it by the time the coding was complete. It also became clear very quickly that if unplanned changes came about, the existing automation framework would not function optimally. I really had to