The Impact of Quality-Driven Development

[article]
Summary:
When the development and QA teams work independently of each other, there can be some duplication of test efforts—which results in wasted time. The solution: quality-driven development, with QA-implemented automation run in the development environment. This is the story of one team's venture into this new process.

My team had everything in place for continuous quality improvement: behavior-driven development, unit tests, and integration tests. These tests are typically run by developers while they write the code to make sure the code performs as expected. Yet when the code gets to me in QA, I still find myself running the same tests and finding issues.

The behavior-driven development (BDD) examples are written during or after testing, so the bugs haven’t been caught yet when the code reaches me. The unit tests tend to focus on low-level components, allowing defects in the interactions between components to slip through. And the integration tests exercise the REST APIs directly, so they can’t catch visual problems or issues in the JavaScript that calls those APIs.

Because the automated checks don’t verify these things, a new build could introduce new problems—which creates a need to retest functionality. We realized we had to raise the quality of the product as it is being developed.

Introducing Quality-Driven Development

My organization implemented the idea of quality-driven development (QDD) with the promise of eliminating the test duplication effort. This process includes some new measures:

  • Automated tests created by QA run in the development environment.
  • We fix issues on the fly. If a tester finds a problem, she explains it to a programmer and gets it fixed. If a tester finds a bug in a story, the story is not done; there is no need to file a ticket and get it assigned to someone later. Just fix it.
  • We run manual test charters only once.
  • No story is marked done until the automation itself can demo the feature, driving the browser and showing the product owner that the intended use case can actually happen.

It sounds wonderful—at least on paper. Now let’s get real.

The Results of Our Experiment

Initially, I had lots of questions about this approach. How feasible is it to run QA-implemented automation in the development environment? Would it be more efficient to just have automation set up after the development effort? This approach seemed time-consuming, especially when the developer was unable to call a task done until the automated tests ran successfully. It was clear to me from the outset that the development and QA teams would need to completely buy into the new process.

Our organization also introduced a new automation tool to automate browser interactions and our own quality management software. This imposed a learning curve for me: I not only had to automate new features, but also had to learn the tool and its capabilities along the way. As a result, my effort estimates during sprint planning were much higher. The initial automation scripts and testing took a while, but in some instances I finished earlier than planned, which got me questioning whether I was approaching some features correctly and others not.

Still, I was confident that our efforts would all eventually pay off. Any new process will have its struggles. The challenge lies in how we handle these hurdles and progress toward the goal.

First, I needed to learn the automation tool. I reached out to a colleague with several years of experience who guided me through the process and ironed out a solid way of using the tool effectively. With his help, I was able to streamline the automation tasks.

At first, it was a struggle to keep up with the automation and have it complete by the time coding was complete. It also became clear very quickly that if unplanned changes came about, the existing automation framework would not function well. I had to change the way I wrote the automated test scripts, because this was disrupting the development cycle. I decided to build the framework so that any change in scope could be accommodated without much rework, with all the hooks in place to allow for extensions if need be. I tried to create reusable objects where possible to keep the structure stable. I tested this approach a couple of times and did dry runs to make sure we had the framework just right. This gradually built a strong foundation that keeps the automation ready when coding is complete.
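To make the "reusable objects" idea concrete, here is a minimal sketch of the kind of page object I mean, assuming Selenium WebDriver in Java. The screen, the element IDs, and the class names (LoginPage, HomePage) are hypothetical, not our actual product's. Because test scripts only talk to these objects, an unplanned UI change usually means fixing one class rather than rewriting every script.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for a hypothetical login screen: the only place that knows its locators.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // The base URL is passed in so the same script can run against a developer's
    // environment or the QA environment without changes.
    public LoginPage open(String baseUrl) {
        driver.get(baseUrl + "/login");
        return this;
    }

    public HomePage loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
        return new HomePage(driver); // hand back the next page object to keep scenarios readable
    }
}

// A second, equally hypothetical page object for the screen shown after login.
class HomePage {
    private final WebDriver driver;

    HomePage(WebDriver driver) {
        this.driver = driver;
    }

    boolean isLoaded() {
        return driver.findElement(By.id("dashboard")).isDisplayed();
    }
}
```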

Second, the team had to get used to this new way of collaborating. Communication was key. I needed to talk to the developers on a daily basis to understand what was being coded. It took several rounds of feedback before a level of comfort was reached and the team realized that the changes brought by QDD were worth it.

Typically, the norm had been for me to write test cases, wait until the build was available, perform manual testing, and report any issues. Eventually, these test cases would be automated and included in the regression suite. Now, instead, I write the automated test framework well before developers begin their coding. For instance, because our tool supports BDD, I write scenarios based on the acceptance criteria. Once the feature is testable, I implement and execute the automation on a developer’s environment. Any issues are fixed right there, even before the build is “officially” available. This considerably reduces the turnaround time while increasing collaboration among team members.
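As a rough sketch of what this looks like in practice, here is a scenario written from acceptance criteria and the step definitions behind it, assuming Cucumber-JVM with Selenium WebDriver (the stack mentioned in the comments below) and reusing the hypothetical LoginPage and HomePage objects sketched earlier. The feature wording and the baseUrl property are illustrative, not our actual product's.

```java
// Feature file (e.g., src/test/resources/features/login.feature), written from the
// story's acceptance criteria before the code exists:
//
//   Scenario: Registered user can sign in
//     Given the application is running
//     When the user signs in with valid credentials
//     Then the dashboard is displayed

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import org.junit.Assert;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSteps {
    private WebDriver driver;
    private String baseUrl;
    private HomePage homePage;

    @Given("the application is running")
    public void theApplicationIsRunning() {
        // Pointing baseUrl at a developer's machine is what lets these checks run
        // before the build is "officially" available in QA.
        baseUrl = System.getProperty("baseUrl", "http://localhost:8080");
        driver = new ChromeDriver();
    }

    @When("the user signs in with valid credentials")
    public void theUserSignsIn() {
        homePage = new LoginPage(driver).open(baseUrl).loginAs("demo-user", "demo-password");
    }

    @Then("the dashboard is displayed")
    public void theDashboardIsDisplayed() {
        Assert.assertTrue(homePage.isLoaded());
        driver.quit();
    }
}
```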

A Better Use of the Team’s Time

The development team gradually began to appreciate the value of having a prebuilt, quality-assured automation framework supporting the code. They realized their time was better spent focusing on the finer points of the customer’s requirements. Testing starts at the same time as development, and all the tests are already run during the development cycle.

The QA team also likes the new process. When a build reaches the QA environment, we can use the time to explore pertinent ad hoc scenarios that were previously untested. What used to be a predominantly mundane, time-consuming set of manual testing tasks is now a more dynamic and interesting round of exploratory, ad hoc testing.

With the automation tool integrated with Jira, the execution results are pushed to the corresponding story, so I no longer need to write test cases for the scenarios that are already automated. The automation results in Jira speak for themselves, and QA time on the team is now spent more effectively than ever on critical sprint tasks.
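Our tool handled this integration for us, but as a rough sketch of what pushing a result to a story involves under the hood, the snippet below posts a summary comment through Jira’s REST API (the /rest/api/2/issue/{issueKey}/comment endpoint). The server URL, issue key, credentials, and result text are all placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class JiraResultPublisher {
    public static void main(String[] args) throws Exception {
        String jiraBase = "https://jira.example.com";   // placeholder Jira server
        String issueKey = "PROJ-123";                   // the story the automated run belongs to
        String summary  = "Automation run: 42 passed, 0 failed (build #107)";

        // Placeholder service-account credentials, encoded for basic auth.
        String auth = Base64.getEncoder().encodeToString("qa-bot:secret".getBytes());
        String body = "{\"body\": \"" + summary + "\"}";

        // Add the result summary as a comment on the story.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jiraBase + "/rest/api/2/issue/" + issueKey + "/comment"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Jira responded with HTTP " + response.statusCode());
    }
}
```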

I am still learning, improving, and working with the team to make the QDD process more stable and solid. As the automation grew, we recognized the opportunity to integrate the entire suite with Jenkins and run it as part of the build process. This means no more running regression tests the traditional way; the automated tests run when needed throughout the sprint, ensuring consistent quality. The suites can now be tailored to the needs of specific environments (validation, beta, and production), which means a faster release of the product with increased quality.
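One common way to tailor suites per environment, and roughly what I have in mind here, is tagging scenarios and giving each environment its own runner. The sketch below assumes Cucumber-JVM’s JUnit runner with hypothetical @validation and @wip tags and illustrative package and folder names.

```java
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Runner for the validation environment; beta and production would get their own
// runners with their own tag expressions.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        glue = "com.example.steps",
        tags = "@validation and not @wip"   // only the checks meant for the validation environment
)
public class ValidationSuite {
}
```

Newer Cucumber versions can also pick the tags at runtime (for example, mvn test -Dcucumber.filter.tags="@beta"), so a single Jenkins job can pass in whichever environment it is building for.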

Thanks to QDD, the time I would otherwise have spent running regression tests can now be used to continuously expand my knowledge of the technologies our organization uses to develop cloud-based applications. I am also able to contribute to technical discussions with developers and architects. It gives me a sense of pride that I know not only what a feature does, but also how it is implemented.

Quality Driven by QA All the Way

When QDD was first introduced to our team, there was a willingness to accept a big learning curve. I realized pretty soon that this learning curve was not that steep, and I started to see the benefits while honing the process with every sprint.

QDD bridged the gap between QA and development teams because we have to work hand in hand throughout the development cycle. This helped me be more upfront with developers, and they are now more willing to provide suggestions or call out certain nuances, which helps us build a better test strategy.

The QDD process has certainly elevated me and my organization as a whole. It has streamlined our testing process, made us more agile than ever before, raised the quality of our products, and given us an increased awareness of customer satisfaction throughout.

User Comments

Andrew Sieffert:

Great article, Praveena. I am curious to hear how difficult it was to implement BDD, especially in its early stages. My guess is QDD wouldn't have been possible without first becoming quite adept at BDD. Our development organization is currently new to Scrum and we are working through some of the growing pains as part of that transition. BDD isn't even on the radar for us, but I was curious if you have any suggestions: Should we adopt it early, or should we grow into it as we familiarize ourselves with Scrum? Your input would be greatly appreciated.

December 1, 2015 - 6:03pm
Praveena Ramakrishnan:

Hi Andrew,

It is a good idea to start thinking about BDD early on as you grow with Scrum. It gives the team time to adapt to BDD as it quickly becomes a part of weekly sprints. In our early stages, the development team implemented some of our tests using Cucumber and Selenium WebDriver, and it was then demoed to a broader team. Like any other development need, we had to think about laying down the framework for the automation and then building on top of it.

We did hit roadblocks. Every sprint, we focused more on functional development and kept BDD for the end. BDD never got picked up that sprint and eventually ended up being pushed out to the next one. Before we knew it, we had accumulated a backlog of BDD tests that needed to be completed. Eventually, we got a grip on the automation, and QA also started contributing to implementing BDD, thereby reducing the backlog.

BDD tests are now integrated as part of the Jenkins build process. We are now at a point where we don't deploy if the Jenkins build fails the BDD tests!

Hope that helped.

-Praveena

December 2, 2015 - 8:29pm
Steve Martino:

What tool did you choose to build your automation framework?

December 23, 2015 - 9:18am
Vivek Chakravarthy:

A nice article! I wanted to ask: when the test automation framework we are developing does browser-based testing using Selenium, how can we implement QDD? Until the developers finish the functionality, we don't have a UI to interact with in the browser. What strategy do you think we should apply for QDD?

Thanks!

April 22, 2020 - 1:02am
