When I joined the particular division where I work as a senior test manager, test maturity was low. Testers were endangering the success of iterations by overcommitting and not delivering on promises. Most testing activities were undocumented, and test execution was defined in an ad hoc way. Test engineers were overworked and had low morale; even the joy of a job well done was missing.
To understand the situation better, we need to look at the organizational structure. Software engineers were grouped into product divisions. Within a product division, multiple development teams worked toward common goals. This was our typical team structure:
- Development manager
  - Oversees team operation
  - Acts like a ScrumMaster
  - Is the line manager for developers
- Senior developers
- Product owner
- Technical architect
- Senior test engineers
The team was geared toward delivery, with the inherent mindset focused on functional delivery, but other layers of testing were simply missing, risking overall production safety. Test engineers, though reporting to a test manager outside the team, were responsible for working with their assigned team, and always treated a successful iteration as their first priority. Possibilities for actual QA were limited, with no time slots available to educate testers for a better outcome.
We created product development plans and updated them in Microsoft Team Foundation Server with the following structure: Large-scale activities would be captured as an epic, defined as around two to four months of work involving multiple development teams. Epics would be broken down into features, each one to three months of work with a limited number of teams involved. Features, in turn, would be broken down into requirements to be worked on by one team for one iteration lasting two weeks.
All these documents were captured as change requests. So if Product 1.0 had been designed, engineered, and delivered, the epic and its features and requirements for Product 1.1 later contained only the changes required.
This fact by itself wouldn’t have been a problem, but test cases, if they were captured at all, were connected to a requirement. Once the requirement was closed, the test cases also disappeared into a void. Because test cases were badly tagged, it was difficult to get any real understanding through queries alone. (Microsoft Test Manager, which stores test plans and results on Team Foundation Server, does not have a built-in test case repository. A test case can exist that is not visible in any UI and can be found only via a query.)
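To illustrate, the only reliable way to enumerate such hidden test cases was a work item query. A flat WIQL query along these lines, run from the work item query editor, lists every test case work item in the project, whether or not any suite still references it (this is a generic sketch using the standard TFS system fields, not one of our actual queries):

```sql
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.WorkItemType] = 'Test Case'
ORDER BY [System.ChangedDate] DESC
```

Comparing such a list against what is visible in the test plan UI is what reveals the orphans.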
As if these weren’t enough challenges, we also saw one particular misunderstanding of agile surface repeatedly: "We are agile, so there’s no need for test documentation."
A change was inevitable.
However, we had further stumbling blocks. Upgrading the available toolset or changing the methods, guided interactions, and management tools was out of the question.
Because there were no other options on the table, my team decided we’d have to take small steps to effect long-term beneficial change in the organization.
To make the case for regression testing and the documentation needed for it, we added a new type of traceability to our requirement test case world. We created a new notion, the business function—not bound to epics, but to the currently available functions in a software application.
It’s a system of independent folders, each covering a specific business function. Ownership of this approach was granted to testers from day one, so we came up with some new rules, too. To support the business functions, we formalized two more requests: please write down your test cases, and if anybody could ever reuse them, link them to a business function.
We named this system "business function test suites," or BFTS.
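The idea behind the BFTS can be sketched as a small data model. The class and function names below are hypothetical illustrations, not our actual tooling (which lived in Microsoft Test Manager); the point is that a test case, once mapped, outlives the change request it was first written for:

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    case_id: int
    title: str


@dataclass
class BusinessFunction:
    """One BFTS folder: a durable home for reusable test cases."""
    name: str
    cases: list = field(default_factory=list)


class Bfts:
    """Business function test suites: test cases are mapped to the
    currently available functions of the application, not to the
    short-lived requirements they were first written for."""

    def __init__(self):
        self.functions = {}

    def map_case(self, function_name, case):
        # The five-second habit: link a reusable case to its function.
        bf = self.functions.setdefault(function_name,
                                       BusinessFunction(function_name))
        if case not in bf.cases:
            bf.cases.append(case)

    def reusable_cases(self, function_name):
        """Everything a tester can reuse when the function changes again."""
        bf = self.functions.get(function_name)
        return list(bf.cases) if bf else []


# A tester writes a case for one requirement, then maps it once:
bfts = Bfts()
bfts.map_case("User login", TestCase(4711, "Valid user can log in"))

# Months later, product 1.1 touches login; the case is still there:
print([c.title for c in bfts.reusable_cases("User login")])
```

The key design choice is that the folder, not the requirement, owns the case, so closing a change request no longer makes its tests vanish.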
The long, sometimes painful process of habit formation had started, consisting of kind reminders to every test engineer (often daily) in order to establish a new routine. It only takes around five seconds to map a reusable business function, but it’s an easy step to forget.
We started to capture and map test cases against the new BFTS. After a couple of months, products began to receive version updates, and the testers began to realize how much a neatly organized, clickable library of test cases facilitated their work.
Alongside the test cases and their BFTS style of cataloging, we started to do high-level test designs: after receiving a requirement, test engineers reviewed it and wrote down informal sentences outlining a local testing goal. We also involved programmer peers in the reviews, resulting in higher initial quality in the test cases.
So, with the new methods in place, here was our improved workflow: When a test engineer receives a new requirement, they create a requirements-based test suite where a high-level test design is captured. During the high-level test design, the test engineer understands the BFTS entry point as a source for any test case to be reused, and as a destination where new test cases should be mapped. After finishing the iteration, we do a quick review of the test assets, finalize the mappings, and close down.
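The steps of that workflow can be summarized in a rough, self-contained sketch. All names here are hypothetical, and plain dictionaries stand in for TFS work items; it only shows how reuse accumulates across iterations:

```python
def run_iteration(requirement, bfts):
    """Hypothetical sketch of the per-requirement test workflow.

    `bfts` maps a business function name to its list of test cases.
    """
    # 1. Requirements-based suite holding the high-level test design:
    #    informal sentences outlining the local testing goal.
    suite = {"requirement": requirement, "design": [], "cases": []}
    suite["design"].append(f"Verify {requirement['title']} end to end")

    # 2. Reuse: pull existing cases from the touched business function.
    function = requirement["business_function"]
    suite["cases"].extend(bfts.setdefault(function, []))

    # 3. New cases written this iteration are mapped back to the BFTS
    #    so the next version update can reuse them.
    new_case = {"title": f"New checks for {requirement['title']}"}
    suite["cases"].append(new_case)
    bfts[function].append(new_case)

    # 4. Close-down: review the assets and finalize the mappings.
    return suite


bfts = {}
run_iteration({"title": "password reset",
               "business_function": "User login"}, bfts)
second = run_iteration({"title": "reset via email",
                        "business_function": "User login"}, bfts)
# The second iteration starts with the case the first one left behind.
print(len(second["cases"]))  # prints 2
```

Each iteration both consumes and feeds the BFTS, which is why the effort per iteration drops over time.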
With high-level test design and BFTS-driven test case implementation, our testers began to realize that a test case is their contribution to the overall success of their software. This system is their product, giving them trust in their collaboration and a real sense of ownership.
We’re able to understand our work better: the time needed for design and implementation, and which test assets can be reused. Altogether, this raised the reliability of our commitments, lowering risk and rebuilding trust. It also opened up new possibilities for development: by lowering the average effort needed to cover an iteration from a QA point of view, we freed up time for learning and practicing test automation.
Two months were spent on collaborative problem analysis and understanding our technical possibilities, and nine months were needed to establish our new habits. After about two years, BFTS is autonomous, with around six thousand test cases, both manual and automated, mapped to its folders.
By taking control of our process, we have enabled test engineers to do their entire job in a better way.