A Common Tool for an Uncommon Problem


Have you ever wondered how many test cases you have written in your career? Can you imagine the number of test cases written by everyone in this field? There must be a way to leverage that knowledge. How do you reuse test cases and the knowledge that created them? In this article, Paul Sixt tells you how to do just that!

The Problem
This was a problem unlike others I had faced. We were going to test an entire family of products, from the low end all the way to the Cadillac model. This was exciting: getting to learn the product family, gaining insight into each member, and writing test cases for every member of the family. OK, this doesn't sound fun anymore. Trying to contain our excitement at the thought, we decided to invest some time figuring out how the different members were alike and how they differed. What we found was that each generation had the same feature set as the previous one, plus new or expanded features. The product was thus an accumulation of features: once a feature was introduced, the same test cases could verify it in every following member.

A Solution
A database seemed the most reasonable solution to our problem. We chose Visual Basic and Access because we had experience with them, and they were easy to obtain and affordable. Now that we had decided on the tool set, we needed to work on the design. We started with the database, which made sense since it would ultimately drive the GUI. The problem was how to create the database without duplicating data. We needed a simple yet flexible design.

The hard part was keeping the test cases free of references to any particular member while still being clear enough to cover the action and expected result. The real beauty is that we reduced the effort of writing test cases to writing them once, for the most complex member only. On most projects where test cases are shared, an enormous amount of time is spent making the same correction for every member, and you never get them all. We eliminated that issue: a single manual correction was reflected everywhere the test case was used. No guesswork! We also worked on making it easy to add new members. Once the initial member and test cases were entered, we could add a new member, define its features, and have its test cases completed within an hour.

The relationship built in the Scenario table is what drives the system. We defined each member by a set of features, and this is represented in the table labeled Scenario. Notice that we used unique identifiers, rather than textual names, to build the relationship. The names were constantly being modified; with this system, a single change to a member's name is reflected everywhere with little effort.
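The article's actual Access schema isn't shown, but the ID-based design it describes can be sketched in SQLite. The table and column names below (Members, Features, Scenario, and their ID columns) are assumptions for illustration; the point is that the Scenario table links members to features by ID, so renaming a member touches exactly one row:

```python
import sqlite3

# Hypothetical schema sketch; the article's real table and column
# names may differ. IDs, not names, carry the relationships.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Members  (MemberID  INTEGER PRIMARY KEY, MemberName  TEXT);
CREATE TABLE Features (FeatureID INTEGER PRIMARY KEY, FeatureName TEXT);
-- Scenario: which features each member has
CREATE TABLE Scenario (MemberID INTEGER, FeatureID INTEGER);
""")

conn.executemany("INSERT INTO Members VALUES (?, ?)",
                 [(1, "Basic"), (2, "Cadillac")])
conn.executemany("INSERT INTO Features VALUES (?, ?)",
                 [(10, "Print"), (11, "Export")])
# The high-end model accumulates every feature; the basic model has one.
conn.executemany("INSERT INTO Scenario VALUES (?, ?)",
                 [(1, 10), (2, 10), (2, 11)])

# Renaming a member is a single UPDATE; every join picks it up via the ID.
conn.execute("UPDATE Members SET MemberName = 'Deluxe' WHERE MemberID = 2")

rows = conn.execute("""
    SELECT m.MemberName, f.FeatureName
    FROM Scenario s
    JOIN Members  m ON m.MemberID  = s.MemberID
    JOIN Features f ON f.FeatureID = s.FeatureID
    ORDER BY m.MemberName, f.FeatureName
""").fetchall()
print(rows)
```

Because the Scenario rows store only IDs, the rename never has to be repeated per feature or per test case.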

Each feature is defined by the set of test cases related to it. This is represented in the Test Cases table. We also added a unique ID for each test case. This was a holdover from a previous database design; although it is not currently used, we could imagine enough scenarios and future enhancements that could take advantage of it, so it stayed. The actual test case reports are based on the relationships between members and features, and then between features and test cases. This allows us to print the test cases by member or by feature.
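The member-to-feature-to-test-case report described above is just two joins. Here is a minimal sketch, again with assumed table and column names (TestCases, Action, Expected, and so on are placeholders, not the article's schema):

```python
import sqlite3

# Sketch of the report chain: member -> features -> test cases.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Members   (MemberID  INTEGER PRIMARY KEY, MemberName  TEXT);
CREATE TABLE Features  (FeatureID INTEGER PRIMARY KEY, FeatureName TEXT);
CREATE TABLE Scenario  (MemberID INTEGER, FeatureID INTEGER);
CREATE TABLE TestCases (TestCaseID INTEGER PRIMARY KEY,
                        FeatureID  INTEGER,
                        Action     TEXT,
                        Expected   TEXT);
""")
conn.execute("INSERT INTO Members VALUES (1, 'Basic')")
conn.execute("INSERT INTO Features VALUES (10, 'Print')")
conn.execute("INSERT INTO Scenario VALUES (1, 10)")
conn.execute("INSERT INTO TestCases VALUES "
             "(100, 10, 'Print one page', 'Page prints')")

# All test cases for member 1 fall out of the two joins.
report = conn.execute("""
    SELECT f.FeatureName, t.Action, t.Expected
    FROM Scenario s
    JOIN Features  f ON f.FeatureID = s.FeatureID
    JOIN TestCases t ON t.FeatureID = f.FeatureID
    WHERE s.MemberID = 1
""").fetchall()
print(report)
```

Dropping the `WHERE` clause and grouping on `f.FeatureName` instead would give the by-feature report the article mentions.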

The First Hurdle and Feature Creep
One of the curveballs we encountered was editing test cases when a feature changed or was removed, so we built new interfaces to manage the different kinds of edits. Editing a member name or feature name was simple; a single straightforward interface handled those and was quickly created. The issue was editing the relationships between members and features, or between features and test cases. We identified two problems: floating test cases and floating features. A floating test case is related to a feature that no longer exists, or has lost the relationship altogether. A floating feature has no test cases or, worse, has test cases that no member is using. We added code in the interface to check for these instances. (We could also have enforced this in the database.)
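The floating-record checks the article describes amount to looking for rows whose foreign key no longer resolves. A sketch of both checks, using the same assumed schema as above (the article's interface code was Visual Basic, not SQL like this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Features  (FeatureID INTEGER PRIMARY KEY, FeatureName TEXT);
CREATE TABLE Scenario  (MemberID INTEGER, FeatureID INTEGER);
CREATE TABLE TestCases (TestCaseID INTEGER PRIMARY KEY,
                        FeatureID INTEGER, Action TEXT);
""")
conn.execute("INSERT INTO Features VALUES (10, 'Print')")
conn.execute("INSERT INTO Features VALUES (11, 'Export')")  # no test cases
conn.execute("INSERT INTO Scenario VALUES (1, 10)")
conn.execute("INSERT INTO TestCases VALUES (100, 10, 'Print one page')")
conn.execute("INSERT INTO TestCases VALUES (101, 99, 'Orphan')")  # feature 99 gone

# Floating test cases: their feature no longer exists.
floating_cases = conn.execute("""
    SELECT TestCaseID FROM TestCases
    WHERE FeatureID NOT IN (SELECT FeatureID FROM Features)
""").fetchall()

# Floating features: no test cases, or no member uses them.
floating_features = conn.execute("""
    SELECT FeatureID FROM Features
    WHERE FeatureID NOT IN (SELECT FeatureID FROM TestCases)
       OR FeatureID NOT IN (SELECT FeatureID FROM Scenario)
""").fetchall()

print(floating_cases, floating_features)
```

Enforcing the same rules in the database itself, as the article notes was possible, would mean declaring foreign-key constraints so the orphan rows could never be created in the first place.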

The GUI has a screen for adding and editing each of the members, features, and test cases. The editing screens allow moving to the next, previous, first, and last record. The opening screen lets the operator pick the database of choice. As test cases and features were being added, we saw the need for a filter, so we added a filter screen where you can filter the data by member or by feature. The filter drives the way the data is shown on the different screens.
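A filter that drives every screen can be implemented as one helper that builds the shared WHERE clause; each screen's query then appends it. This is a hypothetical sketch (the article does not show its filter code, and the column names are the same assumptions as before):

```python
# Hypothetical filter helper: builds the WHERE clause and parameter
# list that every screen's query against the Scenario table would share.
def build_filter(member_id=None, feature_id=None):
    clauses, params = [], []
    if member_id is not None:
        clauses.append("s.MemberID = ?")
        params.append(member_id)
    if feature_id is not None:
        clauses.append("s.FeatureID = ?")
        params.append(feature_id)
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return where, params

# Filter by member only; no filter at all returns an empty clause.
member_only = build_filter(member_id=2)
no_filter = build_filter()
print(member_only, no_filter)
```

Keeping the filter in one place means every screen sees the same slice of data, which matches the behavior the article describes.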

Future Enhancements
The first enhancement would be to either integrate this with the bug-tracking database or create a table to represent each test pass through the test cases. Being statistically inclined, I already have three additional reports in mind. The first would report encountered bugs versus feature area. Are certain feature areas predisposed to large numbers of bugs? Does that represent new technology or a highly critical area? The opposite is also informative: this feature has been stable for the last "x" members, so should the QA group adjust its focus?

The next step would be to track whether a member has bugs in an area where other members don't. This type of discovery could point to integration problems. Are we working with the same code base and version? Who is the configuration and/or compile manager, and can that information lead to a quicker turnaround? Is the test case clear and understandable? Is this reflective of tester error or interpretation? Maybe it's a training issue?

The last would be a selfish report. I would like to see a report based on test cases: completed, blocked, and open. We could also use a baseline or the current estimate as a guide. I am sure a formula of some kind could turn this information into a testing dashboard that is nearly live.

StickyMinds is a TechWell community.
