Imagine a little application that pulls all the rows for the last month out of a database, formats them in a manner the government can understand, and sends them off to a government agency for accountability.
For me, it’s not too hard to imagine; I’ve done that sort of work many times in my career. Here, I’d like to talk about a way to test it, using a real example, simplified just enough to cover in one article.
It’s time to get specific.
I once worked on a project that helped the government manage the guest worker process, and the software reported who was visiting the country and the state of their passports and visas. A simple maintenance change might be a new government code: People on an expired visa from the United Kingdom now get an output of “7” for visa_type, and people from any other country who have expired visas continue to get a “6.”
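To make the rule concrete, here's a minimal sketch of the new mapping in Ruby, the language we used for our test tooling. The method name and the country string are my own stand-ins; the real system's fields and codes table were more involved.

```ruby
# Hypothetical sketch of the changed rule. It covers only expired visas;
# the codes for valid visas were untouched by this change.
def expired_visa_type(country)
  country == "United Kingdom" ? "7" : "6"
end

puts expired_visa_type("United Kingdom")  # "7"
puts expired_visa_type("France")          # "6"
```

A change like this is trivial to code and trivial to unit test, which is exactly why the interesting testing question is what it might break elsewhere.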
The obvious way to test this is to find or set up some example data, run the extract, and check the results. When we tested this change, we had programmer tests to check the new functions, and we also spot-checked the output. But is that really enough?
The Good News
Once the extract enters maintenance mode, a powerful new tool becomes available: using the previous version as an oracle. In testing, an oracle is a way to recognize a problem when you encounter it. By comparing the previous version's results to the current version's, anything that changed but is not directly related to the requirements of this change is worth looking into as a potential problem.
Showing no changes outside the requirement for millions (or even tens of millions) of data sets based on real transactions doesn't prove anything, strictly speaking. But it does show that none of the real, live data we processed recently would cause a problem, and that's nothing to sneeze at.
Here’s an example of how we did it.
Testing Even More
Over my first few weeks working for the government client, I had to test these sorts of changes. I'd start with the obvious spot check, but I would also run the extract twice, creating two data files: one with the current build and another with the previous production version. Then I built small programs in Ruby and Perl to compare those two files.
In this case the files were in XML format; each transaction was a different blob of XML, looking something like this:
<fullname>Matthew Robert Heusser</fullname>
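Fleshed out a little (the surrounding element and the other field names here are my guesses, not the real schema), a whole transaction might have looked like:

```xml
<transaction>
  <id>1001</id>
  <fullname>Matthew Robert Heusser</fullname>
  <country>United Kingdom</country>
  <visa_type>7</visa_type>
</transaction>
```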
My little diff tools would spit out every ID that had different information and what those differences were. At the end, the tool would list all the differing IDs so that I could cut and paste them into a database query.
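Here's a simplified sketch of what those comparison scripts did, in Ruby. The transaction wrapper and field names are stand-ins, and a real tool would use a proper XML parser; naive regex matching is enough to show the idea of keying each transaction by ID and reporting field-level differences.

```ruby
# Parse an extract into { id => { field => value } }, assuming each
# transaction is a flat blob of simple elements. Hypothetical format.
def parse_transactions(xml)
  xml.scan(%r{<transaction>(.*?)</transaction>}m).map do |(body)|
    fields = body.scan(%r{<(\w+)>(.*?)</\1>}).to_h
    [fields["id"], fields]
  end.to_h
end

# Compare two extracts and return { id => [changed field names] },
# covering records added, removed, or altered between the two runs.
def diff_extracts(old_xml, new_xml)
  old_txns = parse_transactions(old_xml)
  new_txns = parse_transactions(new_xml)
  changed = {}
  (old_txns.keys | new_txns.keys).each do |id|
    old_fields = old_txns[id] || {}
    new_fields = new_txns[id] || {}
    diffs = (old_fields.keys | new_fields.keys)
            .reject { |k| old_fields[k] == new_fields[k] }
    changed[id] = diffs unless diffs.empty?
  end
  changed
end

before = "<transaction><id>42</id><visa_type>6</visa_type></transaction>"
after  = "<transaction><id>42</id><visa_type>7</visa_type></transaction>"
p diff_extracts(before, after)  # lists the field that changed for ID 42
```

The payoff of keying by ID is that the output drops straight into a database query: every ID the tool prints is a record worth pulling up and inspecting by hand.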