In this article, Chris McMahon offers an approach to implementing automated tests at the user interface level in a way that is visually simple and should save a lot of work when analyzing and maintaining tests down the road.
Almost everyone agrees that having automated tests at the user interface (UI) level is worthwhile, but analyzing and maintaining such tests can become very expensive very quickly if the tests are implemented poorly. As you grow your automated UI test suite, consider an approach like the one I'm about to describe.
Consider your application as a circle, where all the application functions are inside the circle and the circumference of the circle provides an arbitrarily large number of access points to those functions.
The Starting Points
On your circle, draw a dot on the circumference. Now draw another dot a good distance away from the first, and a third between the other two. These dots represent starting places for your automated tests. As an example, let's say that the application you need to test manages users, money, and time (since a lot of software does exactly that).
If you are testing a Web application, each dot you draw on the circumference of your circle will be represented by a URL.
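For the users/money/time application, the starting points might look something like this (the URLs and page names here are hypothetical, purely to illustrate the idea):

```python
# Hypothetical deep-link URLs for the users/money/time application.
# Each one is a starting point on the circumference of the circle:
# a place a test can navigate to directly, without walking there from a home page.
STARTING_POINTS = {
    "U": "https://example.com/app/users/list",
    "M": "https://example.com/app/accounts/balance",
    "T": "https://example.com/app/schedule/month",
}

for label, url in STARTING_POINTS.items():
    print(label, url)
```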
Inside your big circle draw three very little circles in arbitrary places. Label them "U," "M," and "T" (for users, money, and time) in very small letters. Each of these little circles represents an end state of the application.
Now, draw lines from the points on the circumference of the circle to the small circles inside. Each of these lines represents a path through the application. Your automated test will start at the outside of the circle and take a certain number of steps to arrive at a place within the circle.
If the automated test traverses the path through the UI without any assertions failing, then the test will pass.
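A path test can be sketched as a sequence of steps, each paired with an assertion; the test passes only if every assertion along the way holds. Here is a minimal sketch in Python, where `FakeApp` is a stand-in for whatever UI driver you actually use, not a real tool:

```python
# FakeApp is a hypothetical stand-in for a real UI driver.
class FakeApp:
    def __init__(self):
        self.page = "users/list"  # an arbitrary starting point on the circumference

    def click(self, link):
        self.page = link  # in this sketch, navigating just changes the current page

def run_path(app, steps):
    """Run each (action, expected_page) step; fail fast on the first bad assertion."""
    for action, expected_page in steps:
        app.click(action)
        assert app.page == expected_page, f"expected {expected_page}, got {app.page}"
    return "PASS"

# One path from a starting point toward the "U" end state.
path_u1 = [("users/new", "users/new"), ("users/saved", "users/saved")]
print(run_path(FakeApp(), path_u1))  # → PASS
```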
Draw several more of these paths through the UI within your circle. Label them U1, U2, U3, M1, M2, M3, T1, T2, T3. You have a complex application; it will certainly take more than one test to validate that everything is working correctly.
When you are done, your circle will be covered by a web of lines of various lengths and directions. Some lines will intersect. Some won't. Some areas might have greater density of lines than others. Hopefully, the circle will be more or less evenly covered by lines.
This diagram of paths through the UI is a representation of a well-designed set of automated UI tests. It represents feature coverage, not code coverage. Also, the paths through the application are mostly orthogonal to each other. I'll talk about why this is important shortly, but first let's take a look at what can go wrong.
One objection goes like this: Regardless of how many starting points I have, every new test has to log on to the application. Why doesn't the diagram show the single login point for every test?
Any routine, repetitive action that does not contribute directly to validating the behavior of the application under test should be hidden away in a fixture within the test harness. In the case of login, each test should perform it automatically upon starting up, so that the tester doesn't have to code those steps explicitly for every single test case.
In fact, any time a tester is forced to code the same several test steps explicitly over and over to achieve some goal, those steps should become a fixture available with a single call. Another example is search. A typical search function takes three steps: highlight an input box, type the search text, and click a Search button. The test harness should expose a one-step "Search" feature for test designers to use.
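A fixture like that might be sketched as follows. Everything here is hypothetical: `FakeDriver` stands in for a real UI driver, and the element names are invented for illustration.

```python
# FakeDriver is a hypothetical stand-in for a real UI driver; it just records steps.
class FakeDriver:
    def __init__(self):
        self.log = []

    def focus(self, element):
        self.log.append(f"focus:{element}")

    def type_text(self, text):
        self.log.append(f"type:{text}")

    def click(self, button):
        self.log.append(f"click:{button}")

class Harness:
    """Test harness exposing multi-step actions as single-call fixtures."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        """One call instead of coding the login steps in every test."""
        self.driver.focus("username")
        self.driver.type_text(user)
        self.driver.focus("password")
        self.driver.type_text(password)
        self.driver.click("Log In")

    def search(self, text):
        """The three-step search function, exposed as one step."""
        self.driver.focus("search box")
        self.driver.type_text(text)
        self.driver.click("Search")

harness = Harness(FakeDriver())
harness.search("overdue invoices")
print(harness.driver.log)
```

Each test now calls `harness.search(...)` in one line, and if the search UI changes, only the fixture has to be updated.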
Another objection goes like this: My application requires users to go a long way down a certain single path before they get to the functions we need to test. Starting my tests at arbitrary points in the application is not at all what users see. And, isn't that the point of UI testing?
Imagine if we had a tree with branches instead of a circle with lines. Every test path has to start at the root of the tree and go up the trunk, then down a big branch, then a limb, then a twig, then a leaf. Now, imagine there is a test failure on the trunk path. How many tests fail now? All of them. Imagine a test failure on a big branch. How many tests fail? A significant number.
Maximizing the number of starting locations for your automated tests minimizes the number of automated test failures for any given defect in the application, making failure diagnosis much easier. Take another look at your circle. Every place where one line crosses another is a place where both paths might fail based on a single defect in the application. But, I suspect that your diagram shows that any single defect in the application will affect very few of your lines inside the circle, so you continue to get good information from the paths through the application that are not affected.
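The difference in blast radius is easy to count. In the sketch below (the paths and area names are illustrative assumptions, not from the article), every tree-style test shares the trunk, so one trunk defect fails all of them; with independent starting points, the same single defect fails only the tests whose paths actually cross it:

```python
# Nine tests, each described by the set of application areas its path touches.
# Tree style: every test's path includes the trunk.
tree_tests = {f"T{i}": {"trunk", "branch", f"leaf{i}"} for i in range(1, 10)}

# Circle style: independent starting points, mostly orthogonal paths.
circle_tests = {
    "U1": {"users"}, "U2": {"users", "money"}, "U3": {"users"},
    "M1": {"money"}, "M2": {"money"}, "M3": {"money", "time"},
    "T1": {"time"}, "T2": {"time"}, "T3": {"time"},
}

def failures(tests, broken_area):
    """Count the tests whose path crosses the broken area."""
    return sum(1 for path in tests.values() if broken_area in path)

print(failures(tree_tests, "trunk"))    # → 9: one defect fails every test
print(failures(circle_tests, "money"))  # → 4: only the paths crossing "money"
```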
It is true that designing tests this way does not emulate a user's experience with the application, but no test automation ever emulates a user's experience. I suggest that proper evaluation of a user's experience takes a skilled, dedicated human being. Test automation serves a much different purpose, and the more effective we can make our automation, the better off we will be.
Finally, let's go back to our circle and talk about one more advantage of designing automated tests with various starting points and various paths through the application. Draw another circle, maybe a quarter the size of the first, overlapping the big circle so that part of the new circle lies outside it and part inside.
This new circle represents a new feature. Outside the big circle you have no lines, no coverage at all. Inside the big circle there are probably a few paths through the UI that are affected by the new feature. Hopefully there won't be too many, because these are the existing tests that will have to be changed to accommodate the new feature.
Once again, by having many starting points for our tests and many different paths through the application, we have saved ourselves a lot of work when a new feature comes along.