Focus Your Testing by Understanding How Customers Use Your Product

[article]
Summary:
If you're uncertain about where to focus your testing or what kind of testing you should be doing, look at what your users are telling you. Understanding the analytics of how your customers use your application can help you improve your test efforts. This article explores instances of how this data can inform user interface automation, compatibility testing, and web services tests.

A few years ago, I was leading the test team for a client-server application for a network management system in the telecommunications domain. We had automated all the tests around the APIs and felt that we’d exploited all those interfaces; it was time to start automating through the user interface (UI).

Testing the UI was quite expensive, and automating those tests was even more so. We had to be smart about our approach. However, we had two problems.

Look to User Data to Form Your Test Strategy

First, the technology we chose to implement the client was not compatible with any of the automated testing tools on the market. To automate through the UI, we had to build connectors, which were relatively expensive to create.

Second, our UI had 150 features. Expensive connectors times 150 was a lot to deal with.

Luckily, our product had a user activity log that tracked who performed which actions in the UI. We gathered the previous ninety days’ worth of activity from our top customers and crunched the numbers. It turned out that 93 percent of all customer usage involved just three of the features, and three other features accounted for just 2 percent of the usage. Guess which three features we automated.
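To make that concrete, here is a minimal sketch of the kind of log crunching we did, in Python. The CSV log format, the file name, and the "feature" column are assumptions for illustration, not our product’s actual schema.

```python
# Hypothetical example: count feature usage from a CSV activity log
# (one row per user action) and report each feature's share of the total.
from collections import Counter
import csv

def feature_usage_share(log_path: str) -> list[tuple[str, float]]:
    """Return features sorted by their share of all recorded user actions."""
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["feature"]] += 1
    total = sum(counts.values())
    return [(feature, count / total * 100) for feature, count in counts.most_common()]

if __name__ == "__main__":
    for feature, share in feature_usage_share("activity_log.csv")[:10]:
        print(f"{feature}: {share:.1f}% of all actions")
```

A report like this makes the long tail obvious at a glance and gives every team the same picture of where customers actually spend their time.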

In addition to helping us focus our test automation efforts, understanding how our customers used our application was valuable information for every team. The product management team now knew which features to redesign for ease of use. The program management team could streamline our schedules, knowing that most of the risk lay in those three features. And the marketing team could lie awake at night knowing that a competitor could build a very simple app that did only those three features and perhaps undercut us.

Look at user analytics before building a quality strategy. Automated tests, smoke tests, performance testing, and transaction monitoring all benefit from knowing the top customer features.

Focus Testing on the Configurations That Need It

Later, I moved on to a consumer financial web application whose audience wasn’t particularly tech-savvy and accessed the application through a browser. One key question for our testing was which browser brands and versions we should test.

Looking at the data gave me a headache. If a browser existed, it was being used: Internet Explorer, Chrome, Firefox, Safari, and Opera. We even had a couple of customers accessing their finances through the browser on their game consoles.

Armed with our customer data, we chose as many of the most important browsers as we could reasonably test. We covered about 85 percent of the usage, which was good, but that still left 15 percent of users with untested browser configurations. With a million users, that is 150,000 people running at their own risk. Before asking for more funding to get wider coverage, we wondered if we could influence our customers’ behaviors in the browsers they chose.
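As a rough illustration, here is how that "cover the biggest share first" selection might look in Python. The usage percentages and browser names below are invented for the example, not our actual analytics.

```python
# Hypothetical example: greedily pick the most-used browsers from analytics
# data until a target coverage percentage is reached.
def pick_test_matrix(usage: dict[str, float], target: float = 85.0) -> list[str]:
    """Return the smallest set of top browsers whose usage share meets the target."""
    chosen, covered = [], 0.0
    for browser, share in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= target:
            break
        chosen.append(browser)
        covered += share
    return chosen

usage = {"Chrome": 40.0, "Internet Explorer": 25.0, "Firefox": 12.0,
         "Safari": 9.0, "Edge": 5.0, "Opera": 3.0, "Game consoles": 6.0}
print(pick_test_matrix(usage))  # the browsers to include in the test matrix
```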

My team figured that if they were using the latest browsers, our customers would benefit from a better experience across their entire Internet usage, not just with our product. Using the user-agent string to determine which customers were coming in on a low-priority browser, we displayed a dialog box suggesting they upgrade to a newer browser. After a couple of months, that 15 percent on untested browsers became 10 percent.
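A minimal sketch of that kind of server-side check is below, assuming all we need is a coarse "tested versus untested" decision. The user-agent markers and the supported list are placeholders; real user-agent parsing is messier and is usually delegated to a library.

```python
# Hypothetical example: decide whether to show the "please upgrade your
# browser" dialog based on a simple substring check of the user-agent string.
SUPPORTED_MARKERS = ("Chrome/", "Firefox/", "Edge/")  # illustrative, not our real list

def should_suggest_upgrade(user_agent: str) -> bool:
    """True when the request comes from a browser outside the tested set."""
    return not any(marker in user_agent for marker in SUPPORTED_MARKERS)

if should_suggest_upgrade("Mozilla/5.0 (PlayStation 4) AppleWebKit/601.2"):
    print("Render the 'a newer browser will give you a better experience' dialog")
```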

We then increased the urgency of the message, saying the experience would be better with a supported browser and that proceeding with an unsupported browser was at their own risk. That 10 percent then dropped to less than 5 percent. The vast majority of our customers were now using the browsers we were testing, without our having to expand coverage.

On the other hand, this strategy doesn’t work for mobile apps. Asking customers to change their mobile devices is not really an option, so the app needs to work on as many of the devices our customers use as possible. Still, having the analytics on our customers is very useful. We keep the most popular devices on hand to test with, we use emulators for the second tier, and we cover the balance, the long tail, with a combination of crowd-testing service providers and cloud-based device farms.
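Here is a small sketch of how that tiering might be expressed in code. The thresholds and device names are purely illustrative; any team would tune the cutoffs to its own usage data.

```python
# Hypothetical example: bucket devices into test tiers by their usage share.
def tier_devices(usage: dict[str, float]) -> dict[str, str]:
    """Assign each device to a testing tier based on its share of customer usage."""
    tiers = {}
    for device, share in usage.items():
        if share >= 10.0:
            tiers[device] = "physical device on hand"
        elif share >= 2.0:
            tiers[device] = "emulator"
        else:
            tiers[device] = "crowd testing / cloud device farm"
    return tiers

print(tier_devices({"iPhone 7": 24.0, "Galaxy S7": 15.0, "Moto G4": 4.0, "Nexus 5": 0.8}))
```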

Monitor Customer Usage to Prioritize Tests

Many systems are powered by back-end services, and understanding the customer usage patterns can benefit your testing as well. Knowing the volume of service calls, the configuration of those calls, and the relationship between web services and the important customer flows can all be useful.

As in the UI test automation case, knowing the volume of web service calls can help guide your automation efforts. The volume of each call can be obtained from the production server logs or from the systems that monitor the health of your services. Beyond the calls customers trigger directly, you may find some fundamental services that are called many times by many different customer actions.

One problem my team encountered in testing web services was the many permutations of possible tests we could create. First, we had many different service calls, and the number of services was expanding. Our architects were migrating from an orchestrated service model, where fewer services performed more functions, to a microservices model, where many smaller services each performed a more atomic task. During the transition, we supported both kinds of services.

In addition to those many services, the specifications for the services contained many parameters, some required and some optional. For example, if a service had three required parameters and two optional ones, there were four ways to make the call: with neither optional parameter, with the first only, with the second only, or with both. Every additional optional parameter doubles the number of combinations.
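In other words, the count is two to the power of the number of optional parameters. A short sketch that enumerates the variations for one hypothetical service (the parameter names are made up):

```python
# Hypothetical example: enumerate every call variation for a service as the
# required parameters plus each possible subset of the optional parameters.
from itertools import chain, combinations

def call_variations(required: list[str], optional: list[str]) -> list[tuple[str, ...]]:
    """Return each way to call the service: required params plus any subset of optional ones."""
    subsets = chain.from_iterable(combinations(optional, n) for n in range(len(optional) + 1))
    return [tuple(required) + subset for subset in subsets]

variations = call_variations(["account_id", "date", "amount"], ["currency", "memo"])
print(len(variations))  # 4, i.e. 2 ** len(optional)
for v in variations:
    print(v)
```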

Fully testing the services layer was going to take a lot of code. Once again, it was customer usage analytics to the rescue.

The service calls happening in production are usually monitored with an application performance monitoring product that tracks the performance, error rate, and call configuration of each service call. We used this data to prioritize our tests.

We used the volume of each call to determine the most-used services and made sure the most popular calls were automated first. We also examined the error rates to look for opportunities to improve those services and to further prioritize our tests. Finally, we looked at the structure of the calls to see which optional parameters were actually used in practice, which told us which configurations had to be covered.
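As an illustration, here is a sketch of turning that monitoring data into a priority list. The scoring rule and the sample services are assumptions for the example, not the output of any particular APM product.

```python
# Hypothetical example: rank services for test automation by combining call
# volume and error rate, and note which optional parameters appear in production.
from dataclasses import dataclass

@dataclass
class ServiceStats:
    name: str
    calls_per_day: int
    error_rate: float          # fraction of calls that fail
    observed_params: set[str]  # optional parameters actually seen in production

def prioritize(stats: list[ServiceStats]) -> list[ServiceStats]:
    """Put the busiest and most error-prone services first."""
    return sorted(stats, key=lambda s: s.calls_per_day * (1 + s.error_rate), reverse=True)

sample = [
    ServiceStats("get-balance", 1_200_000, 0.002, {"currency"}),
    ServiceStats("transfer-funds", 80_000, 0.015, {"memo", "currency"}),
    ServiceStats("close-account", 300, 0.0, set()),
]
for s in prioritize(sample):
    print(s.name, "-> cover parameters:", s.observed_params or "required only")
```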

Knowledge Is Power

Our team benefited in three ways from incorporating customer analytics into our testing: User actions informed our UI test automation strategy, browser and device preferences let us focus testing on the most important configurations, and service monitoring let us test according to actual usage rather than theoretical usage. Understanding customer behaviors benefits everyone: your teams, your stakeholders, and your users.

How has customer analytics helped you improve your testing? Please tell us in the comments below.
