Web analytics can help you deduce, reduce, and prioritize your testing efforts. Learn how to gather and use qualitative and quantitative information about your users and the risks that can threaten your software's success.
One of the clearest definitions of web analytics is “the analysis of qualitative and quantitative data from your website and the competition or industry.” The data helps drive continuous improvements for the end-user experience, and what is being measured—website interaction (e.g., search, form completion, loyalty programs, download registration), social media interaction, in-bound and out-bound advertising, and performance—varies as much as the available tools. In the end, it’s all about sales revenue, transaction volume, customer stickiness, page views, length of stay, and advertising revenue.
According to Wikipedia, risk-based testing (RBT) is “a type of software testing that prioritizes the tests of features and functions based on the risk of their failure—a function of their importance and likelihood or impact of failure.” A question often asked of test owners is “What business risks are being mitigated and measured with RBT in place?” A good QA team can qualify the risk, but a great QA team can quantify it. If you are not measuring, then you are not managing, and if you are not managing, then you are not in control of your risks. Validate that your company has an overall understanding of risk identification and management before you introduce RBT.
Web-analytics tools try to provide insight into the success of marketing and sales campaigns as well as new functionality or feature sets, so it is most common for sales or marketing to fund and own them. If they don’t own them, then the product owner usually does.
Testing teams can use web-analytics statistics to quantify the need for both increased depth of testing and robust test suites (functional and regression) that mimic end-user patterns and behaviors, based upon a prioritized view of the risk of insufficient testing. These statistics measure the risk of untested or minimally tested code in the production environment, and when the time required to adequately test the application exceeds the time available, they let the team tie the number of tests directly to business risk.
Clarifying business requirements and goals is critical to identifying the correct analytical tool. From a test perspective, there should be few surprises about business requirements, criticality, and goals, as all tests should take these elements into consideration.
Web Analytics 101
Most users have encountered the omnipresent hit or visitor counters on websites. This is web analytics in its simplest form, providing:
- Information about unique visitors, total visitors, page views, hits, etc.
- Statistics about the most-accessed pages, workflows, and images
- Statistics about the average (minimum, maximum, and percentile) number of visitors, concurrency over a period of time, and the busiest time of day or week for traffic
- Details about average error rates across various pages and types of errors
- Various types of users and user profiles
- The most- and least-common user workflows and exit pages
- Traffic patterns, trends, and average session duration of users
- Statistics related to geographical distribution of users
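Even these simplest counters can be derived directly from raw access records. The following is a minimal sketch, with hypothetical, hard-coded records standing in for a parsed server log or analytics export, showing how total hits, unique visitors, top pages, and the busiest hour fall out of a few aggregations:

```python
from collections import Counter

# Hypothetical access-log records: (visitor_id, page, hour_of_day).
# In practice these would be parsed from server logs or an analytics export.
records = [
    ("v1", "/home", 9), ("v1", "/search", 9), ("v2", "/home", 14),
    ("v3", "/checkout", 14), ("v2", "/home", 15), ("v1", "/home", 9),
]

page_views = len(records)                          # total hits
unique_visitors = len({r[0] for r in records})     # distinct visitor ids
top_pages = Counter(r[1] for r in records).most_common(2)
busiest_hour = Counter(r[2] for r in records).most_common(1)[0][0]

print(page_views, unique_visitors, top_pages, busiest_hour)
# → 6 3 [('/home', 4), ('/search', 1)] 9
```

Real analytics tools do far more (sessionization, geolocation, funnel analysis), but the underlying data model is essentially this: events aggregated by visitor, page, and time.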
Modern web-analytics providers such as Omniture, Tealeaf, Google Analytics, and Webtrends primarily utilize onsite analytics. Some combine onsite and offsite analytics for even more detailed metrics collection and analysis. The following are some common web-analytics metrics:
- Unique browsers and browser types
- Number of new visitors
- Visits, unique visitors, and unique authenticated visitors
- Average number of visits per visitor
- Page views, page exits, and site exits
- Average number of page views per visit
- Clicks, impressions, or hits
- Average pages viewed per visitor
- Ratio of new to returning visitors
- Visits per month, quarter, or week
- Conversion of non-subscribers to subscribers
- Page views per visit by the active subscriber base
- Visit or session duration
- Average subscription length
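Most of these metrics are simple ratios over visit records. As an illustration (the visit summaries below are invented, not from any real tool), average page views per visit, the new-to-returning ratio, and a conversion rate can be computed like this:

```python
# Hypothetical visit summaries, as might be exported from an analytics tool.
visits = [
    {"visitor": "a", "pages": 5, "new": True,  "subscribed": False},
    {"visitor": "b", "pages": 2, "new": False, "subscribed": True},
    {"visitor": "a", "pages": 3, "new": False, "subscribed": False},
    {"visitor": "c", "pages": 8, "new": True,  "subscribed": True},
]

avg_pages_per_visit = sum(v["pages"] for v in visits) / len(visits)
new = sum(v["new"] for v in visits)           # booleans sum as 0/1
returning = len(visits) - new
new_to_returning = new / returning            # ratio of new to returning visits
conversion_rate = sum(v["subscribed"] for v in visits) / len(visits)

print(avg_pages_per_visit, new_to_returning, conversion_rate)
# → 4.5 1.0 0.5
```

The point for testers is that these figures are objective and reproducible, which is what makes them usable as quantified inputs to a risk model rather than gut feel.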
Sometimes the features of a tool (collecting data, paths, and statistics) are available but not enabled from a test perspective, so you might have to negotiate with the team to expand the tool's current capabilities. Most web-analytics tools enable the correlation of these metrics with various business and financial figures and statistics. This detailed information is extremely useful for proactive web-application optimization and predictability, so business and technical stakeholders are increasingly utilizing web-analytics systems for managing, measuring, and analyzing the end-user experience and risk to the business. With this information, the test team can make truly risk-based decisions about how robust testing really needs to be, and once the data is quantified, management can make risk-based decisions as well. Naturally, without a test-case-management tool, this cannot be accomplished effectively or efficiently.
The cold, hard facts about website functionality—that is, interaction and performance—make up the quantified data that articulates risk to the business for tested and untested code in production, and all test teams should know what is tested and untested. The complex, web-based delivery model significantly increases the efforts required for testing. With increasing time-to-market pressures and shrinking release schedules, a successful web-application testing strategy boils down to prioritizing the efforts based on quantified risk to the business.
Web Application Testing Challenges
In addition to the routine challenges associated with testing, web-application testing introduces additional nuances, including a large number of application-usage workflows. The hyper-contextual nature of web applications lends itself to a potentially large number of workflows for even simple scenarios. Testing all potential workflow permutations and combinations can be fairly complex and costly, so learning from web analytics where to draw the line of diminishing returns adds value. A diverse user base introduces additional complexities in the overall test approach. Not knowing user types or any information about them compounds the complexity, as does not knowing hardware and software configurations, networks or ISPs, connectivity speeds, and browser types. The testing organization must fully understand how these unknowns affect the complexity, number, and permutations of test cases, as well as the effort and cost of attaining the level of quality that the business owner expects. The testing team needs to ensure that all of these dimensions are accounted for as part of the overall RBT process.
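The combinatorial growth is easy to see with a small sketch. The dimensions and values below are illustrative assumptions, but the arithmetic is the point: three modest dimensions already yield dozens of configurations, while analytics data might reveal that only a handful carry most of the traffic:

```python
from itertools import product

# Illustrative configuration dimensions; real values would come from analytics.
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
operating_systems = ["Windows", "macOS", "Linux"]
connection_speeds = ["broadband", "mobile-3g", "mobile-4g"]

full_matrix = list(product(browsers, operating_systems, connection_speeds))
print(len(full_matrix))  # 4 * 3 * 3 = 36 combinations from just three dimensions

# Analytics might show most traffic uses only a few configurations,
# letting the team test those exhaustively and merely sample the rest.
observed_top = {("Chrome", "Windows", "broadband"),
                ("Safari", "macOS", "broadband")}
prioritized = [c for c in full_matrix if c in observed_top]
print(len(prioritized))  # 2 configurations cover the heaviest traffic
```

Add mobile device models and app versions as further dimensions and the full matrix grows multiplicatively, which is exactly why analytics-informed pruning matters.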
A common criticism of regression testing is that it takes too much time and that, as application functionality is enhanced, the regression suite keeps growing. A test team that cannot quantify the number of tests needed for adequate coverage (risk mitigation) will have difficulty justifying the duration and effort. When it comes to exploratory testing, analytical data helps ensure that the team is not wasting valuable time on the least-travelled paths. “Information is power” is an understatement in terms of the value that analytics provide to the business and technology teams. For testing, this information can help validate use cases or business scenarios, test-case-development techniques, and test-case coverage and depth across exploratory, regression, and user-acceptance testing, all under the RBT umbrella.
Prioritizing the business workflows and test scenarios based on analytics metrics should also be compared against the product owners’ view of high-priority requirements; there can be surprises about what users most often do and how. Testers benefit from the ability to identify, tag, and execute the most common paths and configurations, increasing effectiveness and confidence that test coverage adequately reflects users’ utilization patterns. One cannot test every combination or path (the law of diminishing returns applies), but ensuring that tests cover the most important and most-used features under the most common configurations should lead to greater success for the business. All of the above applies to testing mobile applications as well: if you add to the test bed all the combinations of mobile devices, models, and operating systems without some level of analytics about your user base, your costs, effort, and risks will increase exponentially.
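One simple way to reconcile the two views is to combine analytics-derived traffic share with the product owner's priority into a single ranking. The workflows, numbers, and weighting formula below are all hypothetical assumptions, a sketch of the idea rather than a prescribed model:

```python
# Hypothetical workflow stats: analytics-derived visit share and the
# product owner's priority (1 = highest). Both inputs are illustrative.
workflows = {
    "search":   {"traffic_share": 0.50, "owner_priority": 2},
    "checkout": {"traffic_share": 0.15, "owner_priority": 1},
    "profile":  {"traffic_share": 0.05, "owner_priority": 3},
    "browse":   {"traffic_share": 0.30, "owner_priority": 2},
}

def risk_score(w):
    # Weight observed usage by business priority: heavily used,
    # high-priority workflows get tested first. The exact weighting
    # is an assumption; a real team would tune it to its risk model.
    return w["traffic_share"] / w["owner_priority"]

ranked = sorted(workflows, key=lambda k: risk_score(workflows[k]), reverse=True)
print(ranked)
# → ['search', 'checkout', 'browse', 'profile']
```

Note how the ranking surfaces a surprise of exactly the kind the text describes: search outranks checkout despite checkout's top business priority, because users exercise the search path far more often.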
A bad user experience can negatively affect revenue in both the short and the long term. Missed opportunities and the loss of future revenue streams can be a deadly combination for the survival of the business. As the world continues to embrace increasingly feature-rich, web-based products, technology delivery organizations must learn how to mitigate risk in pursuit of their goals and objectives. Through the use of analytics tools and RBT, testing becomes more efficient and effective, leading to a positive experience for the end user.
As a tester, make sure that the information used to confirm that a satisfactory level of risk has been mitigated is quantified and measurable; analytics support the RBT strategy and plan in exactly this manner. When testing costs are challenged or time constraints limit the ability to deliver a high-quality product, you can turn to web analytics and clearly quantify the risk of not testing. At the end of the day, effective analysis of web-analytics metrics can provide deep insight into the overall web-application-usage landscape. If effectively leveraged, this can be a gold mine of information for deducing, reducing, and prioritizing overall testing efforts from a risk-based approach.