In this week's column, Dale Perry talks about a Web testing paradox. On one hand, users have been trained to expect Web sites to be unreliable. On the other hand, Web development often gets less testing priority than other software projects. While random users may not be surprised by a Web transaction going awry, they will not wait ten seconds to give your Web site a second chance. That is the great advantage users on the Internet have--they can easily go somewhere else. That fact should give pause to project leaders as they prioritize Web testing. Read on to see how Dale ponders the current state of the Internet.
On a flight home from teaching a Web-testing seminar I was confronted by an interesting question from the woman sitting next to me. Strapped in with a stranger for a few hours, we did what most people do. We began chatting about where we were going, where we were from, and what we do for a living. When I told her I consulted on Quality Assurance and Testing and that I recently taught a Web-testing class, she asked me, "Why should I be concerned whether or not my Web site has been tested?" This caught me completely off guard and I had to pause to consider her question.
When I arrived home, my stepfather asked me about creating a Web site for his new business. He said the ISP he used had built a Web site for him, but that they had gotten the name of the company, the phone number, and other critical information wrong.
The question of quality and testing was brought to my attention yet again on the following flight out to teach another Web seminar. While reading the Atlantic Monthly I noticed an article by Jonathan G. S. Koppell arguing that the concept of "Cyberspace" makes no real sense. These three encounters made me realize one thing: most people have no idea what the Web is or how it affects them.
Most people get their view of the Web from the media. I am amazed at how often I hear the words "Cyberspace," "the Internet," "the Web," and "the Information Superhighway" used as if (as Mr. Koppell notes) they named an actual place or a single, stable construct. The Web is far from a single place or even a single thing. The Internet, as readers of this column probably know, is thousands of separate communications networks in hundreds of places and countries. The Web is only part of the overall Internet, which carries far more than Web sites and surfing: email, chat rooms, newsgroups, file transfers, and other traffic.
The very concept of an Information Superhighway is misleading. Granted, there are some very fast connections and pathways within the Internet, but these are the exception, not the rule. The reality is more like a whole road system: some expressways, some major roads, and a lot of slow-moving minor roads. The only thing that makes "the Internet" what it is, is a common process for navigating the roadways: TCP/IP (Transmission Control Protocol/Internet Protocol). You can think of TCP/IP as the rules of the road.
Within these general, overall rules there are sub-rules, and the Web comprises just one class of them. It is very important for those who develop Web-based systems to be sure those systems work within the rules. The very nature of the Web changes testing and validation. Information technology and functionality are no longer the driving force; in the Web world the driving force is information engineering (think of library science).
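To make the layering concrete, here is a minimal sketch (my own illustration in Python, not part of the original column) of the Web's sub-rules riding on top of TCP/IP: the socket call follows the TCP/IP "rules of the road," while the request text follows HTTP, the Web's own convention. The host name is just a placeholder.

```python
import socket

def build_http_request(host, path="/"):
    # HTTP is one set of "sub-rules" layered on top of TCP/IP:
    # a plain-text request format that both ends agree on.
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            "\r\n").encode("ascii")

def fetch_status_line(host, port=80, timeout=5):
    # TCP/IP handles the roadways (connection, routing, delivery);
    # HTTP supplies the conversation that travels over them.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_http_request(host))
        return s.recv(1024).split(b"\r\n", 1)[0]
```

The point of the sketch is the separation of concerns: `socket.create_connection` works for any TCP conversation at all, while `build_http_request` encodes the one set of sub-rules that makes that conversation a Web request.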
In traditional systems you do not have to worry about whether a user can locate your application--it appears on their system when they log on. On the Internet, however, before a person can use your Web site they must first find it. On the Web you no longer have a fixed, trained set of users who focus on a small set of functions within a finite environment. The Web is open to all comers. As such, testing of basic interactions must be much more detailed and complete to avoid potential disasters as untrained people learn their way around.
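A first step toward the kind of detailed testing described above is a smoke test that simply checks whether a site's key pages respond at all. The sketch below is my own illustration using Python's standard urllib; the base URL and paths are placeholders you would replace with your own site's pages.

```python
from urllib.request import urlopen
from urllib.error import URLError, HTTPError
from urllib.parse import urljoin

def check_page(url, timeout=5):
    # Return the HTTP status code for url, or None if unreachable.
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code          # server answered, but with an error
    except URLError:
        return None              # no answer at all

def smoke_test(base_url, paths, fetch=check_page):
    # Map each path to its status so broken pages stand out.
    return {path: fetch(urljoin(base_url, path)) for path in paths}
```

A usage sketch: `smoke_test("http://example.com/", ["/", "/contact.html"])` would report the status of each page, flagging anything a first-time visitor could stumble over before they give up and go elsewhere. The `fetch` parameter also lets the check itself be tested without touching the network.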