River or Lake? The Water Theory of Software

[article]
Summary:

Heraclitus once said, "It is impossible to step into the same river twice." This is true for software, too. Software is constantly changing, and there are several theories on how these changes are introduced into production. Linda Hayes describes some of the theories and offers ways to navigate the seas of change.

Everyone knows—usually from painful experience—that software is constantly changing. But how those changes make it into production can vary widely. In some cases, changes are introduced into production as a constant flow, at any time of the day or night. In other cases, changes are strictly versioned and released into production at predefined intervals. It's similar to the differences between a river and a lake: A river is constantly flowing into the sea, but a lake gathers water until the dam is opened.

Which of these describes your environment, and what does it mean to testing?

Whitewater
The river theory is usually applied to critical internal applications that run on mainframes or other centralized platforms. The criticality of these applications demands rapid responsiveness, and their centrality means changes do not have to be propagated to multiple platforms. You might think this approach would be reserved for serious errors that must be fixed immediately, but I have found that it is also used for enhancements in the name of time to market.

Whitewater rafters know that running the same stretch of rapids doesn't mean you won't encounter new obstacles. Likewise, the challenge for testers is obvious: If a change can be introduced at any moment, how can you rely on the predictability you need to verify expected results? How do you know that a test that ran successfully in the morning will get the same result in the afternoon? How can you perform regression testing when there is no known, stable set of functionality?

The benefit to a business is agility. Customer needs or market drivers can draw an almost instant response. The shorter the cycle time, the faster the time to market, and defenders of this practice pride themselves on their flexibility and responsiveness. But, like whitewater rafting, this approach is both exciting and dangerous. Rolling in one change can cause breakage elsewhere, and the rapid fix for that breakage can cause even more issues. In fact, I know of several companies where the cascading effect of errors created by changes left production fundamentally unstable for months, wreaking havoc throughout the enterprise. In one case, it actually resulted in federal regulatory oversight for more than a year when customers complained that their account balances were not accurate. For this reason, testers must learn to navigate these waters and anticipate the potential changes.
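One practical defense is to make every test run self-documenting about what it actually exercised. The sketch below is a minimal illustration, not a prescription from this article: it records a build identifier alongside each result, so a test that passed in the morning and failed in the afternoon can be traced to an intervening change rather than blamed on the test itself. The "myapp --version" command is a hypothetical stand-in; substitute whatever your environment offers, such as a version endpoint or a build stamp file.

    import datetime
    import subprocess

    def get_build_id():
        """Ask the application under test for its build identifier.

        Hypothetical: assumes the application ships a command line that
        prints its build (e.g., "myapp --version"). Replace with whatever
        your environment actually provides.
        """
        try:
            out = subprocess.run(
                ["myapp", "--version"], capture_output=True, text=True, check=True
            )
            return out.stdout.strip()
        except (OSError, subprocess.CalledProcessError):
            return "unknown"

    def run_test(name, test_fn):
        """Run one test and record which build it actually ran against."""
        build = get_build_id()
        started = datetime.datetime.now().isoformat(timespec="seconds")
        passed = bool(test_fn())
        record = {"test": name, "build": build, "started": started, "passed": passed}
        print(record)
        return record

    if __name__ == "__main__":
        # A trivial placeholder standing in for a real regression check.
        run_test("login_smoke_test", lambda: True)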

Lake Placid
The lake theory predominates in commercial and distributed applications. Because changes must be widely disseminated, releases are scheduled at regular intervals. Naturally, software vendors follow this practice because their customers won't tolerate constant updates, and installing internal applications with desktop components on hundreds (if not thousands) of machines is too cumbersome to do constantly. In both cases, the logistics discourage continuous churning. So, at specified intervals the dam is opened and the changes are released.

Testing benefits are self-evident: Batching functionality into a single release simplifies functional and regression testing because there is a measure of stability. The downside is that the volume of testing is greater because there are more changes per release, but this is offset by the ability to perform regression testing and ensure that the changes don't have unintended effects.

For a business, the lake theory introduces both structure and stricture: structure because projects require more planning and coordination, and stricture because response times are usually longer. Of course, the fix for a critical error can still breach the dam and be rushed into production, but the very stability created by this approach reduces the probability of errors reaching production in the first place.


About the author

Linda Hayes

Linda G. Hayes is a founder of Worksoft, Inc., developer of next-generation test automation solutions. Linda is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine's People to Watch and one of the Top 40 Under 40 by the Dallas Business Journal. She is a regular columnist and contributor to StickyMinds.com and Better Software magazine, as well as a columnist for Computerworld and Datamation, author of the Automated Testing Handbook, and co-editor, with Alka Jarvis, of Dare To Be Excellent, on best practices in the software industry. You can contact Linda at lhayes@worksoft.com.
