Conference Presentations

High Speed Testing Cycles: An Approach to Accelerated Delivery of Bug-Free Software

Large companies often have multiple software development projects running at the same time. Getting enough infrastructure in place to test these projects concurrently, however, can be very difficult. A High Speed Testing Methodology (called "Testing Trains") has been developed to perform system/acceptance testing for large-scale projects in two-week periods. Learn how Testing Trains can succeed in delivering bug-free software on schedule for your organization.

Daniel Navarro, Banco Nacional de Mexico

Design and Test of Large-Scale Systems

Increasing complexity and functionality of digital systems--coupled with time-to-market constraints--pose quality challenges. Strategies often include a mix of new development with the integration of pre-existing components from multiple sources. Ann Miller presents some of the software engineering and software management lessons learned from eight years on a large commercial satellite program, as well as several years on military satellite programs. This presentation focuses on the planned evolution of large-scale systems from the design and build of smaller components based on an end-to-end system backbone.

Ann Miller, University of Missouri-Rolla

Performance Evaluation and Measurement of Enterprise Applications

Today's large-scale enterprise applications are Web-enabled and complex in nature, and many users experience performance problems from day one. Performance evaluation and measurement through extensive testing is the only practical way to surface and address these issues before deployment. Learn how to tackle performance and capacity issues with the appropriate testing strategy and a scalable infrastructure and architecture.

Rakesh Radhakrishnan, Sun Microsystems

What's That Supposed to Do? The Archeology of Legacy Systems

In testing utopia, all software products submitted for testing have thorough and comprehensive documentation describing how every program function should work. On planet Earth, however, test engineers usually have to make do under less-than-ideal circumstances. It's not uncommon for test engineers to be asked to verify the functionality of a critical legacy system which has no documented requirements whatsoever. While there are many reasons this can happen, the result is the same: You assume the role of an archeologist sifting through the layers of clues to reconstruct the specifications. Patricia Ensworth gives you instructions and tools so you'll be ready to roll up your sleeves and dig.

Patricia Ensworth, Moody's Investors Service

Solid Software: Is it Rocket Science?

While we can't guarantee that our software will never fail, we can take serious steps to reduce the risk. The toughest kind of system to build involves safety-critical software, where reliability requirements are extremely strict and failure puts lives in jeopardy. Shari Lawrence Pfleeger looks at what "solid software" means, and explores ways we can achieve it. She examines solid software within the context of the proposed National Missile Defense System.

Shari Lawrence Pfleeger, Systems/Software, Inc.

Establishing a Telecommunication Test Automation System

Building an environment to successfully test wireless intelligent network peripherals presents an array of complex problems to resolve. The target environment integrates various SS7 protocols, a proprietary protocol, and a voice recognition subsystem--and requires a controlled and synchronized test environment. Learn how a test automation approach gives the software engineer control over the peripheral interfaces and provides for testing the entire call flow sequence, its initiation, and the resulting message traffic. Discover how this approach provides for function testing as well as scalability for automated performance, load, and stress testing.

Greg Clower, Software Development Technologies

Space Shuttle GPCF: A Retrospective Look

This paper is based on a recent experience implementing and testing a large new software capability, the GPC Payload Command Filter (GPCF), in a maintenance organization that had not dealt with a large change in some time. While the task was completed successfully, it came at a cost in schedule slips and personal angst. The purpose of this paper is to help the verifier learn from what was done right and what was done wrong, so as to avoid the pitfalls and emulate the successes. Specifically, the objective is to provide guidance on how to successfully test a large new software capability using verification processes that have specialized over time to produce extremely effective results for relatively small changes.

Alan Ogletree, United Space Alliance

Delusions of Grandeur: Is Your Web Site Really Scalable?

This presentation relates a software test lab's real-world experiences performing load testing for scalability on three Web sites. In addition to methodology, it covers the tools employed, client expectations before launch, and how the findings from the testing were applied to help clients correctly scale their sites. Learn why this type of testing is the most effective way to validate design and hardware architecture, plus identify potholes before they end up on the information superhighway.
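To make the basic shape of a load test concrete, here is a minimal sketch of the idea: concurrent virtual users each issue requests and record latencies, which are then summarized. (This is an illustration only, not the lab's actual tooling; a real test would drive a live web server, while `handle_request` below is a self-contained stand-in.)

```python
import statistics
import threading
import time

def handle_request():
    """Stand-in for a real HTTP request to the site under test."""
    time.sleep(0.001)  # simulate server work

latencies = []
lock = threading.Lock()

def virtual_user(requests):
    """One simulated user issuing a fixed number of requests."""
    for _ in range(requests):
        start = time.perf_counter()
        handle_request()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

# 10 concurrent users, 5 requests each = 50 total requests.
threads = [threading.Thread(target=virtual_user, args=(5,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(latencies)} requests, median latency "
      f"{statistics.median(latencies) * 1000:.1f} ms")
```

Watching how the latency distribution changes as the number of virtual users grows is what reveals whether the design and hardware architecture actually scale.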

Jim Hazen, SysTest Labs, LLC

Enterprise Test Engine Suite Technology

Many companies invest heavily in test automation in order to verify the functionality of their complex client/server and Web applications, only to find that the anticipated cost savings and higher reliability remain elusively out of reach. This paper is a guide to creating table-driven test automation with off-the-shelf utilities and commercially available GUI testing tools. It demonstrates the benefits of the table-driven approach and presents various engines, utilities, and documents that enhance or support this third-generation testing architecture, which I call Enterprise Test Engine Suite Technology (E-TEST).
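As a minimal illustration of the table-driven idea (the row format, action names, and dispatch helper below are hypothetical, not E-TEST's actual design), each test step is a data row interpreted by a small engine rather than a line of script code:

```python
# Each test step is a row (action, target, value); a dispatcher maps
# action names to handler functions. A real engine would drive a GUI
# tool; here the "application" is a plain dictionary so the sketch is
# self-contained.

def do_enter(app, target, value):
    app[target] = value  # simulate typing into a field

def do_verify(app, target, value):
    assert app.get(target) == value, (
        f"{target}: {app.get(target)!r} != {value!r}")

ACTIONS = {"enter": do_enter, "verify": do_verify}

def run_table(app, table):
    """Execute each (action, target, value) row against the application."""
    for action, target, value in table:
        ACTIONS[action](app, target, value)

# A test case expressed as data, not code -- the core of the approach:
login_test = [
    ("enter",  "username", "jdoe"),
    ("enter",  "password", "secret"),
    ("verify", "username", "jdoe"),
]

app_state = {}
run_table(app_state, login_test)
print("table executed:", app_state)
```

The payoff is that testers can add or modify cases by editing tables, while the engine and its action handlers are maintained in one place.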

James Schaefer, Capital One

Evolution of Automated Testing for Enterprise Systems

The key to accelerating test automation in any project is a well-rounded, cohesive team that can marry its business knowledge with its technical expertise. This session is an in-depth case study of the evolution of automated testing at the BNSF Railroad. From record-and-playback to database-driven robust test scripts, this session will take you through each step of the $24 billion corporation's efforts to implement test automation.
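A brief sketch of what "database-driven" means in this context (the schema and the function under test are illustrative, not BNSF's actual implementation): test inputs and expected results live in a database table, so new cases are added as rows rather than as new script code.

```python
import sqlite3

def shipping_rate(weight_lbs):
    """Toy function under test: flat rate plus a per-pound charge."""
    return 5.00 + 0.10 * weight_lbs

# Test cases stored as data; an in-memory SQLite table stands in for
# a shared test-case database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rate_cases (weight REAL, expected REAL)")
conn.executemany("INSERT INTO rate_cases VALUES (?, ?)",
                 [(10, 6.00), (100, 15.00), (0, 5.00)])

# The script itself stays stable: it just iterates over the rows.
failures = 0
for weight, expected in conn.execute(
        "SELECT weight, expected FROM rate_cases"):
    actual = shipping_rate(weight)
    if abs(actual - expected) > 1e-9:
        failures += 1
        print(f"FAIL weight={weight}: got {actual}, expected {expected}")

print(f"{failures} failures")
```

Compared with record-and-playback, this separation of test data from test logic is what makes the scripts robust as the application and its test inventory grow.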

Cherie Coles, BNSF Railroad
