Conference Presentations

Achieving Software Quality Through Test Automation Process Integration

With increasing demands for high-quality software delivered in shorter development cycles, teams need to go beyond simply running tests at the end of the development cycle. Instead, teams must approach development with quality as their primary objective. Brian Bryson shows you how to integrate automated testing tools with best practices to implement an effective quality assurance process from the beginning of, and throughout, the development lifecycle.

Brian Bryson, Rational Software Corporation
Measuring the Effectiveness of Automated Functional Testing

Many struggle to accurately judge the value, success, and return on investment of test automation. In this session, Ross Collard helps you identify which areas and aspects of testing, both manual and automated, provide fruitful opportunities for improvement. You'll have the opportunity to compare the effectiveness of your organization's test automation with industry norms and best practices. You'll also see how other organizations gather, interpret, and apply these metrics. Find out what's worked and what hasn't.

Ross Collard, Collard & Company
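
For context on the return-on-investment question raised above, one common back-of-the-envelope formulation compares the cost of building and maintaining automated tests against the manual execution effort they replace over repeated test cycles. The helper and figures below are hypothetical illustrations of that arithmetic, not metrics from Collard's session.

    # Hypothetical illustration: a simple ROI comparison for test automation.
    # All numbers are made up; a real study must use measured effort data.

    def automation_roi(build_cost, maintain_cost_per_cycle,
                       automated_run_cost_per_cycle,
                       manual_run_cost_per_cycle, cycles):
        """Return (net_benefit, roi_ratio) of automating a test suite
        versus running it manually for a given number of test cycles."""
        automated_total = build_cost + cycles * (maintain_cost_per_cycle
                                                 + automated_run_cost_per_cycle)
        manual_total = cycles * manual_run_cost_per_cycle
        net_benefit = manual_total - automated_total
        roi_ratio = net_benefit / automated_total
        return net_benefit, roi_ratio

    # Example: 200 hours to build, 10 hours upkeep and 2 hours execution per
    # cycle, versus 60 hours of manual execution per cycle, over 12 cycles.
    benefit, roi = automation_roi(200, 10, 2, 60, 12)
    print(f"Net benefit: {benefit} hours, ROI: {roi:.0%}")
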
Software Test Automation Spring 2002: Test Automation on a Shoestring: Doing More with Less

Want to automate your tests but don't have the budget for big-league tools? Elisabeth Hendrickson offers case studies where test automation was accomplished with simple tools for small budgets. She delivers practical advice for creating the automation you need from the tools you already have or can easily get your hands on. Fact is, everything you need to get started is probably right in your "kitchen drawer."

Elisabeth Hendrickson, Quality Tree Software, Inc.
Lessons Learned in Test Automation

Can test automation really advance your testing mission? The answer to that question is a resounding "That depends!" But to make it happen you have to provide value to development and find new ways of testing. Bret Pettichord offers lessons from the trenches in building powerful test suites. He shares his experiences as well as those of other test automators to help you avoid the pitfalls others have already stumbled into.

Bret Pettichord, Pettichord Consulting
Taking Test Automation Mainstream

By now, most test organizations have implemented at least one test automation tool. However, the success of these tools is by no means guaranteed. Why is it that these products often fail to meet their potential? What can managers do to increase a tool's return on investment? Andrew Pollner offers ways to ensure that tools support rather than hinder your testing. He discusses a number of common-but-flawed approaches to automation, then explains how to change them.

Andrew Pollner, ALP International Corporation
Software Test Automation Spring 2002: Test Automation With Action Words: A Practical Experience

Action Word Testing. This concept illuminates testing as an action, a process, an art. Learn how Action Word Testing can be applied to deal with critical test issues such as lack of functional knowledge of the system under test, instability of the design during test development, and automation of 100% of the functional or technical tests. Hans Buwalda demonstrates Action Word Testing with a case study of a financial exchange that introduced a new electronic trading system (approximately 15,000 tests). In this example, automating the entire test suite was essential, but it was difficult to achieve.

Hans Buwalda, LogiGear
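
As a rough illustration of the action-word idea (not Buwalda's exchange case study), tests can be written as rows of action words plus arguments, with a thin interpreter mapping each word to an implementation. The action names and handlers below are hypothetical.

    # Hypothetical sketch of action-word (keyword-driven) test execution.
    # Test authors write rows of (action word, arguments); automation
    # engineers implement one handler per action word, keeping the tests
    # readable even while the system's design is still changing.

    ACTIONS = {}

    def action(name):
        """Register a handler function for an action word."""
        def register(fn):
            ACTIONS[name] = fn
            return fn
        return register

    @action("enter order")
    def enter_order(instrument, side, quantity, price):
        print(f"placing {side} {quantity} {instrument} @ {price}")

    @action("check order status")
    def check_order_status(order_id, expected):
        print(f"verifying order {order_id} is {expected}")

    def run(test_rows):
        """Execute a test described as rows of action words and arguments."""
        for word, *args in test_rows:
            ACTIONS[word](*args)

    # A test case expressed purely in action words (e.g., kept in a spreadsheet):
    run([
        ("enter order", "ACME", "buy", 100, 25.50),
        ("check order status", "1", "filled"),
    ])
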
A Practical Approach to Early-Cycle QA Test Automation

Everyone knows that a large body of automated unit tests for classes, subsystems, and frameworks adds to overall code quality. However, the "burden" of unit test automation frequently falls squarely on the shoulders of developers because of the perception that only a developer can write a unit test. Since QA personnel typically test from the user interface, and usually have to wait until later in the development cycle for that interface to be available, they're often left to scramble at the end of the cycle to get their testing done. Michael Silverstein presents a model for early-cycle collaboration between developers and testers in which testers augment the developers' unit testing activities without additional process overhead.

Michael Silverstein, SilverMark, Inc.
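
One way to picture this kind of collaboration: a developer supplies a class and its first unit tests, and a tester adds boundary and error cases to the same suite without waiting for a user interface. The function and test cases below are hypothetical examples of that division of labor, not material from the session.

    # Hypothetical example: testers extending a developer's unit test suite
    # early in the cycle, before any user interface exists.
    import unittest

    def split_name(full_name):
        """Split 'First Last' into (first, last); developer-written code."""
        first, _, last = full_name.strip().partition(" ")
        return first, last

    class DeveloperTests(unittest.TestCase):
        # Happy-path case the developer wrote alongside the code.
        def test_simple_name(self):
            self.assertEqual(split_name("Ada Lovelace"), ("Ada", "Lovelace"))

    class TesterAddedTests(unittest.TestCase):
        # Boundary cases contributed by QA during the same iteration.
        def test_single_word_name(self):
            self.assertEqual(split_name("Prince"), ("Prince", ""))

        def test_surrounding_whitespace(self):
            self.assertEqual(split_name("  Ada Lovelace  "), ("Ada", "Lovelace"))

    if __name__ == "__main__":
        unittest.main()
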
Identifying Testing Priorities Through Risk Analysis

It's impossible to test everything, even in the most trivial of systems. Tight schedules and shortages of trained testing personnel exacerbate this problem; so do changing priorities, feature creep, and loss of resources. In many companies, test professionals begin their work either on whichever components they encounter first or on the parts they're most familiar with. Unfortunately, these approaches may result in the delivery of a system in which the most critical components remain untested, or, at best, are tested so late in the lifecycle that there is no time left to fix the problems found. All of this adds to the risk of a project. One way to overcome these challenges is to employ risk analysis. Rick Craig demonstrates the basics of a usable process for assigning testing priorities based on relative risk.

Rick Craig, Software Quality Engineering
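
A common way to operationalize risk-based prioritization is to rate each component's likelihood of failure and impact of failure, multiply the two, and test in descending order of the product. The scoring scale, components, and ratings below are hypothetical and only illustrate the general approach, not Craig's specific process.

    # Hypothetical sketch of risk-based test prioritization:
    # risk = likelihood of failure x impact of failure (each rated 1-5).

    components = [
        # (component, likelihood, impact)
        ("payment processing", 3, 5),
        ("user login",         2, 5),
        ("report formatting",  4, 2),
        ("help screens",       2, 1),
    ]

    def risk(entry):
        _, likelihood, impact = entry
        return likelihood * impact

    # Test the highest-risk components first.
    for name, likelihood, impact in sorted(components, key=risk, reverse=True):
        print(f"{name:20s} risk = {likelihood * impact}")
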
Testing Your Software's Requirements

Many testing organizations focus primarily on executable code, but that's not the only thing you can test. For instance, have you ever considered testing your software requirements? When you test only code, you face some big disadvantages, not least that design defects often aren't fixable because they demand too much effort, too late in the release cycle. In fact, it's difficult even to report some requirements defects once the developers have committed to a design strategy. But if you test your requirements early in the game, you can discover defects before they're cast into designs and code, potentially saving your organization huge rework costs.

Brian Lawrence, Coyote Valley Software
The W-Model: Strengthening the Bond Between Development and Test

In software development, thirty to forty percent of all activities are testing-related. That is why it is critical to launch test activities at the beginning of the project rather than after coding is completed. Based on the V-model, this paper describes a model that shows how the testing tasks relate to the tasks in the development model. This testing model, the W-model, further clarifies the priority of the tasks and the dependencies between the development and testing activities. Though just as simple as the V-model, the W-model makes clear the importance of testing and the ordering of the individual testing activities. It also clarifies that testing and debugging are not the same thing.

Andreas Spillner, Hochschule Bremen and Karin Vosseberg, Specialists
