STARWEST 2002 - Software Testing Conference

PRESENTATIONS

"Excel-erating" Test Status Reporting

As a tester, you're often asked how far along your testing effort is and when it will actually be done. This is one of the most difficult and nerve-wracking questions to answer, especially when a project has just begun or is nearing completion. A tool is needed to gather the information and answer this question effectively, but many companies cannot afford to purchase or implement a complex commercial tool. Fortunately, a solution is available in commercial spreadsheet products, particularly Microsoft Excel.

Earl Burba and Jim Hazen, SysTest Labs
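
The core of any such spreadsheet is a simple roll-up of raw test-case status into a few completion percentages. As a hedged illustration only (the test-case IDs, statuses, and figures below are invented, not the presenters' workbook), here is that arithmetic sketched in Python:

    # Invented test-case list; in Excel this would be a worksheet column,
    # and the roll-up below would be COUNTIF-style formulas.
    test_cases = [
        ("TC-001", "passed"),
        ("TC-002", "passed"),
        ("TC-003", "failed"),
        ("TC-004", "not run"),
        ("TC-005", "not run"),
    ]

    total = len(test_cases)
    executed = sum(1 for _, status in test_cases if status != "not run")
    passed = sum(1 for _, status in test_cases if status == "passed")

    # The numbers a manager usually asks for: how far along, and how healthy.
    print(f"Executed: {executed}/{total} ({executed / total:.0%})")
    print(f"Pass rate of executed tests: {passed / executed:.0%}")
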
A Custom Automation Framework and Test Case Management Solution

Interested in seeing a real test automation solution in action? Automated testing is an exciting thing to be part of, but automating the automation is even better. This session presents a system in which the test case management and automation framework is set in motion automatically as soon as configuration management builds the software for a project whose testing has been automated. This means thousands of preprogrammed test cases can be run on multiple machines day and night.

Darin Magoffin, Todd Hovorka, and Rich Wolkins, PowerQuest Corporation Inc
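
The general pattern the abstract describes is a build event that triggers distribution of a large test-case inventory across a pool of machines. A minimal sketch of that dispatch step, with invented build IDs, machine names, and test-case names (this is not the presenters' actual framework):

    def dispatch(test_cases, machines):
        """Partition test cases across machines round-robin."""
        assignments = {machine: [] for machine in machines}
        for index, test_case in enumerate(test_cases):
            assignments[machines[index % len(machines)]].append(test_case)
        return assignments

    # A new build arriving from configuration management is the trigger.
    build_id = "build-1024"
    machines = ["lab-pc-01", "lab-pc-02", "lab-pc-03"]
    test_cases = [f"TC-{n:04d}" for n in range(1, 3001)]  # thousands of preprogrammed cases

    for machine, cases in dispatch(test_cases, machines).items():
        print(f"{build_id}: {machine} gets {len(cases)} test cases")
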
A Missing Link: Project Management in the Testing Organization

The emerging discipline of project management within the information technology arena can be a major step toward helping your testing organization accomplish its stated goals. That's because effective project management combines the best practices of quality control and quality assurance with the basic principles of a sound project strategy. This means working toward goals in an organized way, using a road map that integrates test project management into the organization.

Karol Vastrick, Federal Express

A Quick Lesson in Test Estimation

You're a software tester who's just been given a new project. You understand what's important to the customers, users, and other stakeholders in the new application, so designing and implementing your tests are no problem. The difficulty arises when your boss asks when testing will be completed. Just how do you develop realistic and practical estimates of test completion? More importantly, how can you intelligently respond when someone suggests cutting the test schedule?

Rex Black, Rex Black Consulting Services, Inc.
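
One way to answer both questions is to build the estimate from a few explicit inputs, so that a proposed schedule cut can be traced back to the input it changes. The figures and the rule-of-thumb formula below are purely illustrative, not the speaker's method:

    # Invented inputs for a back-of-the-envelope test schedule estimate.
    planned_tests = 400            # test cases to execute at least once
    tests_per_person_day = 20      # observed execution rate
    expected_defects = 120         # predicted from historical defect density
    retests_per_defect = 1.5       # average confirmation/regression re-runs per fix
    testers = 3

    execution_days = planned_tests / tests_per_person_day
    retest_days = expected_defects * retests_per_defect / tests_per_person_day
    total_person_days = execution_days + retest_days

    print(f"Effort: {total_person_days:.0f} person-days")
    print(f"Schedule: about {total_person_days / testers:.0f} working days with {testers} testers")
    # A schedule cut has to reduce one of the inputs (fewer tests, fewer
    # retest cycles, or more testers), which makes the trade-off explicit.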

A Test Automation Harness for the PocketPC

The emergence of the handheld platform is an exciting opportunity to reapply quality and usability paradigms. It gives us the chance to establish new, industrywide quality benchmarks for handheld applications that may propel society beyond the traditional human-machine interface. Handheld-based computing has its potential and its limits. Moving from desktop-centered quality assurance to handheld-centered applications will bring changes that affect software testing techniques. We must be prepared.

Ravindra Velhal, Intel Corporation

A World-Class Infrastructure for Performance Testing

The IBM Global Testing Organization's performance test infrastructure is solely responsible for certifying the performance of all IBM enterprise Lotus Notes and Web applications before their deployment to end users. Naomi Mitsumori describes this infrastructure and provides insights into designing the appropriate test environment, selecting performance and monitoring tools, and adopting the management style necessary for success.

Naomi Mitsumori, IBM Global Services

Adventures in Session-Based Testing

Many projects' first test approaches are characterized by uncontrolled, ad hoc testing. Session-based testing can help you manage unscripted, reactive testing. By using sessions to control and record the work done by the test team, you can support and give impetus to ongoing learning and team improvement. You'll be introduced to simple tools and metrics to support test sessions, illustrated by real-world examples from two case studies.

James Lyndsay, Workroom Productions
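
Typical session records capture a charter, the time spent, how that time split between testing, bug investigation, and setup, and the bugs found; simple metrics fall straight out of them. A small illustrative sketch with invented sessions (not the tools from the case studies):

    # Each record: (charter, minutes, test %, bug %, setup %, bugs found).
    sessions = [
        ("Explore login error handling",    90, 60, 30, 10, 4),
        ("Import of malformed CSV files",   60, 70, 20, 10, 2),
        ("Concurrent edits to one record", 120, 50, 40, 10, 6),
    ]

    total_minutes = sum(s[1] for s in sessions)
    total_bugs = sum(s[5] for s in sessions)
    test_minutes = sum(s[1] * s[2] / 100 for s in sessions)

    print(f"{len(sessions)} sessions, {total_minutes / 60:.1f} hours, {total_bugs} bugs")
    print(f"Share of time spent actually testing: {test_minutes / total_minutes:.0%}")
    print(f"Bugs per session hour: {total_bugs / (total_minutes / 60):.1f}")
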
Applications-Centric Testing of System-Level Components

Testing system-level components such as the Java API for XML-Based Remote Procedure Calls (JAX-RPC) is a challenging task. Employing use-case techniques from the Unified Modeling Language (UML), Vinay Pai describes a novel approach for testing such components. His team developed use cases for a realistic application that would use the components, then derived test case designs from those use cases. The resulting test suite uncovered more than 200 defects in eight months and exceeded code coverage goals by almost 50 percent.

Vinay Pai and Arun Gupta, Sun Microsystems
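
The basic move is mechanical: the main success scenario of each use case yields one test design, and each alternate or exception flow yields another. A toy sketch of that derivation (the use case and flows are invented, not the JAX-RPC suite):

    # Invented use case for an application that exercises a remote-call API.
    use_case = {
        "name": "Submit purchase order via remote call",
        "main_flow": [
            "Client marshals the order into a request",
            "Service validates and stores the order",
            "Service returns a confirmation number",
        ],
        "alternate_flows": [
            "Order fails validation",
            "Service unavailable; client retries",
        ],
    }

    def derive_test_cases(uc):
        """One test design for the main scenario, one per alternate flow."""
        tests = [f"{uc['name']}: main success scenario"]
        tests += [f"{uc['name']}: {alt}" for alt in uc["alternate_flows"]]
        return tests

    for test in derive_test_cases(use_case):
        print(test)
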
Applying Orthogonal Defect Classification Principles to Software Testing

Test escape analysis and corrective action tracking (TEACAT) is a method for collecting and using information about the causes of test escapes in order to prevent customer-found defects and improve internal test, development, and release processes. The TEACAT approach provides testers and test managers with the primary causes of defect escapes from the organization into the field.

Suzanne Garner, Cisco Systems Inc
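
At its simplest, escape analysis is a tally: each customer-found defect is assigned the cause that let it slip past testing, and the counts point to where the process needs correction. A minimal sketch with invented defects and cause categories (not the TEACAT tooling itself):

    from collections import Counter

    # Invented field defects, each tagged with the cause of its test escape.
    field_defects = [
        ("DEF-101", "missing test case"),
        ("DEF-102", "test environment mismatch"),
        ("DEF-103", "missing test case"),
        ("DEF-104", "requirement not testable as written"),
        ("DEF-105", "missing test case"),
    ]

    causes = Counter(cause for _, cause in field_defects)
    print("Primary causes of escapes to the field:")
    for cause, count in causes.most_common():
        print(f"  {cause}: {count}")
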
Automated Testing for Programmable Logic Control Systems

Developing real-time, automated testing for mission-critical programmable logic controller (PLC)-based control systems has been a challenge for many scientists and engineers. Some have elected to use customized software and hardware as a solution, but that can be expensive and time-consuming to develop.

Reginald Howard, Advanced Systems Integration Inc. and Jon Hawkins, Alliance Technical Solutions
