In this case study of an award-winning project, Andy Redwood describes how his team used "best shoring" of testing services to reduce costs, reuse assets, and get the best from their test automation tools. In an enterprise-wide transformation process at a large investment bank, his team used available infrastructure, technology, tools, and processes to reduce business risk from software changes with a new automated regression test suite.
STARWEST 2004 - Software Testing Conference
Acceptance testing is a vital and specific form of testing whether you are tasked with rolling out an enterprise application package, releasing a major system enhancement, or developing acceptance tests in an agile development project. In addition, acceptance tests can give some teeth to service level agreements and software acquisition contracts. However, most teams treat acceptance testing as the same activity as system testing, just performed by different staff. That is wrong!
"Automating manual tests was taking too long, and we believed the overhead of maintaining the automated tests would become too high. As the code base evolved and expanded, the performance and value of older automated tests deteriorated noticeably. What to do?"
Outsourcing software testing projects to countries in Asia is a trend that is here to stay. You have a growing number of choices for an outsourcing country in Asia: India, China, Taiwan, Korea, and others. India currently dominates the scene, and both Taiwan and Korea have historically provided excellent quality, though at a higher cost. China, however, is quickly moving to become the leader, with even lower billing rates and a large pool of experienced, educated engineers.
FitNesse is an open source testing tool based on the Wiki Wiki Web and FIT (Framework for Integrated Tests). The Wiki Wiki Web is a collaboration tool in which anyone can create new pages or change existing ones to document or share information. FIT is a framework and tool for creating automated acceptance tests. Combining the two, FitNesse is a Web server-based tool with which teams can easily and collaboratively create documents, specify tests, and run them.
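The core FIT idea is the "column fixture": each row of a table supplies inputs and an expected output, and the fixture checks every row against the system under test. The sketch below is a minimal Python illustration of that idea, not the real FIT API (FIT fixtures are typically written in Java); the function and table names are hypothetical.

```python
def run_column_fixture(table, func):
    """Check each (inputs..., expected) row of a FIT-style table against func.

    Returns a list of booleans, one pass/fail result per row.
    """
    results = []
    for *inputs, expected in table:
        results.append(func(*inputs) == expected)
    return results

# Example acceptance table for integer division (hypothetical data).
table = [
    (10, 2, 5),
    (9, 3, 3),
    (7, 2, 3),   # integer division truncates
]
print(run_column_fixture(table, lambda a, b: a // b))  # [True, True, True]
```

In FitNesse itself, the table lives on a wiki page that anyone on the team can edit, which is what makes the tests a shared, collaborative artifact rather than code owned by one person.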
With a framework built in .NET using the open source application NUnit, database application developers and testers quickly can create a basic set of build verification tests and provide a foundation for a set of more powerful tests. Alan Corwin demonstrates the framework in the context of a fully functional Web site and offers a brief history of how his team developed it to show how they came to introduce automated testing into their development process.
How are you going to develop 1,000 or more automated test cases and run them automatically and unattended night after night? Commercial test automation tools get a bad rap because many organizations never get past the record/playback/fail cycle of frustration. These tools can contribute to your testing needs, but first you must understand what has to be done to make them work for you. Jamie Mitchell outlines different test automation architectures in successful use today and discusses the pros and cons of each.
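One common alternative to brittle record/playback scripts is a data-driven architecture: the test logic lives in a single driver, while the hundreds or thousands of cases are just rows of data that can run unattended. The sketch below is my own illustration of that pattern, not material from the session; the driver, operations, and cases are all assumed for the example.

```python
def driver(action, a, b):
    """Dispatch one test step; a real suite would invoke the app under test."""
    ops = {
        "add": lambda x, y: x + y,
        "mul": lambda x, y: x * y,
    }
    return ops[action](a, b)

# Test cases are pure data: (action, input1, input2, expected result).
cases = [
    ("add", 2, 3, 5),
    ("mul", 4, 5, 20),
    ("add", -1, 1, 0),
]

failures = [c for c in cases if driver(c[0], c[1], c[2]) != c[3]]
print(f"{len(cases) - len(failures)} passed, {len(failures)} failed")
```

Because new cases are added as data rather than recorded scripts, the suite scales to large case counts without multiplying maintenance work when the application's UI changes.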
Most software and test managers keep some metrics to help them, but are yours really doing the job for you? Good test metrics can serve as an early warning mechanism about a project in trouble, help justify much needed assistance for a testing team, or demonstrate the value testers provide. Poor metrics can mislead management or drive a wedge between development and test teams. Steve Walters explains the basic enablers for metrics reporting and discusses three categories of metrics, providing tips for choosing metrics that matter.
Security issues are becoming more and more relevant as testers are called on to find security problems before others exploit them. So, where do you start, and how do you bridge the gap between honest tester and bad-guy hacker? Julian Harty suggests we do so by adopting the mindset and practices of a hacker. In this presentation, gain a unique insight into the ways of hackers and specific technical techniques and tips on how to find security flaws before the hackers do.
Good test designs often require testing many different sets of valid and invalid input parameters, hardware/software environments, and system conditions. This results in a combinatorial explosion of test cases. For example, testing different combinations of possible hardware and software components on a typical PC could involve hundreds or even thousands of possible tests.
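The multiplicative growth is easy to demonstrate: the exhaustive test set is the cross-product of every factor's values. The factors below are assumed for illustration, not taken from the session.

```python
import itertools

# Hypothetical environment factors for a typical PC configuration.
os_list  = ["WinXP", "Win2000", "Linux", "MacOS"]
browsers = ["IE6", "Netscape", "Opera"]
memory   = ["256MB", "512MB", "1GB"]
printers = ["none", "laser", "inkjet"]

# Exhaustive testing means one test per combination.
all_tests = list(itertools.product(os_list, browsers, memory, printers))
print(len(all_tests))  # 4 * 3 * 3 * 3 = 108 combinations
```

Adding just a few more factors, or a few more values per factor, pushes the count into the thousands, which is why combinatorial techniques such as pairwise selection are used to cover all two-way interactions with far fewer tests.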