Developers and testers are under constant pressure to operate more efficiently, cut costs, and deliver on time. Without access to scalable, flexible, and cost-effective computing resources, these challenges are magnified. Brett Goodwin explains how to create scalable dev/test environments in the cloud, and shares best practices for reducing cycle time and decreasing project costs. Learn how scalable, cloud-based data centers can run software without complicated rewrites; enable rapid defect resolution with snapshots and clones; and provide global collaboration for multiple product and release teams. Brett presents a case study of Cushman & Wakefield, the world's largest privately held real estate services firm, which struggled with an on-premises development and testing environment.
The role of the software tester continues to evolve, becoming more complex and more technical. As new methodologies, technologies, and platforms emerge, testers are bombarded with new, so-called "best practices" for doing their jobs. The problem is that testers have heard the same songs with different lyrics for more than twenty years now. Clint Sprauve takes a contrarian's view of testing and the quality assurance industry. He examines some of today's typical testing "best practices" (keyword-driven testing, requirements traceability, the tester's role in agile development, quality reporting, tool expertise, and quality certification programs) and offers alternative ways to view each practice.
The deployment destination for today’s applications is going through its biggest transition since the rise of the application server. Platform-as-a-Service (PaaS) and other cloud service offerings are putting pressure on every stakeholder in the application lifecycle, forcing us to modernize both our skill sets and tool stacks. Mik Kersten describes the key cloud technology trends and demonstrates how the coming wave of cloud-friendly application lifecycle management (ALM) tools and practices will become the defining factor for productivity and ultimate success. Discover the new challenges developers face when deploying and debugging multi-tenanted applications on hosted infrastructures. Learn how continuous integration loops require testers to learn new tools that connect them directly to running applications.
It's like watching a chase scene in a major summer blockbuster movie. You're totally focused on the action when suddenly you realize the background is a blurry mess. Trees, buildings, street signs, and pedestrians on the sidewalk have become one mass of smeared colors. As we increase the rate of new software releases and rely more and more on running web services for both interfaces and apps, we are beginning to see the boundaries blur between development, test, and operations. Ken Johnston pokes some fun at the walls between our disciplines and then dives deep into working examples of organizations that are erasing the lines between Dev, Test, and Ops to create more fluid and innovative businesses. Using his experiences from the Bing search development team at Microsoft, Ken describes the impact of lean thinking, kanban, cloud computing, and continuous deployment on role definitions.
Is your organization releasing applications that target multiple mobile devices, platforms, or browsers? If so, you have faced, or soon will face, the challenge of choosing and setting up a test environment for these devices and platforms. Nat Couture shows how to develop a cost-effective application test environment to mitigate the risks associated with deploying mobile applications. He shares his latest research on mobile devices, mobile platforms, and mobile browser usage, and explains in detail what you need to consider when choosing a test environment. Learn how to select a winning combination of device-specific, platform-specific, and browser-specific simulation, coupled with tests on the actual devices. Build a mobile device testing program that reduces cost, increases coverage, and helps you achieve the level of confidence you need to release mobile applications into production.
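One way to combine usage research with a practical test matrix is to rank device/platform/browser combinations by observed traffic share and test only enough of them to hit a coverage target. The following sketch illustrates that idea; the usage figures and the 90% target are hypothetical, not data from the session.

```python
# Coverage-driven test-matrix selection. The usage-share figures below are
# illustrative assumptions; real numbers would come from your own analytics.

usage_share = {
    # (device class, platform, browser): fraction of observed traffic
    ("phone", "Android", "Chrome"): 0.34,
    ("phone", "iOS", "Safari"): 0.31,
    ("tablet", "iOS", "Safari"): 0.12,
    ("tablet", "Android", "Chrome"): 0.08,
    ("phone", "Android", "Firefox"): 0.06,
    ("phone", "iOS", "Chrome"): 0.05,
    ("tablet", "Android", "Firefox"): 0.04,
}

def select_matrix(shares, target=0.90):
    """Pick combinations, largest share first, until cumulative
    usage coverage meets the target."""
    chosen, covered = [], 0.0
    for combo, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(combo)
        covered += share
    return chosen, covered

matrix, coverage = select_matrix(usage_share)
print(f"{len(matrix)} combinations cover {coverage:.0%} of usage")
```

Combinations that make the cut are candidates for real-device testing; the long tail can fall back to simulation.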
Organizations already use virtualization in the test lab to consolidate underutilized physical systems. So why not also virtualize the costly, overutilized, or completely unavailable elements of the software architecture that pose serious access and data issues for testing? The elements required for realistic end-to-end testing, such as mainframe computers, production systems of record, and computing services hosted by other companies, are often difficult or expensive to access. Rajeev Gupta explains how virtualizing these overutilized systems can make the constraints of capacity, test data, and availability a distant memory. Discover how service virtualization, employed as an adjunct to hardware lab virtualization, eliminates the bottlenecks and data management efforts that stymie many test and development teams.
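At its core, service virtualization substitutes a stand-in that replays recorded request/response pairs for the real, constrained system. This minimal sketch shows the idea; the `VirtualService` class, the request string, and the canned response are illustrative assumptions, not a specific tool's API.

```python
# Minimal service-virtualization sketch: a stand-in that replays canned
# responses for a costly backend (here, a hypothetical mainframe account
# service), so tests never touch the real, capacity-constrained system.

class VirtualService:
    """Replays recorded request/response pairs in place of a live call."""

    def __init__(self):
        self._recordings = {}

    def record(self, request, response):
        # Capture an interaction once, e.g., during a scheduled access window.
        self._recordings[request] = response

    def call(self, request):
        # Replay during testing; fail loudly for unrecorded requests.
        if request not in self._recordings:
            raise LookupError(f"no recording for {request!r}")
        return self._recordings[request]

mainframe = VirtualService()
mainframe.record("GET /accounts/42", {"balance": 1250, "status": "open"})
reply = mainframe.call("GET /accounts/42")
```

Real service-virtualization products add protocol handling, parameterized matching, and latency modeling on top of this record-and-replay core.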
Rather than continually adding more testing, whether manual or automated, how can you assess the readiness of a software product or application for release? By extracting and analyzing the wealth of information available from existing data sources (software metrics, measures of code volatility, and historical data), you can significantly improve release decisions and overall software quality. Susan Kunz shares her experiences using these measures to decide when, and when not, to release software. She describes how to derive quality index measures for risk, maintainability, and architectural integrity through automated static and dynamic code analyses. Find out how to direct limited testing resources to error-prone code and the code that really matters in a system under test. Take back new tools to make your test efforts more efficient, and learn how to apply adaptive analysis to evaluate software quality.
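A composite quality index of this kind is typically a weighted combination of normalized metric scores. The sketch below illustrates the shape of such a calculation; the metric names, scales, weights, and threshold are illustrative assumptions, not Susan Kunz's published formula.

```python
# Hedged sketch of a composite quality index: normalized 0-1 metric scores
# (complexity, volatility, defect history) combined via weights. All names,
# values, and weights here are hypothetical.

def quality_index(metrics, weights):
    """Weighted average of 0-1 metric scores; higher means lower risk."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

module = {
    "low_complexity": 0.6,        # 1.0 = simple code
    "low_volatility": 0.4,        # 1.0 = rarely changed recently
    "defect_free_history": 0.8,   # 1.0 = no past defects
}
weights = {"low_complexity": 2, "low_volatility": 3, "defect_free_history": 5}

score = quality_index(module, weights)
# Modules falling below a chosen threshold get the limited test effort first.
needs_focus = score < 0.7
```

Ranking modules by such a score is one way to direct scarce testing resources toward the error-prone code the abstract describes.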
Even applications that have gone through rigorous testing in QA tend to have serious performance problems in production. Nearly every CIO or production manager has horror stories of applications that went live and failed. Yet with so much on the line, why are we in a constant firefighting mode? When confronted with new problems, we have to start with the basics and ask, "Is the problem in the application or in the infrastructure? How can I narrow it down fast?" Production tuning takes your good QA practices to the next level, and helps you get out of firefighting mode.
Outsourcing arrangements are established on the basis of a contractual partnership, with both parties having a vested interest in the success of the relationship. The provider and the customer can view success differently, however, making objective, quantifiable service level metrics instrumental to the success of the contract. Learn how to properly identify and develop the service level metrics required to support both business and technical deliverables.
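An objective service level metric reduces to a measurement both parties can compute the same way. As a concrete illustration, here is a sketch of one common metric, the fraction of transactions meeting an agreed response-time target; the sample data and the 400 ms / 90% contract terms are hypothetical.

```python
# Sketch of an objective service level metric: the fraction of transactions
# completing within an agreed response-time target. Sample times and the
# contracted target below are illustrative assumptions.

def sla_compliance(response_times_ms, target_ms):
    """Fraction of transactions at or under the agreed response time."""
    within = sum(1 for t in response_times_ms if t <= target_ms)
    return within / len(response_times_ms)

samples = [120, 340, 95, 610, 180, 220, 450, 130, 90, 300]
compliance = sla_compliance(samples, target_ms=400)

# Compare against the contracted level, e.g., "90% of transactions < 400 ms".
meets_contract = compliance >= 0.90
```

Because the definition is mechanical, provider and customer can each run it over the same transaction log and agree on whether the service level was met.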
More and more organizations are committed to establishing an effective measurement program. Big or small, measurement takes time and resources. The overriding key to measurement program success is accuracy. Organizations with established metrics programs typically institutionalize an audit activity to maximize their investment. Explore the current approaches being used to audit measurement activity. Learn why auditing is so important, and what and when to audit within your organization.