development lifecycles

Conference Presentations

Zero to Agile in Three+ Years: It's a Marathon

Agile transformations in large organizations can have mixed results, and they often fail miserably when the goal is simply to become an "agile organization." Sean Buck shares the story of The Capital Group Companies, a 7,000-person organization that took a value-based approach to adoption. Rather than attempting a big-bang implementation, Sean's company and its agile transformation team planned for the long "run": a marathon. Sean explains why organizations that proceed too quickly or take a tools-focused approach usually see their teams slip back into old ways after initially impressive results. George Schlitz, who participated throughout the transformation, shares specific approaches and tools you should consider for your own organization's adoption plans. He describes the staged model they employed for organizational transformation and how their strategies changed during each stage.

Sean Buck, The Capital Group Companies Inc
Automation Maturity: Planning Your Next Step in Test Automation

Is your organization failing to achieve the test automation benefits and ROI you expected? Are you spending too much effort rewriting scripts that don't hold up over time? Does your test plan look more like "random acts of automation"? Ayal Cohen describes test automation maturity levels and shares key points on how to determine your test organization's current maturity. He then offers guidance on how and when to move to the next level. Defining an efficient automation framework, coupled with a stepped-up maturity methodology, will help you achieve great success with automation. Ultimately, you can increase your test coverage dramatically, shrink your timelines, and better support your company's business goals. As Ayal explains, it's an ongoing process of addressing your goals, challenges, and current maturity level, while laying the foundation for future needs as you grow.
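One common step up the maturity ladder the abstract alludes to is replacing brittle record-and-replay scripts with an automation framework. A minimal page-object sketch, with hypothetical names and a stand-in driver so it runs anywhere, shows the idea: UI locators live in one class, so a changed login form means editing one place, not every script.

```python
# Page-object sketch (hypothetical names, illustrative only).

class FakeDriver:
    """Stand-in for a real WebDriver so the sketch is self-contained."""
    def __init__(self):
        self.filled = {}
        self.clicked = []

    def fill(self, locator, value):
        self.filled[locator] = value

    def click(self, locator):
        self.clicked.append(locator)


class LoginPage:
    # Locators are centralized here; test scripts never mention them.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
```

When the login form changes, only `LoginPage` is edited; the hundreds of scripts that call `login()` hold up over time.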

Ayal Cohen, HP
Defect Analysis: The Foundation of Process Improvement

Do you have a process in place to analyze defects, identify the defect categories and common pitfalls, and correlate the results to recommended corrective actions? Forced to get more done with less, organizations increasingly find themselves in need of an effective defect analysis process. David Oddis describes a systematic defect analysis process to optimize your efforts and enable higher quality software development. David's approach promotes collaboration in the post-deployment retrospectives performed by the development and test teams. Join David as he facilitates an open conversation and provides guidance and tips via a real-world walkthrough of the strategy and process he employs to analyze defects. Learn how these findings can lead to opportunities for process improvements in your requirements, design, development, test, and environment domains.
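The core mechanics of such an analysis, tallying defects by category and mapping the dominant category to a corrective action, can be sketched in a few lines. The data and the category-to-action mapping below are illustrative, not David's actual taxonomy.

```python
from collections import Counter

# Toy defect log (illustrative data only).
defects = [
    {"id": 1, "category": "requirements"},
    {"id": 2, "category": "code"},
    {"id": 3, "category": "code"},
    {"id": 4, "category": "environment"},
    {"id": 5, "category": "code"},
]

# Hypothetical mapping of defect category -> recommended corrective action.
ACTIONS = {
    "requirements": "add acceptance-criteria reviews",
    "code": "expand unit-test coverage and code inspection",
    "environment": "automate environment provisioning",
}

# Tally defects per category and surface the biggest bucket first.
counts = Counter(d["category"] for d in defects)
top_category, top_count = counts.most_common(1)[0]
recommendation = ACTIONS[top_category]
```

Run after each release retrospective, the trend in `counts` shows whether earlier corrective actions actually moved the numbers.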

David Oddis, The College Board
Performance Engineering for MASSIVE Systems

Dealing with a single system is challenging enough, but the game changes dramatically on a multi-system, distributed platform. MASSIVE platforms can consist of more than fifty distributed systems and components, integrated to process millions of transactions per day, from millions of users, while handling hundreds of terabytes of data. A single component or system that fails to scale under this load can cost hundreds of thousands, even millions, of dollars in lost revenue per disruption. Mark Lustig explains how to integrate performance engineering across the entire development lifecycle. The world of MASSIVE platforms requires a disciplined approach to building, measuring, and ensuring system scalability, performance, and throughput.
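Turning a daily transaction volume into the sustained and peak rates a platform must absorb is the first piece of arithmetic in any such engineering effort. The numbers below are assumed for illustration, including the peak factor.

```python
# Back-of-the-envelope capacity arithmetic (assumed numbers).
transactions_per_day = 10_000_000
seconds_per_day = 24 * 60 * 60          # 86,400

average_tps = transactions_per_day / seconds_per_day

# Assumption: the peak hour carries 4x the average rate.
peak_factor = 4
peak_tps = average_tps * peak_factor
```

Roughly 116 transactions per second on average becomes roughly 463 at the assumed peak; every one of the fifty-plus components must be measured against the peak number, not the average.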

Mark Lustig, Collaborative Consulting
Get It All Done: A Story of Personal Productivity

You procrastinate. You worry that you may be making the wrong choice. You spend time on the irrelevant. You don't select the most important tasks from your many to-dos. You can't get things done on time. Join James Martin as he shares his experience with analysis paralysis, procrastination, and failure to deliver what others expect. After a look at why we procrastinate, James turns to his personal story of a "bubble" of super productivity in which he delivered more relevant work in a two-week period than he believed possible. Along with the techniques and tips you would expect from a productivity-boosting experience report, James explains the state of mind that will help you distinguish important from trivial tasks, reduce waste in your work, and discover the most important thing to do next. You can get it all done in record time, and with less angst than you ever dreamed possible.

James Martin, RiverGlide
Improving Software Quality Through Static Analysis

You've implemented unit testing, pair programming, and code inspection in your development process, but defects still escape despite your best efforts. Furthermore, you discover latent defects in previously error-free software as you make changes. The problem isn't your quality efforts; it's your approach. Michael Portwood shows how practical static code analysis techniques can complement your traditional testing approaches by addressing nagging quality and design defects. He focuses on subtle but common coding issues that lead to defects, code complexity and testability issues, and a wide range of architectural issues that limit product lifecycles, issues missed by empirical testing. Introducing static analysis into your development process is easy to accomplish.
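To make concrete what "subtle but common coding issues missed by empirical testing" means, here is a tiny static check, a sketch rather than a real analyzer, that walks a module's syntax tree and flags functions with mutable default arguments, a classic latent defect that tests rarely expose.

```python
import ast

# Source under analysis (illustrative): one safe function, one defective one.
SOURCE = '''
def good(items=None):
    pass

def bad(items=[]):
    pass
'''

def find_mutable_defaults(source):
    """Flag function defs whose default values are mutable literals."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                # A list/dict/set literal is shared across all calls.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(node.name)
    return findings

flagged = find_mutable_defaults(SOURCE)
```

No test executes `bad()` enough times to reveal the shared list, but the static check finds it without running anything.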

Michael Portwood, The Nielsen Company
Building Secure Applications

The Internet is full of insecure applications that cost organizations money and time, and damage their reputations when their systems are compromised. We need to build secure applications as never before, but most developers are not now, and never will be, security specialists. By incorporating security controls into the frameworks used to create applications, Tom Stiehm asserts, any organization can imbue security into its applications. Building security into a framework allows highly specialized security experts to create components that maximize your application security profile while reducing the need for your development teams to have specialized application security knowledge. Learn to pick the right places in your framework to insert security controls and then enforce their use. Join Tom to explore real-world security controls he's applied to commonly used application frameworks.
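A minimal sketch of a framework-level control, using a hypothetical decorator-based framework rather than any library Tom discusses: the authorization check lives in one place written by a security specialist, and application developers get enforcement simply by annotating their handlers.

```python
import functools

class Forbidden(Exception):
    """Raised by the framework when an authorization check fails."""

def requires_role(role):
    """Framework-supplied control: reject requests lacking the given role."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(request, *args, **kwargs):
            if role not in request.get("roles", ()):
                raise Forbidden(f"missing role: {role}")
            return handler(request, *args, **kwargs)
        return wrapper
    return decorator

# Application code: no security logic, just a declaration.
@requires_role("admin")
def delete_account(request, account_id):
    return f"deleted {account_id}"

result = delete_account({"roles": ["admin"]}, 42)

try:
    delete_account({"roles": ["viewer"]}, 42)
    blocked = False
except Forbidden:
    blocked = True
```

Because the check is enforced by the framework, a developer cannot forget it; the worst mistake available is omitting the decorator, which a static check or code review can catch.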

Thomas Stiehm, Coveros, Inc.
Better Software Conference East 2011: Writing High Quality Code

Quality in delivered software is intangible and very different from quality in physical goods. External attributes of quality software, such as freedom from defects and ease of maintenance, are reflections of the code's internal qualities. When classes and methods are cohesive, non-redundant, well-encapsulated, assertive, and explicitly coupled, they are less prone to mistakes and far easier to debug, test, and maintain. David Bernstein asserts that paying attention to code quality helps us focus on the key principles, patterns, and practices used by expert developers. If you don't pay attention to critical code quality attributes, iterative development practices can quickly degrade code into a maintenance nightmare. Join David and your peers to take a deep dive into the code qualities that make software more maintainable and less bug-friendly.
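Two of the qualities named above, encapsulation and assertiveness, can be shown in a small before/after sketch (illustrative classes, not David's examples): the encapsulated version hides its state behind methods and asserts its invariant, so callers cannot corrupt it.

```python
class LeakyAccount:
    """Before: state is public, so any caller can set it to nonsense."""
    def __init__(self):
        self.balance = 0   # nothing stops balance = -9999


class EncapsulatedAccount:
    """After: state is hidden, and the invariant is enforced assertively."""
    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")  # assertive check
        self._balance += amount

    @property
    def balance(self):
        return self._balance


acct = EncapsulatedAccount()
acct.deposit(50)

try:
    acct.deposit(-10)
    rejected = False
except ValueError:
    rejected = True
```

The encapsulated class is also easier to test: its behavior is reachable only through two methods, so the invariant can be verified exhaustively at that narrow interface.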

David Bernstein, Techniques of Design
Risk Analysis for Test Managers

Risks are endemic in every step of every software project. A well-established key to project success is to proactively identify, understand, and mitigate these risks. However, risk management is not the sole domain of the project manager, particularly with regard to product quality, where test managers and testers can significantly influence the project outcome. Julie Gardiner demonstrates how to evaluate and mitigate product risk from a testing perspective. She describes different approaches to risk management, the benefits of each, and how to use them. With an understanding and appreciation of product risk analysis, the test manager and team can assess which testing approaches and techniques to apply to reduce these risks. Julie also demonstrates an easy way to report progress to business management and stakeholders using product risk as the basis.
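One widely used form of product risk analysis, though not necessarily the specific approach Julie presents, scores each feature as likelihood times impact and tests the highest-scoring items first. A minimal sketch with illustrative numbers:

```python
# Product-risk scoring sketch (illustrative features and ratings, 1-5 scale).
features = [
    {"name": "payments", "likelihood": 4, "impact": 5},
    {"name": "search",   "likelihood": 3, "impact": 2},
    {"name": "profile",  "likelihood": 2, "impact": 2},
]

# Risk = likelihood of failure x impact of that failure.
for f in features:
    f["risk"] = f["likelihood"] * f["impact"]

# Test the riskiest features first; cut from the bottom if time runs out.
test_order = sorted(features, key=lambda f: f["risk"], reverse=True)
priorities = [f["name"] for f in test_order]
```

The same scores support stakeholder reporting: "all features with risk above 10 are fully tested" is a statement business management can act on.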

Julie Gardiner, Grove Consultants
Using Technical Debt to Predict Product Value

Overly complex code? Duplicate code? Inherent coupling? Been there, done that. Beyond these specific code issues, you may believe that something is inherently wrong with your project: increased pressure, decreased velocity, those broken functions that just never get fixed. Although there are no magic bullets for these problems, Emad Georgy shares how he has applied a novel technical debt model as a predictor of overall product value. Emad has used this model at strategic and business levels to bring focus to the issue of technical debt and to obtain resources and prioritization to address it. You can also use the technical debt model to identify anti-patterns, from architecture, process, and project perspectives, in your organization.
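The general shape of such a model, though the weights and metrics here are hypothetical rather than Emad's, is a weighted index over measurable code issues that can be tracked release over release and used in prioritization conversations with the business.

```python
# Toy technical-debt index (hypothetical weights and metrics).
WEIGHTS = {
    "duplicate_blocks": 3,
    "overly_complex_functions": 5,
    "tightly_coupled_modules": 4,
}

def debt_score(metrics):
    """Weighted sum of issue counts; higher means more debt."""
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

# Issue counts measured at two consecutive releases (illustrative).
release_1 = {"duplicate_blocks": 10, "overly_complex_functions": 4,
             "tightly_coupled_modules": 2}
release_2 = {"duplicate_blocks": 14, "overly_complex_functions": 6,
             "tightly_coupled_modules": 3}

trend_worsening = debt_score(release_2) > debt_score(release_1)
```

A rising score alongside falling velocity is exactly the kind of correlation that turns "something feels wrong" into a resourcing argument.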

Emad Georgy, Experian

StickyMinds is a TechWell community.