Automation

Conference Presentations

Agile DevOps West Fishbowl Discussion: How Much Automation Is Enough?

These days, everyone knows some automation is a necessity. More usually feels better. But when are you done? Or when do you stop for now? How can you tell if adding automation is no longer helping, or is even distracting from the real issues? Because the answer is "It depends," you'll want to listen to the wisdom of others who are on the same journey. In a fishbowl discussion, the audience members sit in a circle of chairs in the middle of the room. Several brave souls will fill all but one of the chairs in the "fishbowl." When you want to join as a speaker, you enter the fishbowl and sit in the empty chair, and one of the other speakers will voluntarily leave so that one chair is always available for a new speaker. You'll hear ideas and experiences from experts and peers alike. Come join Ryan Ripley as he facilitates this exciting conversation.

Ryan Ripley
Agile DevOps West 5 Common Types of Mobile App Bugs Found Using AI

Across mobile apps, the current error rate is believed to be around 15 percent. With a thousand new apps launching daily and a constantly growing number of mobile devices, there’s a need for a scalable way to create and maintain high-quality apps without hassle. Thanks to artificial intelligence, exploratory testing is advancing and proving able to detect mobile bugs at scale. Join Sandy Park as she examines the five most common types of errors found through more than ten thousand hours of AI-powered testing, with real examples. She will introduce the challenges of each type and explain why heuristic or rule-based approaches were not able to address the issues efficiently. She will cover topics such as broken-element identification and Z-order detection for layered views. Finally, Sandy will share deep learning methods, such as R-CNN and LSTM, that enhance coverage and reliability.

Sandy Park
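
Sandy’s point about the limits of rule-based detection is easier to appreciate with a concrete rule in hand. Below is a minimal sketch of a heuristic check for one bug class, broken image elements; it assumes Selenium and a placeholder URL, neither of which comes from the talk. Rules like this are cheap to write but blind to purely visual defects such as Z-order problems, which is where the deep learning methods come in.

    # Heuristic broken-element check: a loaded image has a nonzero natural
    # width, so zero means the element failed to render. (Illustrative only.)
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")  # placeholder page under test
        broken = [
            img.get_attribute("src")
            for img in driver.find_elements(By.TAG_NAME, "img")
            if driver.execute_script("return arguments[0].naturalWidth;", img) == 0
        ]
        assert not broken, f"Broken images detected: {broken}"
    finally:
        driver.quit()
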
Agile DevOps West How to Avoid Automation Framework Sinkholes

Test automation frameworks are constantly plagued by runaway costs and huge codebases that become maintenance nightmares. Successful automation frameworks are best defined by the “keep it simple, stupid” philosophy: KISS! Test automation needs to be only as complicated as the most complex variation in the system. Laura Keaton will show how to streamline the development and maintenance of automation by integrating it with development, operations, and project management. When KISS is applied properly, maintenance and cost stay relatively manageable. Join Laura to learn how to simplify branch logic, data variations, versioning, and other framework nightmares using some consulting tricks of the trade.

Laura Keaton
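
In that KISS spirit, here is one minimal sketch of keeping data variations out of branch logic with table-driven parametrization; it uses pytest, and the calculate_discount function is invented for illustration rather than taken from Laura’s talk.

    import pytest

    def calculate_discount(tier: str, amount: float) -> float:
        """Toy system under test: tiered discount calculation."""
        rates = {"basic": 0.0, "silver": 0.05, "gold": 0.10}
        return amount * (1 - rates[tier])

    # Each variation is one row of data, not another branch in the test.
    @pytest.mark.parametrize(
        "tier, amount, expected",
        [
            ("basic", 100.0, 100.0),
            ("silver", 100.0, 95.0),
            ("gold", 100.0, 90.0),
        ],
    )
    def test_discount(tier, amount, expected):
        assert calculate_discount(tier, amount) == expected
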
Agile DevOps West Hunting Sasquatch: Finding Intermittent Issues Using Periodic Automation

In pop culture, Sasquatch (aka Bigfoot) is an ape-like creature infrequently seen in the Pacific Northwest of North America—if he even exists. In the software realm, we have our own version of Sasquatch: that irritating, elusive "intermittent issue." Traditionally, we run automated tests on event boundaries, like when we have a successful deployment; we look for problems when we think they may have been introduced. Logically, points of change are when we expect to have injected issues, so we tend to only look for issues then. This approach alone, however, limits opportunities to reproduce intermittent issues. If we also run our automation periodically, we have additional opportunities to reproduce these types of issues; we call this approach periodic automation.

Paul Grizzaffi
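
As a rough illustration of the idea (not Paul’s actual harness), the sketch below re-runs an existing pytest suite on a timer rather than only after deployments, giving intermittent issues more chances to reproduce. In practice you would more likely schedule this through cron or your CI system.

    import subprocess
    import time

    INTERVAL_SECONDS = 30 * 60  # every 30 minutes; tune to your environment

    while True:
        # Assumes a pytest suite under ./tests; swap in whatever runner you use.
        result = subprocess.run(["pytest", "tests/"], capture_output=True, text=True)
        if result.returncode != 0:
            # Log everything: with an intermittent issue, this record may be
            # the only evidence the failure ever happened.
            with open("sasquatch_sightings.log", "a") as log:
                log.write(f"--- failure at {time.ctime()} ---\n{result.stdout}\n")
        time.sleep(INTERVAL_SECONDS)
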
Agile DevOps West What's That Smell? Tidying Up Our Test Code

We are often reminded by those experienced in writing test automation that code is code, meaning test code should be written with the same care and rigor as production code. However, many people who write test code have never written production code, so it’s not entirely clear to them what that standard looks like. And even those who do write production code find that there are design patterns and code smells unique to test code. Join Angie Jones as she presents a smelly test automation code base littered with bad coding practices and walks through each of the smells. She'll discuss why each one is considered a violation and demonstrate a cleaner approach.

Angie Jones
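
To make “code is code” concrete, here is one small, invented before-and-after in Python (the fixture and page-object names are hypothetical, not examples from Angie’s talk):

    # Smelly: blind sleeps, UI plumbing, and brittle assertions inside the test.
    def test_checkout_smelly(driver):
        driver.get("https://shop.example.com")  # hypothetical app
        driver.find_element("id", "item-42").click()
        import time; time.sleep(5)              # smell: hard-coded wait
        driver.find_element("id", "cart").click()
        assert "1 item" in driver.page_source   # smell: asserting on raw HTML

    # Cleaner: intent-revealing steps behind a page object that owns the
    # locators and the explicit waits.
    def test_checkout_clean(shop):
        shop.add_to_cart("item-42")
        assert shop.cart_count() == 1
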
STAREAST Testing in Production

How do you know your feature is working perfectly in production? And if something breaks in production, how will you know? Will you wait for a user to report it to you? What do you do when your staging test results don’t reflect current production behavior? To test proactively rather than reactively, test in production! Testing in production gives you more accurate test results, faster test runs (because mocks and bad data are eliminated), and higher confidence before releases. You can accomplish this through feature flagging, continuous delivery, and data cleanup. Only when your end-to-end tests pass in production will you know that your features are truly working. Talia Nassi will show you how to mitigate the risks, understand the steps to get there, and shift your company’s testing culture to provide the best possible experience to your users.

Talia Nassi
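
As a sketch of the feature-flag pattern (the endpoint, flag, and field names are invented for illustration): the new feature is enabled only for designated test accounts, so an end-to-end test can exercise it in production without touching real users, then clean up after itself.

    import requests

    BASE_URL = "https://api.example.com"  # placeholder production endpoint
    TEST_USER = {"id": "e2e-test-user", "test_account": True}

    def test_new_checkout_flow_in_production():
        # The flag service turns the new checkout on only for test accounts.
        resp = requests.post(
            f"{BASE_URL}/checkout", json={"user": TEST_USER, "sku": "sku-1"}
        )
        assert resp.status_code == 200
        order_id = resp.json()["order_id"]
        # Data cleanup: delete the synthetic order so production data stays clean.
        requests.delete(f"{BASE_URL}/orders/{order_id}")
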
STAREAST Testing Large Data Sets with Supervised Machine Learning

A price rate is used to calculate an insurance premium based on the insurance coverage selected. Every year the price rate is revised to reflect updated regulations, so after each change the new rate has to be tested against a large amount of data to make sure the premium is correct for the coverage. Manually testing fifty thousand data entries and their variations is impossible for any testing team. Alireza Razavi will present an AI-based automated testing framework designed to solve this problem. Discover how to use a supervised machine learning algorithm: determine the type of training examples, gather representative training sets, select the input feature representation of the learned function, design the corresponding learning algorithm, run it on the gathered training set, and evaluate the accuracy of the learned function.

Alireza Razavi
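
For a flavor of how such a framework might hang together, here is a minimal sketch using scikit-learn with stand-in data (the features, model choice, and deviation threshold are illustrative assumptions, not details from Alireza’s framework): train on verified premiums, evaluate on a held-out set, and flag new calculations that deviate from the learned function.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Training examples: policy features (e.g., age, coverage level, region)
    # paired with the verified premium for each entry.
    X = np.random.rand(50_000, 3)                  # stand-in for real policy data
    y = X @ np.array([120.0, 300.0, 45.0]) + 50.0  # stand-in premium formula

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

    # Evaluate the accuracy of the learned function on held-out data.
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))

    # Premiums far from the model's prediction get flagged for human review.
    suspicious = np.abs(model.predict(X_test) - y_test) > 25.0
    print("entries flagged:", int(suspicious.sum()))
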
STAREAST Well, That’s Random: Automated Fuzzy Browser Clicking

Roughly speaking, "fuzzing" is testing without an oracle—essentially, testing without knowing what the outcome should be. We don’t know what should happen, but we have a good idea of things that shouldn’t happen, such as 404 errors and server or application crashes. We generally apply fuzzing to produce these kinds of errors when we’re testing text boxes, but why should text boxes have all the fun? Websites today are interconnected, multiserver applications that include connections to out-of-network servers, making it difficult to enumerate and control all the possible combinations of paths through our system. Even if we could identify all the possible paths, most organizations would not have the time to test all these scenarios, regardless of whether they apply automation to help with that testing.

Paul Grizzaffi
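
A bare-bones version of the idea might look like the sketch below (an illustration, not Paul’s harness): click random visible elements and fail on the error signatures we do know are wrong, even without an oracle for what is right. A real harness would also constrain navigation to the system under test.

    import random
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")  # placeholder starting page
        for _ in range(100):
            clickable = [e for e in driver.find_elements(By.CSS_SELECTOR, "a, button")
                         if e.is_displayed()]
            if not clickable:
                break
            try:
                random.choice(clickable).click()
            except Exception:
                continue  # stale or obscured elements are expected noise here
            # No oracle for what should happen, but these should never happen.
            title = driver.title.lower()
            assert "404" not in title and "error" not in title, driver.current_url
    finally:
        driver.quit()
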
STAREAST API Testing: Going from Manual to Automated

API testing can be challenging, especially for the uninitiated. Ever wonder what makes an API test great? Patrick Poulin will arm you with an understanding of the benefits of automating API testing rather than doing it manually. Patrick will survey the tools landscape and show the common errors people make when creating API tests. He'll walk through the steps required to fully automate an API testing framework and show that it is simpler than most people assume. Leave this session with an understanding of how to automate API testing and overcome the fear of the unknown.

Patrick Poulin
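
As a taste of how small that first step can be, here is a minimal automated check using Python’s requests library (the endpoint and response fields are placeholders): the same call a tester would make by hand in a REST client, captured as a repeatable test with explicit assertions.

    import requests

    def test_get_user():
        resp = requests.get("https://api.example.com/users/1", timeout=10)
        assert resp.status_code == 200
        body = resp.json()
        # Assert on the contract, not the whole payload, to keep the test stable.
        assert body["id"] == 1
        assert "email" in body
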
STAREAST What's That Smell? Tidying Up Our Test Code

We are often reminded by those experienced in writing test automation that code is code, meaning test code should be written with the same care and rigor as production code. However, many people who write test code have never written production code, so it’s not entirely clear to them what that standard looks like. And even those who do write production code find that there are design patterns and code smells unique to test code. Join Angie Jones as she presents a smelly test automation code base littered with bad coding practices and walks through each of the smells. She'll discuss why each one is considered a violation and demonstrate a cleaner approach.

Angie Jones
