It can be easy to feel like the villain when you work in testing. After all, part of the job is to point out when things are broken, people have made mistakes, timelines aren't realistic, or a plan can't work. But if your team feels like you're a frequent naysayer, trust can and will erode.
Connor Dodge believes that data is the most valuable commodity in the world, and that testers generate some of the most valuable data in product development organizations. Test data can inform release schedules, aid in decision-making, and shape the direction of the product.
The internet of things (IoT) brings connectivity to a range of previously non-internet-enabled physical devices and real-world objects. This shift impacts testing—changing what we test, when we test, and the way we test. For one thing, once you’re in the real world, the number of possible issues explodes due to environmental conditions. Just as a race car must adjust its tires for different track conditions, IoT devices must account for environmental factors such as temperature and humidity to prevent unanticipated failures. Jane Fraser believes that for the IoT to be successful, we must focus on developing testing methods, analytics tools, and SDKs that help teams automate activities such as checking connection strength and robustness, verifying mobile compatibility, and testing various hardware capabilities. This includes Wi-Fi, BTLE, radio, natural language processing technologies, and more.
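The idea of covering environmental conditions can be sketched as an environment matrix that exercises the same device check under different temperature and humidity combinations. This is a minimal illustration, not material from Jane Fraser's session; the device model, its operating limits, and all names here are hypothetical.

```java
import java.util.List;

public class EnvMatrixDemo {
    // Hypothetical environmental conditions for a device-under-test.
    record Conditions(double tempC, double humidityPct) {}

    // Stand-in for the real device: assume a (hypothetical) spec saying the
    // device operates from -10C to 50C at up to 90% relative humidity.
    static boolean deviceOperates(Conditions c) {
        return c.tempC() >= -10 && c.tempC() <= 50 && c.humidityPct() <= 90;
    }

    public static void main(String[] args) {
        // One check, many environments: nominal case plus the spec boundaries.
        List<Conditions> matrix = List.of(
            new Conditions(20, 40),   // nominal indoor conditions
            new Conditions(-10, 40),  // cold boundary
            new Conditions(50, 90),   // hot and humid boundary
            new Conditions(25, 90));  // humidity boundary alone
        for (Conditions c : matrix) {
            if (!deviceOperates(c)) {
                throw new AssertionError("device failed at " + c);
            }
        }
        System.out.println("all environment cases pass");
    }
}
```

In practice the matrix would come from the device's published operating ranges, and each row would drive a real device or emulator rather than a boolean model.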
Max Saperstone tells the story of how a health care company striving for continuous releases built up its automation to secure confidence in regular releases. With no existing test automation, Max had a greenfield opportunity and, over the span of twelve months, developed more than two thousand test cases. He created a pipeline to verify the integrity of the automated tests and to build Docker containers for simplified test execution. These containers could be easily reused by developers and the DevOps team to verify the application. Join Max as he walks through the feedback loop that allowed application verification to go from hours to minutes. Max will share his choices of BDD tooling, integrated with WebDriver solutions, to verify the state of web and mobile applications.
Testing artificial intelligence- and machine learning-based systems presents two key challenges. First, the same input can trigger different responses as the system learns and adapts to new conditions. Second, it tends to be difficult to determine exactly what the correct response of the system should be. Such system characteristics make test scenarios difficult to set up and reproduce and can cause us to lose confidence in test results. Yury Makedonov will explain how to test AI/ML-based systems by combining black box and white box testing techniques. His "gray box" testing approach leverages information obtained from directly accessing the AI’s internal system state. Yury will demonstrate the approach in the context of testing a simplified ML system, then discuss test data challenges for AI using pattern recognition as an example and share how data-handling techniques can be applied to testing AI.
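The core difficulty described above can be made concrete with a toy adaptive component: the same input yields different outputs as the component learns, so an exact-match black-box assertion breaks, while a gray-box test can read internal state to derive the expected value. This is a minimal sketch under invented assumptions—the `AdaptiveScorer` class, its weighting formula, and the `internalMean` hook are all hypothetical, not Yury Makedonov's actual example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for an ML component: its score for the same input
// drifts as it observes more training data.
class AdaptiveScorer {
    private final List<Double> observations = new ArrayList<>();

    void learn(double value) {
        observations.add(value);
    }

    // Output blends the input with the running mean of what was learned,
    // so identical inputs produce different outputs over time.
    double score(double input) {
        return input * 0.5 + internalMean() * 0.5;
    }

    // Gray-box hook: expose internal state so a test can compute
    // the currently correct expected value instead of hard-coding one.
    double internalMean() {
        return observations.stream().mapToDouble(d -> d).average().orElse(0.0);
    }
}

public class GrayBoxDemo {
    public static void main(String[] args) {
        AdaptiveScorer scorer = new AdaptiveScorer();
        scorer.learn(2.0);
        double before = scorer.score(4.0);
        scorer.learn(6.0);
        double after = scorer.score(4.0);

        // A black-box exact-match assertion on `before` would now fail:
        // same input, new output, because the system has learned.
        if (before == after) throw new AssertionError("expected drift");

        // Gray-box assertion: derive the expectation from internal state.
        double expected = 4.0 * 0.5 + scorer.internalMean() * 0.5;
        if (Math.abs(after - expected) > 1e-9) {
            throw new AssertionError("gray-box check failed");
        }
        System.out.println("gray-box check passed");
    }
}
```

The point is the testing pattern, not the toy model: when outputs legitimately drift, the oracle must come from somewhere—here, from sanctioned access to internal state rather than a frozen expected value.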
We are often reminded by those experienced in writing test automation that code is code. The sentiment is that test code should be written with the same care and rigor as production code. However, many people who write test code have never written production code, so it’s not always clear what that means in practice. And even those who write production code find that there are unique design patterns and code smells specific to test code. Join Angie Jones as she presents a smelly test automation code base littered with several bad coding practices and walks through every one of the smells. She'll discuss why each is considered a violation and, via live coding, demonstrate a cleaner approach. While all coding examples will be done in Java, the principles are relevant for all test automation frameworks.
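One classic test-specific smell is duplicated setup with magic values, commonly refactored with a test-data builder. The sketch below is a hypothetical illustration of that general pattern, not an example from Angie Jones's session; the `Account` domain class and all names are invented for this snippet.

```java
// Hypothetical domain class used by the tests.
class Account {
    private final String owner;
    private final double balance;

    Account(String owner, double balance) {
        this.owner = owner;
        this.balance = balance;
    }

    boolean canWithdraw(double amount) {
        return amount <= balance;
    }
}

// Cleaner approach: a builder centralizes defaults, so each test
// states only the detail it actually cares about.
class AccountBuilder {
    private String owner = "default-owner";
    private double balance = 0.0;

    AccountBuilder withOwner(String owner) { this.owner = owner; return this; }
    AccountBuilder withBalance(double balance) { this.balance = balance; return this; }
    Account build() { return new Account(owner, balance); }
}

public class BuilderSmellDemo {
    // Smell: every test repeats full construction with magic values,
    // obscuring which value the test actually depends on.
    static void smellyTest() {
        Account a = new Account("alice", 100.0);
        check(a.canWithdraw(50.0));
    }

    // Cleaner: only the relevant detail (the balance) is stated.
    static void cleanTest() {
        Account a = new AccountBuilder().withBalance(100.0).build();
        check(a.canWithdraw(50.0));
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        smellyTest();
        cleanTest();
        System.out.println("both tests pass");
    }
}
```

The builder version also survives constructor changes: when `Account` gains a parameter, only the builder's default needs updating, not every test.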
Testers tend to be innately curious creatures. Being curious and evaluating risks—that is what the testing job is about. Often it is the statement “I don’t know” that drives our curiosity in testing. Find out not only how to push past the fear of not knowing but how to embrace your curiosity.
For many test organizations, the first hurdle to automating the testing of a product is deployment of that product in its test environments. Infrastructure as code can be used to facilitate the basic processes of provisioning servers, from bare metal to virtual to cloud, as well as configuration management of the software that resides on the servers. Off-the-shelf infrastructure-as-code tools such as AWS CloudFormation, Chef, Puppet, and Ansible provide less expensive alternatives to developing proprietary in-house deployment solutions. Join Kat Rocha to learn how infrastructure as code can better align test and production environments and reduce problems that arise from configuration drift. We will explore how to use some of these infrastructure-as-code tools to facilitate automation and improve testing.
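As a flavor of what such tooling looks like, here is a minimal Ansible playbook fragment that brings a test server to a declared, repeatable state. The host group, template file, and paths are hypothetical placeholders; the modules (`ansible.builtin.package`, `ansible.builtin.template`, `ansible.builtin.service`) are standard Ansible built-ins.

```yaml
# Hypothetical playbook: configure a test web server identically on
# every run, so test and production environments stay aligned.
- name: Configure test web server
  hosts: test_web          # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy application config from a template
      ansible.builtin.template:
        src: app.conf.j2   # hypothetical template
        dest: /etc/nginx/conf.d/app.conf

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because the playbook declares the desired end state rather than a sequence of manual steps, rerunning it is safe and corrects configuration drift instead of compounding it.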
Accessibility empowers users, increases diversity, and can drive higher adoption and higher growth of your digital services. The axe family of open source technologies has been designed with speed, ease of integration, and zero false positives in mind.
Because of its specialized nature, many aspects of application security testing are often assigned to testers from another team or another company, and they may be brought in to perform a point-in-time assessment prior to a release.