In this interview, Tanya Kravtsov, a director of QA at Audible, explains why identifying bottlenecks is so critical when you’re turning to agile and DevOps, as well as how automating manual processes can lead to better quality.
Josiah Renaudin: Welcome back to another TechWell interview. Today I’m joined by Tanya Kravtsov, a director of QA at Audible and keynote speaker at this year’s Agile Dev, Better Software & DevOps West conference. Tanya, thank you very much for joining us. First, could you tell us a bit about your experience in the industry?
Tanya Kravtsov: I have fifteen years of experience in the financial and technology industries, building and managing QA, automation, and DevOps teams. I am currently a director of QA at Audible, a subsidiary of Amazon and the number one provider of spoken audio entertainment, helping to build a new QA org to support innovative web product development at scale. Previously, as the head of automation and continuous delivery at ROKITT, senior QA manager at Syncsort, and VP at Morgan Stanley, I focused on quality, automation, and DevOps practices and worked with internal and external customers to transform development and testing processes. In 2014, I founded the DevOpsQA NJ meetup, which brings together process transformation enthusiasts from the NY and NJ area.
Josiah Renaudin: Why is identifying bottlenecks so critical when you’re turning to agile and DevOps?
Tanya Kravtsov: Since agile and DevOps promote continuous integration, continuous testing, and continuous deployment, anything that breaks this continuity is a potential bottleneck. According to the Theory of Constraints, a chain is no stronger than its weakest link. Gene Kim reinforces this in The Phoenix Project: "Any improvements made anywhere besides the bottleneck are an illusion." Any manual process in the software development lifecycle interrupts the continuous build-test-deploy cycle and will either extend the time to market or keep growing technical debt due to insufficient testing. Identifying these weakest links, or bottlenecks, is a critical step in achieving agility.
Josiah Renaudin: What tools would you suggest teams use to not only find these bottlenecks, but also encourage innovative thinking so that these bottlenecks can be more easily eliminated?
Tanya Kravtsov: Monitoring, logging, reporting, and analytics tools go a long way by providing valuable insight into application usage and performance, test frameworks, and customer feedback. Using this data to identify gaps in the product or process, in conjunction with automation tools that remove the manual steps involved, can greatly contribute to the innovative solutions needed to eliminate bottlenecks.
Josiah Renaudin: What are some of the most common bottlenecks teams encounter? Which one tends to be the most difficult to find and resolve?
Tanya Kravtsov: Some of the most common bottlenecks in the SDLC are data, including data discovery, generation, subsetting, masking, and cleanup; test environments, including environment setup and monitoring; and test execution, including build validations, unit testing, regression, and test results analysis. The most difficult bottlenecks to find are the ones that have historically been done manually and are assumed to require human intervention, like an approval step that requires a physical sign-off from a manager, or test results analysis and defect logging. However, any of these can be automated to an extent, provided proper process documentation and understanding.
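To make the data bottleneck concrete, here is a minimal sketch of the kind of data-masking step that is often done by hand before seeding a shared test environment. This is an illustrative example only, not Audible's tooling; the field names and token length are assumptions.

```python
import hashlib

# Illustrative list of sensitive fields; a real pipeline would load this
# from a data classification catalog rather than hard-code it.
SENSITIVE_FIELDS = {"ssn", "email"}

def mask_record(record):
    """Replace sensitive values with a deterministic, irreversible token.

    Hashing (instead of random substitution) keeps the masked value stable
    across runs, so joins and repeated test executions stay consistent.
    """
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(str(value).encode("utf-8")).hexdigest()
            masked[key] = digest[:12]  # short stable token, not reversible
        else:
            masked[key] = value  # non-sensitive fields pass through unchanged
    return masked

customer = {"id": 42, "email": "jane@example.com", "ssn": "123-45-6789"}
masked = mask_record(customer)
```

Once a step like this is scripted, it can run unattended as part of environment setup instead of waiting on a person, which is exactly how a manual bottleneck gets removed from the cycle.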
Josiah Renaudin: How can mind maps and innovation games help eliminate your bottlenecks and lead to a smoother agile transition?
Tanya Kravtsov: While mind maps and innovation games cannot eliminate bottlenecks themselves, they are great tools for facilitating bottleneck discovery. Mind maps go hand in hand with brainstorming exercises, helping teams deep-dive and branch out, while innovation games like the Speed Boat exercise help to collaboratively identify and prioritize the top bottlenecks.
Josiah Renaudin: How can automating these manual processes lead to better quality? Can manual testing be at the heart of most bottlenecks?
Tanya Kravtsov: Automating most or all of the manual processes in the delivery cycle can enable teams to follow agile principles and deliver software iteratively, continuously collecting feedback. That feedback will, in turn, make developers more quality-conscious and encourage practices such as static code analysis, code reviews, and test-driven development, which improve the quality of the code. For testers, automation of repetitive testing tasks will allow them to spend their time exploring the product and finding those unknowns that would otherwise be found by customers in production.
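The repetitive tasks worth automating are often table-driven checks that a tester would otherwise re-run by hand from a spreadsheet. A minimal sketch, with a made-up function under test (`apply_discount` is an assumption for illustration, not from the interview):

```python
def apply_discount(price, percent):
    """Illustrative function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Each entry pairs inputs with the expected result -- the automated
# equivalent of a manually maintained regression checklist.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 50), 10.0),
]

def run_regression(cases):
    """Run every case and collect failures rather than stopping at the
    first one, so the full results are available for analysis."""
    failures = []
    for args, expected in cases:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures
```

Wired into a CI job, a suite like this runs on every build with no human in the loop, freeing testers for the exploratory work described below in the interview's own terms.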
Josiah Renaudin: You mention that removing these bottlenecks gives us more time for exploratory testing. Can you define exploratory testing and how it can help teams think outside the box?
Tanya Kravtsov: Exploratory testing is a time-boxed, minimum-planning, maximum-execution, hands-on approach to testing. It focuses on finding known-unknown and unknown-unknown defects, which normally cannot be found with automated or any other type of scripted tests, which cover known-known and unknown-known issues. Exploratory testing is where true testers really get to exercise their skills, curiosity, and passion for testing, apply mnemonic devices, and think outside the box to find the most hard-to-find bugs.
Josiah Renaudin: What central message do you want to leave with your keynote audience?
Tanya Kravtsov: As a result of this keynote, the audience will learn how to identify the bottlenecks in the SDLC, as well as the tools and techniques available in the market to address them. I would like them to come away with a vision of what the ideal future state should look like for their organization and innovative ideas for how to achieve the needed transformation. I recently heard a quote, "Fall in love with the problem, not the solution," and this is what I would like the audience to leave with. Agile and DevOps are about utilizing various tools to continuously improve the process, sometimes failing but eventually succeeding, achieving small wins and collecting feedback along the way. As long as we are passionate about solving the problem and have an awareness of the available tools, we will be able to move toward that coveted future state of continuous delivery.
Tanya Kravtsov is a director of QA at Audible, helping build a new QA organization to support innovative web product development at scale. Previously, as head of automation and continuous delivery at ROKITT, senior QA manager at Syncsort, and VP at Morgan Stanley, Tanya focused on quality, automation, and DevOps practices, working with internal and external customers to transform development and testing processes. Tanya is passionate about process automation, continuous integration, and continuous delivery. She is a founder of the DevOpsQA NJ Meetup group and a frequent speaker at STAREAST, QUEST, and other conferences and events. Follow her on Twitter @DevOpsQA.
Excellent points made by Ms. Kravtsov. I would like to point out that there are two views of the role of QA in a DevOps setting. Both are legitimate. Audible's approach appears to be that QA's DevOps role is to own the pipeline; at least, that is what I gather from the interview. I have often seen that evolve to where development subsumes the role of QA. It is consistent with the Agile philosophy of "trust the team." However, there is another approach, equally legitimate, in which QA plays the role of customer advocate. In that approach, QA develops its own white-box tests. A DevOps process requires that those tests be incorporated into the pipeline and run on a cadence; still, the tests are independent. I would propose that for most product companies, where the primary feedback loop is with the customer, Audible's approach is best; but for companies that deal with high risk, such as financial services and medical devices, the other approach would be more appropriate, whereby QA provides an independent test suite.
Thanks for taking the time to discuss this; I feel strongly about it and love learning more about this topic.