In this interview, Wayne Ariola talks about the relationship between risks and continuous testing, misperceptions of continuous testing, who should be interested in it, how we are all in a new era of testing, and why it's the best time in the history of software to be a tester.
Cameron Philipp-Edmonds: Today we are joined by Wayne Ariola of Parasoft. He is going to be speaking to us today about continuous testing. To start things off, let me thank you for joining us today.
Wayne Ariola: My pleasure, as always.
Cameron: All right, can you also tell us a little bit about yourself and your role at Parasoft?
Wayne: Sure, I've been with Parasoft for eleven or twelve years now. My current role is chief strategy officer, so I'm responsible for the direction of the company from a technical perspective and for the ways we can help clients achieve better results by reducing the risks associated with application failure. What I do is work very closely with clients to help them achieve their business goals with software.
Cameron: All right, fantastic. Now today you're going to talk to us about continuous testing, so in your words, can you tell us what continuous testing is?
Wayne: Sure, today we're in a position in which software is truly the interface to the business. There's just no doubt about it. Software [Skype] is the interface to our discussion. It didn't always use to be that way, but today it certainly is. Whether you're a bank or driving a tractor in the field, software is optimizing how you're interacting with your customer or how that customer is interacting with what they are doing. With that, the risk of software failure carries an extended cost. The cost of quality, which is really the cost of failure, is much greater today than it ever has been. As we progress down this technical chain of events, it will become even more expensive, and there will be more risk to the business when software fails.
When I'm talking about continuous testing, I mean we need a method in which we can more confidently evaluate our products (software) continuously to understand the risks associated with it at any point in time. And when I say products, I mean software projects or any kind of application that is evolved through a software development group.
Cameron: Now you mentioned risk there a couple of times. What exactly is the relation of the word risk to continuous testing? Is it kind of a one-to-one ratio?
Wayne: Absolutely, there's no doubt about it. Just like any manufactured product, what you're trying to do is diminish the risk of failure, right? Let's just face it: Software is complex—it's not easy.
It's just one of those things that as you begin to evolve software, the application gets pretty complex, pretty quickly. With each change, the complexity, and with it the risk of failure, increases. There are a couple of great FDA studies along this theme, by the way. In most of the FDA recalls (and I'm paraphrasing this dramatically), they found that it wasn't the initial application that carried the risk; it was the subsequent change that introduced risk into the application. So the second release, the third release: those had a greater risk of application failure. And as you can imagine, as we look at an application and start tearing into one area, rebuilding around something or adding code to one element, things get a little more complex. As they get a little more complex, the risk of failure increases.
With that, we really need to understand where the risk associated with the application is at any given point in time so we can make trade-off decisions associated with release candidates. That's really the translation between what the development team has developed and what the business is willing to adopt, or accept, as a risk in terms of going to market.
Cameron: It sounds like continuous testing is really great for diminishing risks and having a great handle on things. Who really should be interested in continuous testing and why should they be?
Wayne: That's a great question. As every industry matures, you see the same trend in process maturity repeat itself. That maturity plays out along three distinct elements: speed (or time), scope, and quality. These trade-off elements are in constant conflict. The business is always saying, "I need more. I need it faster." As I mentioned before, software is distinctly the interface of the business. If software is that differentiating component, then you need to be able to deliver software faster. So the business is saying, "I need more and I need to be able to differentiate faster."
Now we have this scope and quality issue that development teams need to accommodate. If you look at any single industry, and one of the best documented is the auto industry, you need to put these things into balance in order to come out with a quality product that enhances the brand and doesn't promote any distinct type of risk. I did a little research project last year. I took all the public companies that had software failure announcements between 2012 and early 2014. On the day of the announcement, those companies lost an average of 2.3 billion dollars of shareholder value.
Cameron: Wow. Just wow.
Wayne: When we sit there and we talk about risk and about controlling the quality of software... I doubt that the developers who were working really, really hard to turn that product out were thinking that they were going to lose 2.3 billion dollars of shareholder value because there was a bug in their software.
Cameron: Gosh, that's unbelievable.
Wayne: That is really a distinct metric that we need to start to manage much more comprehensively.
Another issue that we need to balance is the conversation between the business and the technical team. We're no longer in this era where you shove pizza under the door and out comes code, right? We're in an era in which the business demands that we turn out software that meets the objectives of the organization. So we have this gap, I like to call it the "geek gap," though the phrasing is not so popular, between what the business is expecting and what development is delivering. We need to be able to translate the risk associated with how the applications are evolving so the business manager can make better trade-off decisions: go or no-go. We're at the precipice where the next level of process maturity is required in order to prevent a collision between technical teams and the line of business. A lot of people are calling this DevOps; a lot of people are calling it Bimodal, or Lean development, or agile. We're in a business conversation now, not solely a technical conversation.
Cameron: So to kind of reiterate a little bit here. You should be interested in continuous testing if you really care about quality and creating value as well as balancing all the different evolving factors that have emerged in our new industry.
Wayne: Absolutely. Let me add a little color to that.
We're in an era in which the business is demanding differentiable software at a faster and faster pace. In order to deliver it, we need a much better handle on exposing the risks, so that business managers are in lockstep with the development team and can deliver an acceptable level of risk to the market upon release. I like to emphasize this acceptable level of risk. All software has a level of risk to it. The question is whether the business can absorb the risk that you're about to deliver to the market or not, and that's a business decision. If we face an unacceptable level of risk, you now have a scope and time decision to make. We need to better delineate these risks for business managers.
Cameron: Now continuous testing seems like a really great tool here. Is there a law of diminishing returns for continuous testing or is it kind of a more the merrier approach that if you're doing continuous testing and you're doing more and more of it, you should get a better and better result?
Wayne: In order to answer that question, let me break out continuous testing. There are basically four major points associated with continuous testing. One is that the business expectations associated with the application need to be defined. So the business risk associated with the application, the team, or the release candidate needs to be very well defined. When those business expectations are defined, we are able to act upon them.
The second thing is that defects are automatically prioritized against those business objectives or expectations. We understand what errors have been injected. We have very distinct isolation techniques to isolate the defects at any particular point in time, and when one of those defects is found, its priority is automatically categorized so it doesn't just slip away.
The next thing we need, and this is point three, is very distinct ownership of the workflow and defect remediation. Today, when a potential defect is found, you go through a huge iterative process: can you recreate it? Can you define it? Do you have an environment for recreating the error? Can development recreate it, and can you fix it? We have mechanisms to collect the information, but we don't have very distinct, clear paths for ownership and remediation. That is one of the biggest areas where we can collapse the remediation cycle time and save a lot of time.
This leads to the last point, which is when we do find that there are trends associated with defects that are found, we need this feedback loop for defect prevention. Let’s say an organization is exposing more security issues than they would like. We should look at that pattern, understand that pattern, and then create defect prevention strategies to eliminate those classes of errors in the future. These priorities will obviously shift versus the business expectations.
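To make the four points above concrete, here is a minimal toy sketch of a "continuous testing gate." All names, weights, and thresholds here are invented for illustration; this is not any specific product's implementation, just one way the ideas could be wired together:

```python
# Illustrative sketch only: the four points of continuous testing as a toy
# release gate. Every driver name, weight, and threshold is an assumption.
from collections import Counter
from dataclasses import dataclass, field

# 1. Business expectations are defined up front, as weighted risk drivers.
BUSINESS_DRIVERS = {"security": 5, "data-integrity": 4, "performance": 2, "cosmetic": 1}

@dataclass
class Defect:
    summary: str
    driver: str                 # which business expectation it threatens
    owner: str = "unassigned"   # 3. distinct ownership for remediation
    priority: int = field(init=False)

    def __post_init__(self):
        # 2. Priority is assigned automatically from the business driver,
        # so a defect never "just slips away."
        self.priority = BUSINESS_DRIVERS.get(self.driver, 1)

def release_risk(defects, threshold=8):
    """Sum open-defect priorities and compare against the business's
    acceptable level of risk: the go / no-go trade-off decision."""
    score = sum(d.priority for d in defects)
    return score, ("no-go" if score > threshold else "go")

defects = [
    Defect("SQL injection in login", "security", owner="alice"),
    Defect("misaligned button", "cosmetic"),
    Defect("order total rounds wrong", "data-integrity", owner="bob"),
]

score, decision = release_risk(defects)
print(score, decision)  # security(5) + cosmetic(1) + data-integrity(4) = 10 -> no-go

# 4. Feedback loop for defect prevention: count defects per driver to spot
# the patterns (e.g., recurring security issues) worth preventing next time.
trend = Counter(d.driver for d in defects)
```

The point of the sketch is the shape of the loop: expectations defined first, defects prioritized against them automatically, owners assigned, and trends fed back into prevention.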
When you ask the question, can you do more? I don't think the question is "do more?" I think the issue is more about putting this idea of testing better in line with the business: delivering information with which the business can make better trade-off decisions about an application or its release versus extending time or scope. With that being said, a lot of people say, "Well, we do manual testing, we can't be continuous." It doesn't matter. You can have a continuous testing mindset and be a 100 percent manual testing organization, as long as you're achieving the business demands of the organization. This is something we always try to think about, because we assume that everything needs to be automated. Automation is great, by the way. Automation gets us there a lot faster, without necessarily relying on humans, who can inject errors as well. But if the business demand, or the application itself, has an environment in which manual testing achieves the goal, then manual testing is it. What we want to do is fit the practices to the business, not what we have been doing in the past, which was to fit the tool into the business. We want to make sure the practices meet the business demand.
Cameron: Now, you kind of said it too, where people say they can't do continuous testing because they only do manual testing. That's kind of a trap that a lot of people fall into: they think that continuous testing is really just more automated testing. They don't understand that it goes so much deeper than that. What response or what approach would you give to someone who has that type of thinking?
Wayne: That's a great question. I get asked this question a ton when I'm at conferences. They say, what's the single most important thing you would tell a QA tester, or a tester, or a developer? I would say: understand your business. Understand the true objective of the application you're working on. Understand the end users' experiences with the application you're working on. That is the single biggest gap. By the way, I think this is the best time in the history of software to be a developer or a tester. Right now, as we sit here, this is the best time to be a tester.
Cameron: And why is that?
Wayne: Because we're in this era in which quality matters, and we have to redefine how we're going to achieve that level of quality, which, inversely, can be defined as the reduction of risk. We as testers now have the opportunity to understand the business objectives and apply our knowledge to a process more than ever before. As there's demand for faster and more, we need to influence that process in order to show that we can actually reduce the risk to the business.
Today, as things are moving faster, and there's this drive for more, and the risk associated with it is greater than it's ever been, developers and testers can significantly influence the business outcome of the projects they're working on. But we have to take the steps as individuals to make sure that we understand our business: that we read our 10-Ks, that we understand the annual report, that we understand what the CEO is asking for and what the CEO's strategic directives for the business are. If we don't understand that, then we can't contribute and speak in a way that the business finds applicable. Today, we are in this era in which we are blending the business demand with the actual technical deliverable. And the more we know about our actual business objectives, the better we are going to be overall as an industry.
Cameron: Awesome. Fantastic. All right, to kind of wrap things up here a little bit, we're going to talk about a book that you recently wrote, Continuous Testing, along with Cynthia Dunlop, which came out earlier this year. What led you to writing this book?
Wayne: It was really personal experiences. Aside from being in this industry for multiple decades, I've also always been responsible for the business direction of software. As a business owner of the software, my perspective was much like what all of our software teams face when they're facing the product manager, the CEO, or someone in the line of business. When I order a piece of functionality, whatever that might be, I just want it to work. When it doesn't, it's ridiculous: I ordered X. It's like going through a drive-thru, ordering a cheeseburger, and getting a chicken sandwich. That's like, oh, what happened? It didn't work.
Cameron: [Laughter] Right.
Wayne: It's very hard for business people to understand the nuances of the complexity of software. The business feels like it's just software and it should work. Or it's malleable and it should be able to be made to work. When it doesn't work, it's a drastic failure on the team's part. This whole era of SOA, and APIs, and mobile, and the transfer of functionality into the hands of the end user has driven this idea of business differentiators, and we needed a way to bridge the gap between the expectation that software just works and what the developers are doing day to day. This is where continuous testing came in.
There are also a number of other issues associated with it. A lot of times you hear about DevOps, or agile, or Bimodal or Lean development practices. A lot of the emphasis in those conversations is about speed, about communication, about the cultural shift in how we're going to try to meet the demands of the business. Usually the testing topic, or the quality topic, associated with that conversation is the last piece of it. We wrote this book in order to highlight the fact that it's not just speed, it's not just time and collaboration; it's also having stringent processes for understanding the business demand for testing.
Cameron: You talked about why you wrote the book. Really who is the target audience that you wrote this for? And why is it worth them reading it?
Wayne: It's another great question, because I get different responses from people reading it. I really wrote this for the business manager, to give them a better idea of the complexities that development teams have to face. I also wrote it for the development teams, because we have to make a shift. There's this concept called Shift Left, in which we take practices that are more expensive downstream and shift them upstream, or left, in order to do them in a manner that is, A, less expensive and, B, reduces the risk associated with the application more comprehensively, earlier in the process. It makes total sense, and every industry in the world has gone through this type of Shift Left initiative, driven by either quality or speed.
Even if you look at supply chain management, the whole concept was essentially Shift Left. It ties in very well with agile and lean and everything else. With that, though, developers, testers, and dev teams also need to step back a little and understand: why are we now being asked to do so much more? And what does it mean for us to do so much more? What are the business impacts of me taking the time to write a unit test, or adopt things like test-driven development or static code analysis, or look at more risk analysis earlier in my process, and how do I work that into my day-to-day activities?
Quite honestly, it's not like the development team is sitting there waiting for more things to do. They're busy. I can guarantee you they're at 100 percent capacity. So when you introduce this idea of Shift Left, there's also a much greater demand on their time. We need to understand what that means from the business perspective in order to set up the business case for them to make the shift. It's really to promote this conversation between the line-of-business owner and the people delivering to those requirements.
Cameron: All right, fantastic. Now as kind of one last question for our readers and listeners out there, what do you see as the future for software development and are there any emerging trends that you can see?
Wayne: Well, that's a broad question. Let me take it in the direction of software and the evolution of software. There are many people who would oppose taking manufacturing analogies and applying them to the SDLC. If I look at Lean manufacturing principles, or cellular manufacturing principles, or TQM, or Six Sigma, any real, true manufacturing initiatives or programs or methodologies, and apply them to software, a lot of people get defensive, because they say, hey, you know what, software is a creative endeavor. And it really is. It truly is a creative endeavor. The folks who write software and test software are really intelligent folks.
Cameron: Oh, absolutely.
Wayne: What's going to happen, though, is we've got to take a lot of the human interaction out of expectation setting and testing against those expectations, and automate it. I like to use this analogy: if you look at the software development life cycle, we do a lot of ad hoc testing at particular points in the process. What we really need to do is turn on sensors throughout the software development life cycle that collect information continuously, rather than at fixed points in time, and use that information to mitigate the risks associated with the application itself. We have to learn from that information in order to do a better job of releasing the next iteration of our software.
Today we do it a little ad hoc compared to other industries. For example, in the auto industry, you have a chassis moving down the production line. When the frame comes to meet the chassis, a little arm comes in and spot welds the two together. I can guarantee you that when there's a spot weld, a little laser comes in to measure the tolerance of that weld: are you in tolerance with that weld or out of tolerance?
And there are two things they're checking for. The first is whether the spot weld itself met the requirement. The second is whether the process associated with the spot welding is in control or out of control. Today, in the software industry, we do a lot of work to answer the first question: did we meet requirement one? But we don't take a lot of time to step back and ask: are our processes for meeting that requirement in control or out of control? When I look at the future, it's about looking at testing, or the process of quality, from much more of a process perspective rather than an ad hoc test perspective.
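The "in control or out of control" question is the classic statistical process control idea. As a rough sketch, under the assumption that you track something like defect counts per release (the data here is entirely made up), a basic Shewhart-style control chart answers that second question:

```python
# Illustrative sketch, not any product's feature: asking whether the
# *process* is in control, using mean +/- 3-sigma control limits over
# historical defect counts per release. All numbers are invented.
import statistics

defects_per_release = [12, 9, 11, 10, 13, 11, 10, 31]  # newest release spikes

def control_limits(samples, sigmas=3):
    """Compute lower/upper control limits from historical samples."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - sigmas * sd, mean + sigmas * sd

# Establish limits from the stable history (everything but the newest point)...
lcl, ucl = control_limits(defects_per_release[:-1])
latest = defects_per_release[-1]

# ...then answer question two: each release may individually pass its
# requirements, but a point outside the limits says the process has shifted.
in_control = lcl <= latest <= ucl
print(in_control)  # the 31-defect spike falls outside the limits -> False
```

The design point is that this check is about the trend across releases, not about any single requirement being met, which is exactly the gap in today's practice that the interview describes.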
Cameron: Okay, that's a great answer and thank you so much for answering that. That actually concludes our interview today. Once again this was Wayne Ariola of Parasoft and he spoke to us today about continuous testing. Thanks so much, Wayne.
Wayne: Thank you very much, Cameron.
Wayne Ariola, chief strategy officer, leads the development and execution of Parasoft's long-term strategy. He leverages customer input and fosters partnerships with industry leaders to ensure that Parasoft solutions continuously evolve to support the ever-changing complexities of real-world business processes and systems. Ariola has contributed to the design of core Parasoft technologies and has been awarded several patents for his inventions. A recognized leader on topics such as service virtualization, SOA and API quality, quality policy governance, and application security, Ariola is a frequent contributor to publications including Software Test & Performance, SOA World, and ebizQ, as well as a sought-after speaker at key industry events. Ariola brings more than twenty years of strategic consulting experience within the technology and software development industries. He joined Parasoft in March 2003. He has a BA from the University of California at Santa Barbara and an MBA from Indiana University.