With twenty years of commercial software development and testing experience, Regg Struyk has helped develop several software testing tools, including test integrity, iTest, and Polarion QA. Regg is continually analyzing testing trends and their potential impact on software testing.
Regg Struyk will present a session titled “Why Classic Software Testing Doesn’t Work Anymore” at STARCANADA 2014, which will take place April 5-9, 2014.
About “Why Classic Software Testing Doesn’t Work Anymore”:
The classic software testing team is becoming increasingly obsolete. Traditional processes and tools just don’t meet today’s testing challenges. With the introduction of methodologies such as agile, testing processes with a "test last" philosophy cannot succeed in a rapid deployment environment. To exacerbate our testing difficulties, we now have to deal with "big data," which introduces an entirely new set of problems. In the past, we have relied on tools such as test automation to solve these problems; however, classic test automation simply will not suffice on its own and must be integrated with the right testing activities while being supported by correct procedures. When you combine these problems with inadequately defined requirements and limited resources, you have a recipe for testing disaster. Regg Struyk shares real-world examples and offers constructive ways to move away from traditional testing methods to a more integrated process using concepts such as test-driven development and TestOps.
Cameron Philipp-Edmonds: Hello, here we are today with Regg Struyk, with twenty years of commercial software development and testing experience. Regg has had many different positions, ranging from the head of Pinnacle product management for Agfa HealthCare to, most recently, product evangelist for Polarion QA. Dedicated to the domain of test management, Regg is continually analyzing testing trends and their potential impact on the discipline of software testing. I guess I’ll start off. Regg, did we miss anything on that summary there?
Regg Struyk: No, it was a fairly complete summary, other than the fact that it’s Agfa HealthCare.
CP: OK, I apologize.
RS: That’s quite all right. It used to be a camera company that produced film for both cameras as well as medical devices for x-rays.
CP: OK. You are going to be speaking at STARCANADA this year, which will be April 5 through April 9. You are doing a session titled “Why Classic Software Testing Doesn’t Work Anymore,” which touches on emerging trends in software testing. I guess this is the first question: Is there a traditional methodology being held on to by a lot of companies today that has proven to do more harm than good?
RS: That’s a great question, Cameron. I’d say that yes, there is. A commonly known one is the waterfall methodology. That would be the traditional QA that uses the concept of creating requirements and design specifications. Then you would use the method of going through these different gates, so to speak. Then, there’s a gate almost at the very end of the development process, which is the testing process: writing testing specifications, creating test cases, executing those tests, and posting your results back to development. It’s a very tedious, regimented, rigorous methodology that certainly—in today’s fast-paced environments—really doesn’t meet the needs of the business community.
CP: Right. That makes plenty of sense. OK. Then, you spoke a little bit on requirements there, but what is more likely to lead to a testing disaster? Is it poorly defined goals, or is it requirements?
RS: That’s a tricky situation, because when you get into different environments, it could depend on both of those issues, or one or the other. Clearly, if a company doesn’t have defined goals and doesn’t know what its objectives are, specifically from a business perspective, that could create a disaster for testing. The same thing goes for requirements. If an organization doesn’t correctly articulate what its requirements are—meaning, what exactly am I testing? What are the expectations?—the outcomes could be very disastrous.
The mindset, I think, is definitely wrong. It comes down to the market pressures to release early and release fast, and the traditional QA methodologies don’t necessarily work in that environment. Requirements and goals get muddied for the sake of being able to release at an exponential pace. An obvious example is mobile apps. The expectation is to release them at lightning speed, but that can really suffer from a testing perspective if the user community doesn’t accept those particular apps.
CP: Right, OK. I guess with mobile development, what role do limited resources play in that? Is that going to affect the waterfall method? Is that going to affect a more agile approach? Is that going to affect the requirements and the goals being defined properly and being carried out the best way they should be?
RS: Yeah, I think there’s a mix there, certainly from a resourcing perspective. Actually, there’s a bit of a caveat there. A lot of organizations think that if you attach more resources to the testing area, that will resolve the problems. I remember back when I was managing the technical product management group at Agfa HealthCare, we used the traditional waterfall methodology. We had two hundred developers, so the sheer size of the development process was significant.
What we would do is attach more testing resources near the end of the development lifecycle—which, by the way, was two years long. Basically, all testing would occur at the end. There’d be the fear of not getting it out to market in time, a lot of defect issues, and so on. Instead of actually improving the methodology itself, they would just throw a bunch of testers at the end of the project. That was a recipe for disaster.
Keeping those kinds of things in mind, I think that is the new—what I would call—hybrid, because no one uses purely agile. Good organizations use some kind of hybrid—a mixture of what works for them. The way that we actually view testing and how we do testing will certainly change the dynamics of what gets released out into the marketplace.
For example, good practices that I’ve seen are things like testing early and testing fast: having testing done at the very beginning and testing continually. Not just the testers, but also development—everyone—is involved in the testing process. Also, having the testing organization, or the stakeholder from testing, involved at the very beginning of the development process as well, helping define the goals, all those things. That’s a really good example.
Test-driven requirements, as well. Once the requirements are created, you’re actually doing the testing at the same time, in synchronization. That really changes the complexity. I would say that there is definitely sometimes a need for additional testing, but I truly believe, from what I’ve seen out in the field, that the reality is we have to change the mindset of what we’re doing as opposed to just adding more resources.
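[Editor’s note: as a hypothetical illustration of the test-driven requirements idea Regg describes—writing the tests in synchronization with the requirement rather than at the end—a minimal sketch might look like the following. The requirement, function name, and values are all invented for illustration.]

```python
# Hypothetical requirement: orders of $100 or more receive a 10% discount.

def apply_discount(subtotal):
    """Minimal implementation written to satisfy the tests below."""
    if subtotal >= 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# These tests are authored alongside the requirement, before (or with)
# the implementation -- the "test early, test fast" idea.
def test_discount_applies_at_threshold():
    assert apply_discount(100) == 90.0

def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99

if __name__ == "__main__":
    test_discount_applies_at_threshold()
    test_no_discount_below_threshold()
    print("all tests pass")
```

Because the tests encode the requirement directly, a gap between what was specified and what was built surfaces on the first run rather than in an end-of-cycle test phase.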
CP: OK. That’s a good response. Then you also cover cloud, big data, in your session. What is the biggest problem for the adoption of the cloud and big data for software testing?
RS: Good question. I’d say there are two components to this. First is obviously the cloud. The real issue with that is security—companies being reluctant to use an off-site model, so to speak, for their testing artifacts. However, Amazon has actually taken a good approach to this and has certainly alleviated a lot of the concerns. Second, obviously, is the cultural adoption. As I mentioned, that goes back three to five years ago, roughly, when companies were reluctant to outsource pieces of development and testing. That’s certainly a part of it.
Then with big data—I have a really good white paper that we’ve written about big data and some of the issues and discrepancies that we’ll see with testing. What it comes down to is how we actually handle big data and how we will process it, as well as the tools that we use. We’ve got this insurmountable amount of data that’s always being pushed toward us with the Internet of Things—all these mobile devices and all the data that’s being created. In the last five years, we have created more data than was created from the Library of Alexandria through its entire history.
That’s the sheer amount, and it’s exponentially doubling every year. That’s not going to go away. The current methods and tools that we have for testing simply don’t have the capability to handle those sheer volumes and the integrity of that data, and neither does our current mindset. We’re going to have to behave differently and think differently about how we test large amounts of data.
CP: That streams into my next question here. Is there any particular methodology or trend that you think has a lot of potential going forward to meet those challenges for testing?
RS: Yeah, I think it really comes down to hybrids. I wouldn’t necessarily say there’s one specific hot trend; it’s a combination of things. I like the agile methodology, but let’s be honest here: agile was written and developed by developers. It isn’t testing-focused. In fact, unfortunately, testing was an afterthought, which, a lot of times in our world, is an unfortunate struggle that we have to deal with on a daily basis.
Having said that, though, the reality is that inserting testing into the agile process—being a key player, becoming one of the key stakeholders in those stand-up meetings, and having testing involved from the very beginning—really, I see that as the focus. Also, that whole concept I mentioned earlier about test early, test fast—I cannot say this enough: the organizations that succeed and are very competitive are the ones that view testing as a key component of the entire development process. I think that’s a key trend.
The other is the mindset is going to be a big difference. I’m seeing a lot of companies where the focus now is not on . . . Let me step back for a second. The traditional way to look at testing, from an organization perspective, was to test the behavior of a product, per se. You have a software application. It’s like, does it do what the requirement intended for it to do? I would say that that is the wrong way to look at it. I would say that the view needs to be, what is the customer expectation?
Companies that become customer-focused . . . I’ll name-drop here. The Nikes of the world, the Apples of the world—they look at products and they say, “What are the customer expectations? How does the customer perceive this?” This is how we have to start testing. Some companies have learned this lesson by losing a lot of both advertising value and money.
For example, Xbox, a few years ago, with—I can’t remember what version of Madden football it was—but they released early. The product was really full of defects, and customer expectations were extremely high. That was the pressure to release it. What happened is that it was a flop at the time. There are countless examples, especially in the gaming community, of products that get torpedoed if they’re released with defects when customer expectations are high. I’d say that’s the biggest thing: the customer expectations.
CP: OK. You talked a little bit about holding on to the traditional ways. How much success can a company really expect if they have a firm commitment to those traditional ways of testing?
RS: Yeah. That’s a great question. Some of these companies are actually heavily regulated—the medical device industry is one of them, now finance, et cetera. There are a lot of heavily regulated environments. That’s not necessarily a bad thing, because obviously, if you’re using medical devices, you want the risk to a patient or consumer to be very low. However, that does foster an environment of traditional methodologies. Really, what that does is slow down the pace at which they can be competitive and also release a quality product.
You’ll see a lot of the game-changers in a lot of the—especially when you’re coming to apps and you’re looking at cloud-based products—a lot of the game-changers will actually be companies that are lean and that are fostering new ways to test and develop software. When they do that, they are disruptive forces within those industries.
Certainly, I’d say that you have to balance that. The traditional methodology doesn’t make sense to follow if you’re competing against companies that are using hybrids, that are more nimble, that are able to adapt to the very quick changes happening in the marketplace. Yeah, definitely, those legacy approaches really do suffer, and we see it all the time. HP is a good example, but not just HP. There are all kinds of companies that have old, legacy ways of doing things and legacy code, and lots of smaller, nimble disrupters come along and change the marketplace.
CP: OK. You mentioned that hybrid again. Do you think it’s a case-by-case basis, or is there a ratio that is ideal for having those new testing methodologies versus keeping a little bit of your toes in the traditional methodologies? Is there a ratio that’s prime?
RS: I wouldn’t say there’s a ratio. I would say it is case-by-case. Maybe as we move forward it will become more of a defined metric, but at this point it is certainly more case-by-case. It’s not surprising, but what I see out there, when I’m talking about hybrid, is a real mix. There’s nothing standardized—whether it’s agile versus waterfall, or whatever. It’s really a mix of whatever works best. Again, this comes back to what you asked me before about goals and requirements. Really, what it comes down to is: what’s the best way for us to meet our consumers’ expectations?
CP: OK. That’s a lot of really good answers on why classic software testing doesn’t work anymore. I have one last question. With twenty years in the industry, is there something you wish that a young Regg knew that you know now?
RS: Certainly a great question. I would say that the biggest thing is being able to work with development within the organization. One of the key things in testing is that there has always been this combative mentality. Really, what it comes down to is a cultural shift within an organization. The reality is that testing, for whatever it’s worth, a lot of times is not viewed as the most popular thing. It’s almost viewed as something that just has to be done.
The best way to become truly appreciated in an organization is to change the culture. That takes time, and there are some frustrations, but the payoff is significant. Not only does it make the products or services that you’re building better for the consumer, but it also fosters a better environment within your testing organization.
CP: OK, great. Once again, this is Regg Struyk, and he will be speaking at STARCANADA April 5 through April 9. Is there anything else you want to say to any of the possible attendees?
RS: I'm looking forward to it. I hear that last year, it was a great event. I’m looking forward to meeting everybody at the event.
CP: All right, fantastic. Once again, that was Regg Struyk. He has a session called “Why Classic Software Testing Doesn’t Work Anymore.” Thank you so much, Regg.
RS: Thanks, Cameron.
CP: All right, have a good one.
RS: Take care.
With twenty years of commercial software development and testing experience, Regg Struyk has held many different positions, ranging from head of technical product management for Agfa HealthCare to, most recently, product evangelist for Polarion QA. Regg has helped develop several software testing tools, including test integrity, iTest, and Polarion QA. Dedicated to the domain of test management, Regg is continually analyzing testing trends and their potential impact on the discipline of software testing.