DevOps: Find Solutions, Not More Defects: STARWEST 2015 Interview with Andreas Grabner


In this interview, TechWell speaks with Andreas Grabner, a performance engineer who has been working in this field for the past fifteen years. At STARWEST 2015, he presented DevOps: Find Solutions, Not More Defects.

Josiah Renaudin: All right. We are back with another STARWEST virtual interview. Today we have Andreas Grabner. Andreas, thank you very much for joining us today.

Andreas Grabner: Thanks for having me here.

Josiah Renaudin: Yeah, it's really great. We actually, we did an interview last year, before you were here, and now you're doing another session. How has your conference experience been here so far? Like I said, you're a veteran at this point of the STAR conferences, so what's that been like?

Andreas Grabner: It's good. I think it's always good to see people coming back together and then sharing experiences, which I think this is the major thing here. Obviously on the one side, it's great to hear speakers talk about their experience, but I think what I see a lot is the networking that happens between the sessions, after the sessions in the evening, where you actually then go to somebody and say, "Hey, how did you solve this problem?"

I think this is the great aspect of this conference. On the one side, yes, sessions are great. You may agree or disagree with what you hear, but then you get some new input. Then you want to dig deeper and say, "This is one aspect that I just heard from this guy. Let's talk with him afterwards."

Josiah Renaudin: Like you said, testing is so often about solving these problems, the idea of coming together. There are a lot of issues that we might not be able to figure out on our own, but there's so many smart people thinking of innovative things. Who have you talked to so far that you've really felt like, "Man, I've learned something that I've never thought of before"? Or "I solved something that I never would have solved before"?

Andreas Grabner: I have to say I went to two sessions the last two days that were exceptionally well done and great. One was from Adam Auerbach from Capital One. He basically explained how they transitioned their organization from waterfall to agile and how they do continuous testing. He made some very strong statements that I think were a little hard to swallow for much of the audience, because he said they used to do manual testing. They don't do it any longer; they only do automated testing. Every manual tester that didn't progress with them to automated testing is either in a different role or no longer with the company.

They really transitioned over, made a very hard step. Then today I was listening to a speaker—I don't remember his name right now—who is an expert on Selenium. He gave some really cool best practices, very cool tips on how he's using Selenium in a very interesting, extreme way. I think a lot of people took a lot of things with them.

I actually blogged about my highlights, in case you're interested—the highlights that I took out of the sessions. These two sessions were, in my personal opinion, exceptionally well done and had a lot of great input. I think a lot of great discussions came out of them afterwards.

Josiah Renaudin: It's a great way for the virtual audience, for the people who couldn't be here, to be able to read your thoughts and try to get stuff out of that. It's something that you'll be able to use because a lot of what you do is, you identify problems. Then you solve the problems step by step and then you share that information with other people to see if you can collaborate and move forward with them.

When you go into a team, an organization, anything like that, or just a problem in general, what are your first few steps for identifying that, figuring out what's wrong here, and what can I do to move forward?

Andreas Grabner: Just to give you a little background about myself, I do work for a tool provider—and it's easy to see on my shirt. I really love them, and I'm not forced to wear this shirt. I'm not going to sell you anything, but I've been brought in a lot of times to help people that are in a critical situation because either their app has crashed or they have a big quality issue in general.

I've been working in the industry for fifteen years, so I typically know where to look first. Because from what I have experienced in my history, most of the problems that we see are caused by the same handful of technical or communication problems. I want to put an emphasis on communication, because that's behind a lot of the problems we deal with.

From a technical perspective, I know where to look. Typically when I analyze websites, I have my tools that I use—which are, by the way, all free tools. You pull up these tools and you look at the website. I immediately see there are too many images on there, too many JavaScript files; they're using an outdated version of jQuery, which I know has an issue.

On the other side, I'm doing in-depth performance diagnostics and quality diagnostics. I look into tools like Dynatrace, which is also available for free for testers and developers. Then I look at the key problem patterns, like too many database round trips, downloading too much stuff from external web services, or any coding issues that lead to too much CPU usage or synchronization.

Basically what I do, I try to look at my list of, let's say, the top five things I always see, and I typically hit all five. Then I sit down with the engineering team and the testers—I see them as a whole; testers and developers should all work together. Also ops, by the way. I tell them, "So, you're dealing with this situation. Here's the technical issue. How did we get there? How can we make sure this doesn't happen again?"

Typically, most of these things are metric-driven. I like to look at different metrics like I mentioned before: the number of JavaScript files, the number of database queries. I then educate testers to also look at these metrics, even though they might not be comfortable with them. Because typically, testers do functional testing that verifies that the login button works all right.

I tell them to use the tools that the developers also use and look at these metrics, so you can actually find many more problems with the same amount of testing—problems that would otherwise only surface later on. The other thing I tell testers: "It's your role to educate developers and tell them you keep finding the same problems. Maybe we should avoid them."

That's what I call shift quality left—shifting quality, the responsibility and the accountability for quality, to the left. And I think testers have a key role in there, because they see a lot of problems, and then they can compile them and say, "We constantly see these problems, so maybe we can do something to prevent them right from the start, which is at the developer's desk."

Josiah Renaudin: It also sounds like when you're coming in to help these people and you're solving these problems, you're not putting a Band-Aid over it and saying, "This is fixed for a little bit." You're educating these people so that moving forward, they can help themselves. They can solve their own issues. Is that correct?

Andreas Grabner: That's correct, because in the end, we want to do something sustainable. I'm not sure if I used the word sustainable correctly in this context, but that's what it basically is, right?

Josiah Renaudin: Yeah.

Andreas Grabner: We want to make sure that we don't have to manually find the same problems all the time. We need to avoid sitting in these war-room scenarios, and we need to work on these quality issues early on, also in order to avoid building up technical debt, which is another keyword that is floating around. I try to do this with the people that have called me in, but I also share it with a larger audience. STARWEST gives me the opportunity, but I also do a lot of meet-ups and user groups, where I actually see a lot of testers—testers now organize their own meet-ups in different cities.

Basically, I try to tell them, from my point of view, I think we as testers need to level up. We need to do more than just functional testing. Be comfortable, or get comfortable, with the tools if you're not comfortable yet. This is the world we live in right now. You're not just testers who test functionality anymore.

Josiah Renaudin: Yeah, it's almost you have to be well-rounded. You have to do more than one thing.

Speaking of STARWEST giving you opportunities, you had a session about DevOps. Can you give us the name of that session and talk about what you covered?

Andreas Grabner: Yeah, basically my session was—I put DevOps in there, and to be honest with you, the fact that DevOps is a term everybody knows about is one of the reasons. Really, I think the main message was: don't focus on finding more bugs, but help to find solutions. Basically, you are part of a team that needs to deliver software to an end user, and it has to be at the right quality. Otherwise, your business doesn't make any money. It doesn't help you, and I think this was something reflected by many of the people I've heard talking this week ...

Josiah Renaudin: Janet Gregory had a very similar kind of idea.

Andreas Grabner: Yeah, because it doesn't help if I find fifty new bugs every day. If I just measure myself based on that, it doesn't make sense. I think as a tester, I want to be part of a team that contributes to better quality. It means I need to sit down and, instead of creating fifty new test cases, I may just create ten more, but extend them and expand them and look at other things that I know will hurt quality later on.

Josiah Renaudin: Finding the solutions now instead of, you don't want to look back and say, "Oh, we messed up here, here, here. Now this is affecting all this." Let's look forward and say, “What are the risks?”

What's your opinion on risk management? Who in the team should be looking forward and saying, "We need to be considering this"? Like you said, we don't want to have all these bugs down the line. Do you think that's the entire team's responsibility?

Andreas Grabner: Even though it might not be applicable to 100 percent of the companies out there—I know some people say, "We're working in health care," or "We work in another business, and it doesn't apply to us"—I still think for most companies, it applies that a team should be responsible for the stuff that they produce.

First of all, as a team they're responsible for building the things that really matter to the end user. In this case, you need to sit together with the business analysts or product managers or product owners and really make sure we only build what's really needed. Then when we build it, we build it as a team. It means developers develop the code in combination with the testers that make sure that the tests are all written. They should do it in combination.

They also, as a team, need to figure out how we can measure whether the stuff that we produce is actually running well and being used well. I think of the concept of pushing metrics to the right—meaning in the development phase, building metrics into the code that later tell us how many people are actually using this. Because if I build something and nobody uses it anyway, I'm just building up technical debt. I'm building features upon features upon features that nobody really uses. The more code we produce, if nobody uses it, it's just ...

Josiah Renaudin: Yeah. When you teach these ideas to people, you're not thinking about bugs so much. You're thinking about solutions. Let's say you're working with one individual team. Have you found that teams often spread that identity, those concepts, to the rest of the organization? Do you ever go back and say, "Wow, this one team expanded, the testing team expanded, and now the entire organization is following this sort of mindset"?

Andreas Grabner: I think once the first success comes in—and typically there is success when you change the model—people are happy with success, so they spread the word. I see a lot of companies internally having content management systems or blogs where they basically highlight it as a success story.

Management encourages people to tell what they are doing and what they have done well. If that catches on, then other teams see, well, they are doing something different now. They're very successful. They can write about this and they get appreciation from the whole organization. We want to do this, too.

Typically what I see—and this is also reflected by Adam from Capital One, whom I mentioned; they did it the same way—is they started small with a team and then spread it out across the organization. They're doing internal education days—I don't think he's calling them DevOps Days—where they spread the idea to a larger audience.

I brought up DevOps Days because this is a general concept that I see more and more. DevOps Days are organized as public conferences, but I see more and more companies now copying that concept and running these conferences internally to really promote new ideas. Everything around DevOps—or whatever name you want to give it—is something that catches on, and basically my initial definition when I heard about it was: DevOps is the stuff that we did when we were a small startup and we were all in a room. We were all testers, developers, and operations in one ...

Josiah Renaudin: Yeah, everyone working together.

Andreas Grabner: Everyone working together, and DevOps basically tries to tell us what works well in a small organization should also work well in a large organization.

Josiah Renaudin: Absolutely. Yeah. Once you expand, it doesn't mean you should stop doing what was working before. Like you said, it's been great to talk to you, because you had mentioned collaboration being at these conferences, and the virtual audience is now a part of that. It's really cool that you can share everything that you've learned throughout the entire conference on this last day and share ideas. If anyone wants to reach out to you, ask about your session if they weren't able to attend, what's the best way of doing that?

Andreas Grabner: I think either the blog—I don't know if they can see the blog on my shirt. I'm blogging there regularly. I also have a Twitter account, @GrabnerAndy. That's where you'll find me online as well. But the blog, I think, is the easiest, and there you get all the links, including the tools that I'm using. All the stuff that I do is actually done with free tools. Feel free to use them as well, or use other products in the same space. There are a lot of great products out there.

Josiah Renaudin: All right. Well, fantastic. Thank you very much for talking to me today. We'll be back soon with more interviews.

Andreas Grabner: All right. Bye bye.

About the author

Andreas Grabner is a performance engineer who has been working in this field for the past fifteen years. Andreas helps organizations identify the real problems in their applications and then uses this knowledge to teach others how to avoid the problems by sharing engineering best practices. He was a developer, tester, and evangelist for Segue Software, builders of the Silk testing product line. Later, Andreas joined Dynatrace, where, for the past eight years, he has helped organizations worldwide test applications, better understand the technologies behind their apps, and improve the entire development process. He shares his expertise on
