Better Test Automation, Metrics, and Measurement: An Interview with Mike Sowers

[interview]
Summary:

In this interview, TechWell CIO and senior consultant Mike Sowers details key metrics that test managers employ to determine software quality, how to determine a piece of software's readiness, and guidelines for developing a successful test measurement program.

Josiah Renaudin: Welcome back to another TechWell interview. Today I am joined by Mike Sowers, the CIO and senior consultant at TechWell. He'll be conducting two tutorials at Better Software West covering test automation, metrics, and measurement. Mike, thank you very much for joining me today.

Mike Sowers: Josiah, great to be with you as always.

Josiah Renaudin: Absolutely. First, just as a good primer, could you tell us a bit about your experience in the industry?

Mike Sowers: Sure. I've been really fortunate in my professional journey. I started as a co-op student right out of college, and my first testing job was as a hardware tester. Not to give away how old I am, but I used a paper tape program to program a pneumatic tester that pounded on a keyboard for keyboard reliability tests. People probably don't even know what paper tape is anymore.

From there I tried my hand at programming. I really wasn't very good at it, so I moved into software testing. I've had the opportunity to work with large, medium, and small companies as a tester, and also to be a testing leader across financial, transportation, software OEM, banking, and other industries. I tried my hand as a consultant for a while and worked with a lot of great Fortune 500 companies. Probably my largest role was as a senior vice president of QA and test, so moving from a co-op student to senior VP of QA and test was pretty exciting. I led an internationally distributed team of about 400 people across eight different geographies, so I learned a lot.

Now I am with TechWell, and I've got the opportunity to speak at conferences, and teach, and consult, and really help testers worldwide become the best that they can be.

Josiah Renaudin: Like I mentioned, metrics are something you'll be covering heavily in your tutorials, and as you just explained, you've been around the block and seen a bit of everything. What are some key metrics that test managers employ to determine software quality?

Mike Sowers: As we start to think about projects, we've got the beginning of the project, rolling through the project, and then post-project, so I think about metrics across that spectrum. The quantity and the quality of user stories is a metric, as is the degree of change. There's risk, whether it's product or project risk, or complexity. There are time-based metrics such as estimating the schedule: How long is it going to take to do test planning? How long is it going to take to do test analysis, test design, and test execution? How long is it going to take us to automate? How long does the automation even take to run? We have ops and environments, and we're trying to do continuous build and continuous integration.

There are a lot of quality metrics, such as the number of defects found at any given point in the lifecycle, by category and by severity, and how well we're containing defects: defect containment, or defect leakage from one stage to the next.
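(As a rough illustration, not from the interview: defect containment is commonly expressed as the share of a stage's defects that are caught in that stage rather than leaking to a later one. A minimal sketch, with hypothetical numbers:)

```python
# Sketch of a defect containment / leakage calculation (hypothetical data).

def containment_rate(found_in_stage: int, found_later: int) -> float:
    """Share of a stage's defects caught in that stage rather than leaking onward."""
    total = found_in_stage + found_later
    return found_in_stage / total if total else 1.0

# Defects attributable to the design stage: 40 caught in design reviews,
# 10 that leaked and were only found in system test or production.
caught, leaked = 40, 10
print(f"Design-stage containment: {containment_rate(caught, leaked):.0%}")  # 80%
print(f"Design-stage leakage:     {leaked / (caught + leaked):.0%}")        # 20%
```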

In the agile world, now we're talking about a team's velocity: how quickly can they implement user stories? There's the degree of technical debt that may be accumulating, and stories completed versus stories committed.
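(Again as a rough illustration, not from the interview: velocity and the completed-versus-committed ratio are typically tracked per sprint. A minimal sketch, with hypothetical sprint data:)

```python
# Sketch of velocity and completed-vs-committed tracking (hypothetical data).

sprints = [
    {"committed_points": 30, "completed_points": 24},
    {"committed_points": 28, "completed_points": 28},
    {"committed_points": 32, "completed_points": 27},
]

velocity = sum(s["completed_points"] for s in sprints) / len(sprints)
say_do = sum(s["completed_points"] for s in sprints) / sum(s["committed_points"] for s in sprints)

print(f"Average velocity: {velocity:.1f} points per sprint")
print(f"Completed vs. committed: {say_do:.0%}")
```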

There are always the retrospective metrics that come out of that. How are we doing at improving our process? How long does it take us to do our builds? Are we able to integrate our builds, testing, and deployment together in a cycle and continually refine that: a continuous integration, continuous testing, and continuous deployment process? There are lots of metrics to think about.

Josiah Renaudin: Something that is really interesting to me in software is this concept of readiness. Something I've been involved with a lot over the years is video games, for example. Way back in the day, when games came on cartridges, you had to make sure everything was buttoned up and ready to go, because you didn't have the opportunity to update it later; it was on a physical cartridge.

Today, because everything is so digital, you have some leeway. You can release a piece of software or a game with some issues and then update it later. When a test manager is determining a piece of software's readiness, how much leeway do they have? For example, if a manager determines a product is ready to go out the door, but soon discovers crashes or bugs that force an update while the software is live, can that significantly harm the manager's reputation, or even the future of that software, because people's first impression of it is, "Well, this is broken"?
