e-Talk Radio: Rothman, Johanna - Test Management 101

Summary:
In this "Test Management 101" discussion, Carol Dekkers and Johanna Rothman talk about the role of the test manager; techniques for assessing the quality of the testing process; tips for new test managers; and "good enough" quality.

In this "Test Management 101" discussion, Ms. Dekkers and Ms. Rothman talk about the role of the test manager; techniques for assessing the quality of the testing process; tips for new test managers; and "good enough" quality.

TEXT TRANSCRIPT: 25 January 2001

Announcer: Welcome to Quality Plus e-Talk! with Carol Dekkers, brought to you by StickyMinds.com, the online resource for building better software. This program will focus on the latest in the field of technology. All comments, views, and opinions are those of the host, guests, and callers. Now let's join your host, Carol Dekkers.

Carol: Welcome to Quality Plus e-Talk! with Carol Dekkers. I am Carol Dekkers. I am glad that you have joined us. If you are listening through the Internet, I hope that your reception is good. If you are listening to us live in Phoenix, that's great. If you happen to catch this in a few weeks as a downloadable audio stream, welcome to the show. I am very pleased today to continue with--one of my feedback email people told me that we keep getting the cream of the crop of guests. And I have had people say, "How do you get such great guests?" Today is no exception. I have with me Johanna Rothman, who is a colleague, a good friend, and probably one of my favorite presenters at software conferences. I would like to welcome Johanna to the show. Welcome.

Johanna: Thank you so much, Carol. It is so nice to be here.

Carol: I am going to tell you a little bit about my company, a little bit about what I do, and then I will introduce you to Johanna. I am sure that you are going to be very pleased to have joined in, because she is going to give us a lot of information about Test Management 101, which I am hoping to learn a lot about. As for what I do in my company--I am president of Quality Plus Technologies, which helps companies build better software through measurement. One of the things we do is implement function point analysis, defect tracking, and a lot of different measures that might fit the goals and questions of your measurement program. I would like to offer you the same thing that I offered last week: If anyone would like a New Year's calendar that covers the Capability Maturity Model Integration (CMMI) project--all five of the new maturity levels from the Software Engineering Institute--together with where function points fit into that process, send me an email at [email protected], and I would be happy to pop one in the mail for you.

I would like to thank our sponsor, StickyMinds.com, the online resource for building better software. It is wonderful to have a sponsor who has been able to make this broadcast available to many of you. Without further ado, I would like to really introduce and get into the guts of our show. I would like to introduce Johanna Rothman, who observes and consults on managing high technology product development. She works with her clients to find the leverage points that will increase their effectiveness as organizations and as managers, helping them to ship the right software product at the right time, and to recruit and retain the best people. She is a frequent speaker, she is an author, she is a presenter. She has written articles for Software Development, Cutter IT Journal, IEEE Software, CrossTalk, IEEE Computer--I could probably go on and on, but that would fill our show just talking about Johanna. I think we really want to get some of her advice. One of the things that I picked up off her Web site, and that I think really encapsulates one of the things I admire about Johanna, is this: "My philosophy is that people want to do a good job. They just don't always know what they're supposed to do or how to do it." That really encapsulates one of the things that Johanna does best, which is to demystify some of the technical jargon, talk to people one on one, and make these really hard technical ideas come to life. So, with the topic today of Test Management 101, Johanna, I am glad you are here with us.

Johanna: Thank you so much Carol. I am really glad to be here.

Carol: Now, I'm not a tester. So, I am probably one of the people who can gain the most from Test Management 101. Perhaps you can lead us into the topic by telling us a little bit about what Test Management 101 really means.

Johanna: Well, the first thing that I think of is that a lot of people are called test managers, or quality managers, or quality engineering managers, and I think the real key is to figure out what they are paying you to do. So many people think their job is to make sure the software ships, or to make sure it ships without any defects, or to stop shipment if for some reason the software does not meet whatever they think of as ready. I see a big disconnect between what people think their jobs as managers are supposed to be and what their companies want from them. It gets very disconcerting, and it gets to the point where you start wondering, "Am I doing the right thing in my company?" Sometimes the best thing is to check with your boss and say, "I would like to either, you know, give you a lot of information about the product under test, or be able to stop shipment--and here is why I want to be able to do those things." I am not a big fan, actually, of stopping shipment. I think that is a decision best left to senior management. But I do think in terms of being able to talk about what kind of information I have about the product under test--and if I haven't been able to run any tests, that is information too. So: What do they pay me to do? How do I know what it is that I am supposed to do? How do I then organize and manage the work, so that I can get done the thing I am supposed to be getting done? The first piece is "What am I supposed to be doing here?" I think that, especially as new managers, a lot of us get promoted up from the technical ranks; we started off as testers or as developers or whatever. We come in, we get promoted into the first-line management position; of course, no one ever teaches you how to manage anything. God forbid that should happen. So, we do not actually know. Maybe we do the things our previous manager did, or maybe we do the things our previous manager did not do. I think the key is to say, "What is my job here? How do I organize the work to be done, and organize and manage the people, so that the work does get done?"

Carol: Having a traditional development background, I would assume, as some of our listeners might, that testers, or test managers, are those people who sit off in a little room, and when my software is finished--when I have finished putting all of the creative touches on it, when I have finished doing all of the programming--I throw it in that room, quickly close the door, and hope that they can find all the defects. But I get the sense, from reading your articles, that is not really what test management is all about.

Johanna: I don't think so. I mean, that's one way to do it. When I tried to do that once, very early on in my career, I ended up needing a lot of Tums because I never quite knew what was happening. I could not quite plan what I was going to do. I never felt as if I understood where all the defects were, never mind told the developers so they could fix them. So, I changed very quickly. I think I did that for about three weeks and then decided this was not working. I think the best technique for being a test manager is to say, "My job is to provide information about the product under test." Now, for me the product under test actually starts at the very beginning of the project. So, I think there is a product to test as soon as you have requirements. Even if they're three bullets in an email message, you still have requirements, and there is a project plan. In fact, if you haven't even tested whether or not the project plan is at all reasonable, you have not done the very beginning of testing. Now, you are not going to need a lot of testers at the very beginning of the project, but you, as a test manager, start testing where things are in a project, so that by the time you have designs and an architecture, you have an idea of where you could put people. I also find that having testers sit in on design reviews, or at least design discussions, makes the product better and helps the testers figure out "Where can I test this thing? Where should I be looking for defects?" If the developers are having trouble discussing one piece of it, how do I test that piece to know that they actually did figure it out in their design review and are not really stuck on something? So, I think that testing actually starts at the very beginning of the project, from the time that you have a discussion about what this project should be, all the way through until the end. The more testers participate--analyzing or verifying the requirements up front; taking a look at the architecture and the design; participating in code reviews, if they know how to read code; developing tests during the whole front end of the project; and then running the tests as soon as there is any code that can be run--I think that is how test managers, especially, can provide significant value to their companies and make it a "win-win" job. It is not something they need the Tums for.

Carol: I think that is going to take a bit of a paradigm shift, especially for those listeners who are in companies where the testers have not been involved up front. They're always given something at the back end. I have always wondered: How can you create tests for something when you do not even know how it is supposed to run, or do not know what the requirements are?

Johanna: Well, right, and the interesting thing there is that every product has requirements. The key is how you discover them. I wrote an article with Brian Lawrence a couple of years ago called "Testing in the Dark," I think that is the name of it. It was published in Software Testing and Quality Engineering. It is on StickyMinds and it is on my Web site. It is in a bunch of places. One of the things that Brian and I suggested is that you take at least four or five hours, or a day or two, and ask people what the requirements are. Who are we developing this product for? Who are we developing this product against, so that they can't use the system--the disfavored users? Who do we not necessarily care about--we are happy if they are using the system, but we are not designing the product for them? And then, what attributes does the product have? How fast is fast? How reliable is reliable? Is there any way to describe the attributes of the project--I should say the product? What makes the product sellable to the maximum number of users? What makes it sellable to the minimum number of users? What makes it completely unsellable? Is there anything that, if we left it out, we wouldn't have a product? So, we go through that stuff and then talk about the functionality. Even if you don't have a built product yet--you don't have an executable of some sort--you can talk about what the requirements are. And it turns out, I at least find, that the user-attribute-function technique of eliciting requirements, along with context-free questions, is a fun technique to use. Developers respond very well to that, as do project managers. People say, "Oh, we are designing this thing for the kinds of people who...." And you get this real feeling of "we are in this together, and we are trying to solve this particular problem."

Once you have any software that is built at all, you can do a whole lot of exploratory testing. James Bach has done a lot of work in the area of exploratory testing. He has published a bunch of articles that I find fascinating. A lot of us testers got into testing because we were curious, because we were skeptical, because we had this intellectual curiosity, especially: if I push on this thing, what is going to happen? You can use that curiosity in your exploratory testing. So, you can say, "I'll do a little testing for a while; I will explore this product for an hour or two and see what areas I think would be ripe for finding defects." So, between looking for requirements by talking to people, whether or not you actually have anything written down, and then starting to do some exploratory testing--even if you did not get a chance to look at the requirements and the thing did get thrown over the wall--you at least have a chance of figuring out how the product works.

Now, the later you get brought in on the project, of course, the less likely you are to find out how everything is supposed to work, and so your testing is not going to be as good as it could have been. But even starting in the middle, ask: Who am I designing this product for? Who are the developers designing the product for? What kinds of things would those users do? What kinds of crazy things would they do? What kinds of completely off-the-wall things would they do? Those are the things that help you figure out how to test the software, even if you have not been involved from the beginning and you don't have any clue what it is supposed to do.

Carol: Right. Now, we have had some questions. Craig Byers, a principal at American Management Systems Inc., sent me some questions for you that I will throw out, and we can sprinkle them throughout our broadcast. I would like to give the toll-free number in case we have some listeners who have the guts to step out of their shells and actually pose a question to Johanna. I have never known you to ever bite anybody. I have never bitten anybody. I think the callers would be safe calling in, don't you, Johanna?

Johanna: Absolutely. You know I'm in my office, so even if I spit it is only going to be here, not a problem.

Carol: And I am at a completely different place, so anybody who calls in is going to be safe. The toll-free number is 866-277-5369. Again, that's 866-277-5369. I would like to ask you a little bit about some measures that you might recommend for assessing the quality of the testing process and of the test cases or scripts. As soon as we come back from these short messages, I will give you a chance to answer that.

Johanna: Okay.

Carol: We will be back with more of Johanna Rothman and Quality Plus e-Talk!...Welcome back to the show. I am Carol Dekkers, and my guest this week is Johanna Rothman. We are talking about Test Management 101. Before we went into break, I posed a question sent in by one of our listeners: What measures would you recommend for assessing the quality of the testing process and of test cases or scripts?

Johanna: Well, I measure a few different things. One is the test cases--how good are the test cases? You also need to understand that at some point the stuff you measure about them is not going to be relevant. Let me try to explain what it is I measure, and then why I think the measurements start to fade in importance. I measure the defect find and close rates by week, especially over a long project. For a very short project, I have been known to measure them by day, but those are six-week projects. If any of you are doing e-projects, then you know how short these things are, and you do want to measure stuff by day to get a better picture. But I generally measure by week. I measure the defect find and close rates normalized by testing effort. So, if I have, you know, five people in my testing group and for some reason they are not running tests or not developing tests for a given day or a few days, then I have to normalize for that; otherwise, I do not know how well I am finding defects.
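
To make that normalization concrete, here is a minimal sketch in Python; the weekly numbers and field names are invented for illustration, not taken from the show or any real project:

```python
# A minimal sketch of defect find/close rates normalized by testing
# effort, in the spirit of what Johanna describes. All data is invented.

# (week, defects_found, defects_closed, tester_days_of_effort)
weekly_data = [
    (1, 12, 4, 25),
    (2, 18, 10, 25),
    (3, 9, 14, 15),  # testers pulled away for part of this week
]

for week, found, closed, effort in weekly_data:
    # Dividing by effort keeps a short-staffed week from looking like
    # the product suddenly got more (or less) stable.
    print(f"Week {week}: {found / effort:.2f} found/tester-day, "
          f"{closed / effort:.2f} closed/tester-day")
```

Note how week 3's raw find count drops by half, while the normalized rate drops far less; that is exactly the distortion the normalization is meant to catch.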

Test scripts and test cases aren't only good if they find defects, though. They are also good if you can run through things that the product is supposed to do and they do not find anything. Now, they have not given you the same kind of information as when you find defects, but you want to be able to explore different areas of the product and test different areas of the product. I tend to work a lot on embedded kinds of systems, applications that are close to the operating system, and there I like to have a whole suite of regression tests. On the other hand, I have also worked on a lot of--a few, I should say--really short, time-to-market-driven projects where we didn't necessarily want to run a lot of regression tests, because we wanted to keep exploring the new development that was coming into the product on a daily basis. So, we had a lot of manual tests in the second case, where we had a very short project, and a lot of automated tests for the longer projects, where I wanted to know that we had not broken anything over a long period of time. So, if you aren't finding defects with your tests, it doesn't mean that your tests are bad. It means that you are not finding defects with your tests, and you might need to consider an alternative technique for finding tests--or I should say, for finding defects. I think that is really important. It is also extremely difficult to measure how good a tester is by the number of defects or the kinds of defects they report. Some testers take a very systematic approach: they start from what they think of as the beginning, and they walk through all of the menus and through the product in a way that makes sense to them, checking for things the product is supposed to do. Eventually, they check for things that the product is not supposed to do. That is a very systematic approach, and it is the approach I tend to take.

There are other people, though, who don't think the way I do and they do what seems to me incredibly random stuff. And then they find really good, or I should say, really bad defects. Those big bad defects that would prevent you from shipping or would be an incredible embarrassment to you while shipping.

Carol: Right.

Johanna: So, it's not that one kind of testing is better than another. Your best bet is to have a mix of people, some of whom walk through the product systematically and some of whom are incredibly good exploratory testers. And you want a mix--looking to see how much it is going to cost you, of course--of some form of automated regression tests along with newly introduced tests. I find that when I work on a project, especially as the test manager, I track the number of times requirements change over the course of the project. The way I track them is as major and minor: it is a major change if it affects an essential requirement, a minor change if it affects only a desirable requirement. So, I try to look at how essential this is. Is this a constraining attribute of the product? Do we not have a product without this, or is this something that will give the user much more satisfaction if we include it? And I track major and minor requirements changes over the time that I am testing, because I want to know: Do I have to add any tests to my current suite, whether they are automated or manual? For how many things do I need to keep adding tests as I go? So, I track that, and I track the number of tests that I have planned to run, that I have actually run, and that have passed. Because just because you can run a test doesn't mean that it passes.

Carol: Right.

Johanna: And I want to know how far off am I? Did I plan to run 35 tests, did I plan to run 1,000 tests? Am I increasing the number of tests that I plan to run on a weekly basis because I am uncovering more and more of the product, or am I increasing the number of tests I want to run because the requirements are changing and we are already past something that might be called feature freeze?

Carol: Interesting.

Johanna: So, where are we in the project and how do I know what I need to do for more or less testing?
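
A minimal sketch of that week-by-week bookkeeping--planned versus run versus passed tests, alongside major and minor requirements changes--might look like the following; all of the data and field names are hypothetical:

```python
# Hypothetical week-by-week test tracking, sketching the measures
# Johanna describes: tests planned vs. run vs. passed, plus major and
# minor requirements changes that may force new tests.

# (week, planned, run, passed, major_changes, minor_changes)
weeks = [
    (1, 200, 150, 120, 0, 3),
    (2, 220, 180, 160, 1, 5),  # planned count grew after a major change
    (3, 220, 210, 205, 0, 1),
]

for week, planned, run, passed, major, minor in weeks:
    print(f"Week {week}: ran {run}/{planned} planned tests, "
          f"{passed} passed ({passed / run:.0%} of those run); "
          f"requirement changes: {major} major, {minor} minor")
    if major:
        # A rising planned count late in the project can signal that
        # requirements are still changing past feature freeze.
        print("  -> major change: check whether new tests are needed")
```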

Carol: Right. We will be back with more of Johanna Rothman and her advice on measures, and more of Test Management 101, when we get back from these short messages...Welcome back to Quality Plus e-Talk! I am Carol Dekkers. I am president of Quality Plus Technologies. If you are looking for a schedule of our upcoming shows, you will find that we do have the cream of the crop of guests coming on this show. You can find it at StickyMinds.com, www.StickyMinds.com, and you can also find it at my home site, which is www.qualityplustech.com. We are back with Johanna Rothman, who is a frequent speaker and author on managing high technology product development. She has written articles for just about every industry journal that is out there. She has a really exciting thing coming up: she is the chair of the Software Management Conference, which is going to be held together with the Applications of Software Measurement Conference that Software Quality Engineering is going to be running. I am speaking and doing a tutorial at that conference, and Johanna is the chair of the software management side of the conference. It runs February 12th to 16th in San Diego. If you would like more information about that, you can go to www.sqe.com.

And before we went into the break, we were talking about Test Management 101. Johanna was talking about the measures that she uses on projects to figure out whether testing is good. One of the questions that was hanging with me while she was talking was about something that used to be called be-bugging. Somebody came up with this great idea that they could measure the effectiveness of testing if they purposely injected bugs--purposefully injected defects--before the software got to the testers, and then, based on how many of those bugs were actually found, they could tell whether the testing was, you know, 40% efficient, 20% efficient, or something like that. Johanna, what is your view on be-bugging?

Johanna: I think it stinks. How is that for blunt and direct? I have never felt comfortable with purposefully putting defects in. The developers do a good enough job putting defects in unintentionally; putting them in intentionally seems to me to be a waste of time. Trying to find them is not necessarily the best use of the testers' time, and I am not sure it is a good indication of how good the testers are if they find things like that. A lot of times when people do this be-bugging stuff, this putting the defects in purposefully, they put them in places that black-box testers especially cannot easily find. They make them strange boundary conditions. They make them very strange or abnormal uses of the product, or put them in exception handling. And testers do not typically find stuff like that easily. If you are not sure how good your testers are, it might be a better bet to actually outsource some testing in parallel with the current testing team and see how good your testers are. But I would never recommend that people put defects in to see if their testers are any good.

Carol: It reminds me--and we were talking just quickly at the break--of the Dilbert cartoon where Dilbert is going to be measured and rewarded for the number of defects that he finds, so he is busily injecting defects so that, what was it, a minivan, that he--

Johanna: While he says, "I am going to build me a minivan this afternoon."

Carol: That's right. But I absolutely agree with you. I wonder sometimes whether products get shipped with some of these intentional bugs put in.

Johanna: Well, I wouldn't be surprised. I mean, we ship with enough unintentional bugs. Why would we want to leave the defects in? If you cannot necessarily remember where they are, or if in the course of regular updating and fixing of the code you have somehow papered over one, you are not necessarily going to see where it is that you put it. It seems to me to be of a dubious--to me it is of really dubious value. It does not seem to buy you anything, and the risk potential is very high.

Carol: Right. We have a caller. Would you like to take a caller, Johanna?

Johanna: Absolutely.

Carol: Okay. Pam, you are online.

Johanna: Hi Pam, and welcome to the show.

Caller: I was interested in the topic of Test Management 101 when I saw it on the Web site, because I am about to face that in my professional career, just being moved in as a test manager. I know testing, I have done testing, but I have really--this will be kind of an important move for me, and I do not have as much experience in actually managing the people and managing the overall process. I wondered if you could give me some advice about that, and I will be happy to take my answer off the air.

Johanna: Well, there are a few things that I suggest. One is, of course, hire the most appropriate people for the job. You want to make sure that you are hiring people who give you a range of skills and who are appropriate for testing the software, or the whole product, that you have. But in terms of the daily and weekly management, I think one of the most effective management tools is the one-on-one: the meeting every week between the manager and the employee, where the manager says, "So, how is it going?" and the employee says, "Well, here is what I'm doing, here are the issues that I am having trouble with, and here are the things that I want to investigate." It is a way to give and get feedback on a weekly basis. You don't have to wait for a performance evaluation. You don't have to wait for some kind of a quarterly review. You have an easy, conversational discussion about how things are going for you as the employee and for you as the manager that week. I find one-on-ones incredibly valuable. The managers I know who do one-on-ones have no trouble writing performance evaluations, and the people who work for them are much, much happier. So, I think that is a huge piece that you can really capitalize on.

I think the other thing is to just assume that people know how to do their jobs. A lot of us came up through the technical ranks, and it is so easy to say, "Oh, I know enough about this, I can give people help." Well, your people might not want help, especially if you are a new manager. They might want to show you what they are capable of doing. So, assume people know how to do their jobs, make sure that they have the tools they need to do their jobs, and then check in with them and ask how it is going. That is one of the other points of the one-on-one.

The one last piece that I wanted to mention was to make sure that you treat people the way they want to be treated. Not everyone likes public recognition. Not everyone likes private recognition. So, understanding what people want out of their job, and figuring out a way to treat them appropriately, is, I think, another huge piece of being a good test manager. Especially since testers tend to be the ones who come in at the end of the project, they might not get the same glory under normal circumstances as the developers do. Figure out what it is that these people really want out of their jobs, how they want to be rewarded and recognized, and then figure out a way to do that.

Carol: And I think that is good advice for any manager.

Johanna: I think so too. But a lot of organizations, especially, put test management and the testing organization in a support role or a services role. If you have come out of development and now you are in a services organization, it feels really strange, so that advice is especially helpful there.

Carol: I think that is good advice. I think that Pam will do well. One of the things that managers often do not do is ask questions because we have this perception that if you are in management, you know how to manage people. That is not necessarily the case. I think we grow into the role and we can really learn from the people who are underneath us, and we are not necessarily a boss, we just happen to have that position.

Johanna: Well, there is a real difference, I think, between being a boss in the typical sense and being a really good manager. I think that being a manager means being a colleague as well as being a supervisor; it is just that the skills I bring to this collegial relationship are different. My job is to provide leverage. The way that I provide leverage is by finding out what people are doing, giving them enough information to do their jobs, not withholding information that they need, and treating them with respect, treating them as human beings. I find that is just how it works for me.

Carol: Now, how is Pam going to overcome the tendency, if she has come up through the testing ranks and has been a tester, to jump in when her people start floundering? What kind of advice could you give her for when she wants to just go in and do it herself?

Johanna: Oh, the first thing is: don't do that. The second thing is: understand what kind of a problem you have. There is a problem here if you think you can't get the testing done with the people you have. But first you need to understand whether there is a problem at all. Is this your problem, or is it your test group's problem? One of the things you want to find out is: Are people comfortable with their progress? Do they think that they are going to finish the stuff that they have to do on time? All that. Make sure that you really do have a problem. If someone comes to you and says, "I don't know how to test this thing," the best thing you can do for them is to coach them through a way to figure out how to do it themselves. If you say to them, "Oh, you do this first, that second, and this third," they have not learned anything.

Carol: Right.

Johanna: But if you say to them, "Let's talk about how you can approach this particular thing"...Say you are testing the performance of a Web site. Say you have a junior tester who comes to you and says, "I don't know how to test the performance of a Web site." So you say: when you are testing performance, there is kind of the fixed performance and then there are all of the variables. Let's talk about what the things are that are fixed that would affect performance, and what all the variables are that would affect performance. You can then go from there to the next set of questions and get them to really start thinking it through, so they have a chance of being able to do this again without you.
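
As a concrete, entirely hypothetical version of where that coaching might lead, a junior tester could start by holding the request fixed and varying one factor, the number of concurrent users; the URL and load levels below are placeholders:

```python
# A tiny, hypothetical probe of Web site performance: hold the request
# fixed, vary one factor (concurrent users), and watch response times.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"  # placeholder site under test

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

for users in (1, 5, 10):  # the one variable; everything else stays fixed
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(timed_request, range(users * 3)))
    print(f"{users:2d} concurrent users: "
          f"avg {sum(timings) / len(timings):.3f}s over {len(timings)} requests")
```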

Carol: Right. Excellent advice. And we will be back with more of Johanna Rothman and Test Management 101 after these messages...Welcome back to Quality Plus e-Talk! I am Carol Dekkers, and my guest this week has been Johanna Rothman. I cannot believe that we are in our final segment before we wrap up. I have a million questions to ask Johanna. The thing that I really like about you, Johanna, is that you make things easy to understand.

Johanna: Thank you very much.

Carol: You're not, kind of, bowling me over with these huge words and things. One of the things that I am dying to find out about is that you mentioned something about good enough testing right at the very beginning of the show, and that maybe a tester could prevent a product from shipping. I am just dying to know: What is good enough testing, what is good enough quality, and how would anybody ever decide that?

Johanna: That's a big question, so let me break it down into a couple pieces. First of all, good enough quality is really that level of quality appropriate for the product. Ed Yourdon and James Bach have done a lot of writing about good enough quality. The way I have taken that is that I have tried to say that there are a whole bunch of project requirements and project constraints for any given project, especially in the commercial world. There is the ship date, there are the kinds of features that you want to put into it, there is the number of defects and the kinds of defects that you are willing to ship with. You have a bunch of constraints. Do you ever have enough people? Probably not. Can you create the right work environment? Probably. So, there are a bunch of things that help you figure out what are all the requirements and constraints on your project. And the good enough quality people, of which I consider myself part, say let's talk about what makes sense. How fast can we ship it and still have something that people want to buy. When is this thing of value to a sufficient number of people? And the way that I actually find out if something is good enough, is that I use something called Release Criteria or Ship Criteria depending on how you think about it.

For me, release criteria are what you get when the test manager and the project manager and the entire project team--hopefully the developers and the testers and whoever else is on the team, the writers, anybody else--get together and say what is critically important to this project. What do we absolutely have to do before we ship? Those become the criteria you decide against. Now, I have worked with some startups who said, "We have to ship something on July 1st. It does not have to work very well. It has to have at least this feature in it, but if we don't ship it July 1st, we don't have a company July 2nd." Okay. So, July 1st is the one and only release criterion. But there are other companies where you say, "I don't want to alienate the current customer set. I want to make sure that I have good enough performance, which I am going to measure in this particular way, for the next set of customers, and I have a couple of other things that I absolutely have to do. Anything else, in a sense, is icing on the cake. I would love to have it by the ship date, but if we don't have it by the ship date, I can live with that." There are some companies that say, "We don't even care about the ship date"--though there aren't too many of these right now--"We will not ship this thing until we have this amount of coverage from our tests (we are actually going to measure basic test coverage). We have to know that it has so few defects that it is safe for us to ship this product." So, release criteria help you define--at least for me, they help me define--what I mean by good enough.

The other thing that I like about release criteria is that they help everybody on the project, and everyone in the company, understand: What are our goals? What are we aiming for? So you, as a test manager especially, never have to be the person who says this product is not ready to release. The project manager becomes the person who says, "We have not met our release criteria." Now, you can still choose to release even if you have not met your release criteria, but at least you are doing it honestly and openly. You are not saying, "Well, we sort of made that, didn't we?" No. No one's hands get tied behind their backs. It is not one of those "yeah, yeah" groupthink things. We are saying, "Yes, we met the criteria," or "No, we haven't, and we are ready to take the risk. Even though at the beginning of the project we thought that we had to meet these criteria, we haven't, and we are willing to take the risk of shipping anyway." And then it is a business decision.
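
One lightweight way to make release criteria that explicit is to write them down as a checklist that anyone on the project can evaluate. Here is a minimal sketch; the criteria themselves are invented examples, not ones from the show:

```python
# A minimal sketch of written-down release criteria, in the spirit of
# what Johanna describes. The criteria are invented examples.

release_criteria = {
    "July 1st ship date still achievable": True,
    "all essential features implemented": True,
    "no open showstopper defects": False,
    "regression suite passes on supported platforms": True,
}

unmet = [name for name, met in release_criteria.items() if not met]
if unmet:
    print("Release criteria NOT met:")
    for name in unmet:
        print(f"  - {name}")
    print("Shipping anyway is now an open, honest business decision.")
else:
    print("All release criteria met.")
```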

Carol: It sounds similar, to me, to building a house: when can you have that final inspection and actually move in? I have been on projects, and consulted to companies, where release criteria were never written. They were always assumed, and so the users would always say, "Well no, no, that is not what we meant by 'ready.'" And the developers would say, "What do you mean, we have to do all these other things yet?"

Johanna: That's right.

Carol: I think that is absolutely, incredibly good, down-to-earth advice: to have release criteria that are formalized, written, and agreed to at the start of a project.

Johanna: Yes. You can even still start the project and be, you know, part of the way down the road before you have them. But I strongly recommend that you at least have release criteria before the developers finish writing their code. Because if you haven't, they will do what they think is the right thing. All the developers I know have a highly developed sense of integrity and moral obligation to the product, and they want to do the best job they know how. But sometimes the best job is doing a little bit very well. Sometimes the best job is putting in a whole bunch of stuff that may or may not work very well, but the users are going to live with it anyway. And sometimes the best job is just making sure that you haven't screwed anything up since last time. So, the best job is different depending on what kind of a product you have, where it is in its lifecycle, all of that. You need to be thinking about all of that, preferably from the beginning of the project, but absolutely before you begin testing. Because if you think about it after you have been testing, you're lost.

Carol: Right. And we will be back to finalize things and sum up with Johanna Rothman after these short messages...Welcome back to Quality Plus e-Talk! This has been a great show, and Johanna and I could probably sit here and talk for hours and hours, but then we would be cut off the air and nobody would be listening. We could easily be talking about this stuff for the next five hours. I would like to invite any of you listening who are in the San Diego area--or thinking that you would like to be in the San Diego area in February--to join us for the Software Management Conference and the Applications of Software Measurement Conference. It is a dual conference: you can sign up for one and attend any of the sessions in the second one. Johanna is the chair of the software management side. I believe that you are doing a presentation there?

Johanna: Yes, actually, I am doing a panel with a few other people called, "How To Tell When Your Project Is In Trouble."

Carol: Great.

Johanna: So that should be good.

Carol: And I am doing a one-day tutorial on function points for Web-based software, and I am also doing a feature presentation called Extreme Programming Meets Measurement, which brings me to next week's show. We are going to be having two of the three illustrious "extremos" of the extreme programming paradigm: Kent Beck, who has written a book called Extreme Programming Explained: Embrace Change, and Ward Cunningham, who has also written books and who was really the originator, the father of extreme programming. Ron Jeffries, who has also written a book--we are not sure if he is going to be on or not. But the two of them, or the three "extremos," are really like the three tenors of the opera world, and we are going to have them on our show. So, please join us next week when we talk about Extreme Programming Meets Measurement: What should you measure on a software project that is done using extreme programming? That will be quite exciting. We also have coming up, at the end of February, Tom DeMarco, who is going to be talking about risk management. Tom has just recently published a novel, which I am sure we will bring into the discussion. Jim Highsmith is coming up in March to talk about adaptive software development. David Zubrow, who is a senior member of the technical staff at the Software Engineering Institute, will be talking about the capability maturity model advancements. And I have Bret Pettichord, Elisabeth Hendrickson, and a number of other surprise guests coming up. So, I hope you will join us. This has been great, Johanna. I have really enjoyed talking to you. I hope people will come up to us at the conferences we will be at in the future and just say, "I listen to you" or "I like what you are doing" or "I have some questions."

Johanna: That would be great.

Carol: So, thank you for spending the last hour with us and sharing your expertise and your knowledge. Do you have a final word of wisdom that you would like to send our listeners away with?

Johanna: Well, I think the thing, especially when you are a new test manager or you are trying to figure out how you fit into the organization, is to take a look at where you are and ask: "What are the challenges I want to bite off now? What do they pay me to do? What are the problems I see? How do I make those two things intersect? Can I do some quality management? Can I use the processes that I know about, and any of the process metrics, to change how we do development or how we do projects? Can I change what we do for testing? Does what we do make sense? Do we have the right people? Am I measuring any of the right stuff?" All those things. Really think about: Where am I now? Where do I want to get to? And what are some of my alternatives for getting from where I am to where I want to be? I think we have a real opportunity in the software industry to make a huge difference in how we produce and test products, and I would like us all to say, you know, "I would just like to get 5% or 10% better with every release." You do that for a few releases and you have significant gains. So, that is the thing that I would like to leave people with: How do we start from where we are, and how do we get to another place that is even better?

Carol: And in terms of your writing and your articles, I know that you have written in Software Quality Professional--people can go to sqp.asq.org and download things--and they can take a look at your Web site, which is--

Johanna: It is www.jrothman.com. I have a papers page with all the stuff that I have written that I am able to publish.

Carol: I appreciate you being here. I appreciate StickyMinds.com being our sponsor. And I appreciate all of you who are listening, whether through the Internet, on the radio waves, or in downloadable archives in the future. Thank you very much for listening, and we will be back next week with more of Quality Plus e-Talk! with Carol Dekkers. Have a great week.

Copyright 2001 Quality Plus Technologies and Carol Dekkers. All rights reserved.
