Stop the Bad MBOs

Fight the Good Fight Against Abusive "Management by Objectives"
Summary:

Some managers use "management by objectives" effectively; too often, however, it is used destructively and undermines the team. In this article, Rex gives the clarion call to stop the bad MBOs and offers three case studies of what not to do.

Introduction
Do you want to introduce a new objective to your organization that will help everyone who works for you? How about this: Stop destroying your team with bad MBOs.

MBOs, or management by objectives, are commonly used as part of yearly performance reviews. In theory, the perfect set of objectives defines, in a quantified way, exactly what the employee is to achieve over the coming year. At the end of the year, in the annual performance review, the manager simply measures whether, or to what extent, the objectives were achieved.
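
To make those mechanics concrete, here is a minimal sketch in Python of a quantified objective being measured at review time. It is purely illustrative; the Objective class and the regression-test numbers are assumptions, not anything from the article.

```python
# Illustrative only: one way to record a quantified objective and score it
# at review time. Names and numbers are invented for this sketch.

from dataclasses import dataclass


@dataclass
class Objective:
    description: str
    target: float        # quantified goal agreed at the start of the year
    actual: float = 0.0  # measured result at review time

    def achievement(self) -> float:
        """Fraction of the target achieved, capped at 100%."""
        return min(self.actual / self.target, 1.0) if self.target else 0.0


# Example: an objective to automate 200 regression tests during the year.
obj = Objective("Automate regression tests", target=200, actual=150)
print(f"{obj.description}: {obj.achievement():.0%} achieved")  # 75% achieved
```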

However, some management-by-objectives approaches backfire, often in dramatic ways. Let's look at three case studies of bad MBOs, how they negatively affected the team, and how the managers could have done better.

Case Study One: The Whipping Boys and Girls
One company set two yearly performance objectives for the test team. The first, which was common across all teams, was the on-time delivery of the system under development. The second, which was specific to the test manager, was the delivery of a quality system with few bugs.

The test team did a great job of finding lots of bugs during development; they found thousands, in fact. They found so many bugs that the system was not deployed until months after the original ship date. They found so many bugs that the developers could not fix them all; most were deferred under the pretense that they were not "customer-facing" bugs. Most of these deferred bugs were later reported by customers to the customer service team.

When the yearly performance review came around for the test team, they were told they had failed to meet either of their objectives. After all, the system shipped late, and it shipped with lots of bugs that had resulted in lots of customer service calls. The dispirited test manager immediately began looking for another job and resigned within a few months of this incident.

Clearly, the "ship on time" objective was inappropriate. When we testers do our jobs well, we almost always find problems, and those often result in delays. The "quality system" objective was likewise inappropriate. Testers don't have bags of magic pixie dust that can inject quality into systems.

The choice of objectives was really unfortunate because the test team had contributed a lot:

  • The transfer of knowledge and testing best practices to vendors providing key subsystems
  • The creation of an automated regression test tool that included pioneering integration between custom and commercial-off-the-shelf test tools
  • The large percentage of bugs detected by the test team prior to deployment (even though those bugs were deferred by management fiat)

The situation would have turned out much differently if the management team had created measurable objectives based on those contributions.
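
For instance, the third contribution above could have been turned into a measurable objective using a defect detection percentage: the share of all known bugs that the test team found before deployment. The sketch below is illustrative only; the counts and the 85 percent target are assumptions, not figures from the case study.

```python
# Illustrative only: a defect detection percentage (DDP) calculation, one way
# to turn "bugs found before deployment" into a measurable objective.
# The counts and the 85% target are assumptions, not data from the case study.

def defect_detection_percentage(found_by_test: int, found_after_release: int) -> float:
    """Share of all known bugs that the test team found before deployment."""
    total = found_by_test + found_after_release
    return found_by_test / total if total else 0.0


found_by_test = 2300       # bugs reported by the test team during development
found_after_release = 400  # bugs reported by customers after deployment

ddp = defect_detection_percentage(found_by_test, found_after_release)
target = 0.85              # hypothetical target agreed with management

print(f"DDP: {ddp:.1%} (target {target:.0%}) -> {'met' if ddp >= target else 'not met'}")
```

An objective like this rewards the team for something it actually controls: finding bugs before customers do.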

Case Study Two: While You're at It, Please Walk Across the Atlantic Ocean
I recently heard of an organization whose testers were evaluated on the same two objectives every year:

  1. How many bugs were detected by customers after release in a subsystem tested by the tester? The tester's performance evaluation rating goes down as this number goes up.
  2. How many bug reports did the tester file that developers returned as "works as designed," "irreproducible," and so on? The tester's performance evaluation rating goes (yep, you guessed it) down as this number goes up. A hypothetical sketch of these two metrics follows the list.
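
To see the bind these objectives create, here is a hypothetical sketch; the counts and the scoring rule are invented for illustration and are not from the organization described.

```python
# Hypothetical sketch of the two metrics above and how they trade off.
# The weights, counts, and scoring rule are invented for illustration.

def rating(field_bugs: int, rejected_reports: int) -> float:
    """Toy score: starts at 100 and drops as either count rises."""
    return max(0.0, 100.0 - 2.0 * field_bugs - 1.0 * rejected_reports)


# A cautious tester files only rock-solid reports: few rejections, but more
# marginal bugs slip through to customers.
print(rating(field_bugs=30, rejected_reports=5))   # 35.0

# An aggressive tester reports everything suspicious: fewer field bugs, but
# many reports come back "works as designed" or "irreproducible."
print(rating(field_bugs=10, rejected_reports=40))  # 40.0

# Only a near-perfect tester on a trivial subsystem scores well.
print(rating(field_bugs=1, rejected_reports=2))    # 96.0
```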

These two metrics are at least partly under the tester's control. But notice how only a perfect tester (or a tester testing a completely trivial subsystem) could ever hope to get a good performance rating under these objectives. Furthermore, note that any attempt to drive one of the metrics toward zero will tend to drive the other metric up. Also notice

About the author


Rex Black is President and Principal Consultant of RBCS, Inc., a consultancy that provides testing experts worldwide, serving clients such as Bank One, Cisco, Hitachi, IMG, and Schlumberger in consulting, training, and hands-on implementation. He has written Managing the Testing Process and Critical Testing Processes, along with numerous articles, and has presented papers and keynote speeches at international conferences.
