The success of software projects depends to a large extent on the initial effort estimates. Consequently, much work has gone into proposing good estimation procedures, but without very convincing results. This article identifies good estimation practices and clears away some of the cobwebs created by researchers.
The Omnipresent Expert Judgement
What we often do when asked to estimate the effort for a software project is to make up our mind as well as we can, without doing anything special. We need knowledge to do this sensibly, so we call it expert judgement. Whenever we read "expert judgement", we smile quietly, because we know it doesn't mean anything; the term was invented only to satisfy Quality System Auditors. No CMM or ISO 9000 audit was ever passed by saying, "we do nothing special." Some whistle-blowers claim that this isn't a method at all and tauntingly call it the "ask Joe" method. We basically agree with them, but it is difficult to come up with something profoundly better.
Look at Joe
An alternative to outlawing Joe's method is to investigate what he is actually doing. [Joe, you don't need to read on; there is nothing new for you.] We will see that we can even drop a few more method names.
In the Beginning, there was Structure
The first step is easy, because Joe is helped by other software development areas. In software development, whenever we feel we are doing nothing special, we call it "structured": we have structured analysis, structured design, structured programming, and so on. So Joe makes a structured estimate, which we call estimation via a Work Breakdown Structure (WBS). That means we split the whole project into a list of smaller tasks, possibly on several levels. No one can argue on that score: a list of tasks is a simple structure, but it is a structure. For many, making a WBS is not much of a method either, but this time they are wrong. We need to realize that making a WBS is not a necessity: other, more extensively discussed and less extensively used methods (Function Point Analysis, for example) do not use this type of structure.
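The idea of estimation via a WBS can be sketched in a few lines: split the project into smaller tasks, possibly nested, give each leaf task its own figure, and let the totals roll up. The task names and numbers below are invented purely for illustration.

```python
# A minimal sketch of estimation via a Work Breakdown Structure (WBS).
# Nested dicts represent the task hierarchy; leaves hold effort
# estimates in person-days. All names and figures are made up.

def wbs_total(node):
    """Sum effort (person-days) over a nested WBS node."""
    if isinstance(node, (int, float)):   # leaf task: a direct estimate
        return node
    return sum(wbs_total(child) for child in node.values())

project = {
    "analysis": {"interviews": 5, "spec document": 8},
    "implementation": {"database": 12, "GUI": 20, "reports": 10},
    "testing": {"test cases": 6, "execution": 9},
}

print(wbs_total(project))  # prints 70 (person-days, made-up figures)
```

The point of the structure is not the arithmetic, which is trivial, but that each small task is easier to judge than the whole project at once.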
Now Joe has his smaller tasks, but how does he come up with figures? If he has done similar tasks in the past, he simply assumes that it will take approximately that long again this time (estimation by analogy). If he believes a task is somewhat harder or easier than last time, he adjusts his figures accordingly. Perfectly sensible, nothing special. Unless Joe has an unusually good memory, it helps to have records of the similar tasks of the past (historical data). By now, we could call this a justified expert judgement.
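Joe's justified expert judgement boils down to very little code: average the historical figures for similar tasks and scale by how much harder or easier the new task looks. The figures and the adjustment factor below are invented for illustration.

```python
# A sketch of estimation by analogy with historical data: average past
# effort figures for similar tasks and apply a difficulty adjustment.
# All numbers here are made up.

def estimate_by_analogy(history, adjustment=1.0):
    """Average past effort figures (person-days), scaled by a
    difficulty adjustment (e.g. 1.2 = 20% harder than before)."""
    return adjustment * sum(history) / len(history)

past_similar_tasks = [14, 11, 17]   # person-days on similar past tasks
estimate = estimate_by_analogy(past_similar_tasks, adjustment=1.2)
print(round(estimate, 1))           # prints 16.8
```

Without the historical records, the `history` list lives only in Joe's memory, which is exactly why keeping such records pays off.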
The problem with any expert judgement is that the typical project manager doesn't want his career to depend on the instinct of a pony-tailed software engineer. Several proposals (most notably CMM) now say that gut feeling in effort estimation can be eliminated by doing size estimations. They claim this is easier. What often remains open in these proposals is how the size can be estimated. Are gut feelings more useful for size estimations than for effort estimations? You will have to answer that for yourself.
Another problem with size estimates is their transformation into effort estimates, that is, the estimation of productivity. We have to keep in mind that size by itself is unimportant, unless someone is foolish enough to pay you by the lines of code you produce. By introducing size estimates we replace one uncertainty (effort) with two uncertainties (size and productivity). From a practitioner's point of view there is no general rule: sometimes size estimates improve the accuracy of effort estimates, sometimes not.
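The "one uncertainty becomes two" point can be made concrete with a little error propagation. Effort is size divided by productivity, so relative errors in both estimates feed into the effort estimate; for independent errors they combine roughly in quadrature. The figures below are invented for illustration only.

```python
# A sketch of how size and productivity uncertainties both feed into
# an effort estimate (effort = size / productivity). For independent
# relative errors, a standard first-order approximation combines them
# in quadrature. All figures are made up.

from math import sqrt

size = 10_000          # estimated size in lines of code
size_rel_err = 0.20    # assumed +/-20% uncertainty on size
productivity = 25      # lines of code per person-day
prod_rel_err = 0.30    # assumed +/-30% uncertainty on productivity

effort = size / productivity                        # 400.0 person-days
effort_rel_err = sqrt(size_rel_err**2 + prod_rel_err**2)

print(effort, round(effort_rel_err, 2))             # prints 400.0 0.36
```

Note that the combined relative error (about 36% here) is larger than either input error alone, which is the practitioner's worry: the detour via size only helps if size and productivity are individually easier to pin down than effort itself.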
That means, it might help Joe to think about the amount of work to be done. If there are one hundred tests to be executed, it obviously