How 'Joe' Makes Software Estimates

Summary:

The success of software projects depends to a large extent on the initial effort estimates. Consequently, a great deal of work has gone into proposing good estimation procedures, but without very convincing results. This article identifies good estimation practices and clears away some of the cobwebs created by researchers.

The Omnipresent Expert Judgement
What we often do when asked to estimate the effort for a software project is make up our minds as well as we can, without doing anything special. We need knowledge to do this sensibly, so we call it expert judgement. Whenever we read "expert judgement," we smile silently, because we know it doesn't mean anything; the term was invented only to satisfy Quality System Auditors. No CMM or ISO 9000 audit was ever passed by saying, "we do nothing special." Some whistle-blowers claim that this isn't a method at all and tauntingly call it the "ask Joe" method. We basically agree with them, but it is difficult to come up with anything profoundly better.

Look at Joe
An alternative to outlawing Joe's method is to investigate what he is actually doing. [Joe, you don't need to read on; there is nothing new here for you.] We will see that we can even drop a few more method names.

In the Beginning, there was Structure
The first step is easy, as Joe is helped by other software development areas. In software development, whenever we feel we are doing nothing special, we call it "structured": we have structured analysis, structured design, structured programming, and so on. Joe makes a structured estimate, which we call estimation via a Work Breakdown Structure (WBS). That means we split the whole project into a list of smaller tasks, possibly on several levels. No one can argue on that score: a list of tasks is a simple structure, but it is a structure. For many, making a WBS is not much of a method either, but this time they are wrong. We need to realize that making a WBS is not a necessity; other, more extensively discussed and less extensively used methods (Function Point Analysis, for example) do not use this type of structure.
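
To make this concrete, here is a minimal sketch of a WBS-based estimate in Python. All task names and hour figures are invented for illustration; the point is only that the structure is a nested list of tasks whose estimates roll up to a project total.

# A minimal sketch of a Work Breakdown Structure: tasks may contain
# subtasks, and the project estimate is simply the sum of the leaves.
# All names and hour figures are invented for illustration.

def wbs_total(task):
    """Return the estimated hours for a task, rolling up any subtasks."""
    subtasks = task.get("subtasks", [])
    if subtasks:
        return sum(wbs_total(t) for t in subtasks)
    return task["hours"]

project = {
    "name": "Reporting module",
    "subtasks": [
        {"name": "Design", "hours": 40},
        {"name": "Implementation", "subtasks": [
            {"name": "Database layer", "hours": 60},
            {"name": "Report generator", "hours": 80},
        ]},
        {"name": "Testing", "hours": 50},
    ],
}

print(wbs_total(project))  # 230 hours for the whole project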

Déjà Vu
Now Joe has his smaller tasks, but how does he come up with figures? If he has done similar tasks in the past, he simply assumes that they will take approximately as long this time (estimation by analogy). If he believes a task is somewhat harder or easier than last time, he adjusts his figures accordingly. Perfectly sensible, nothing special. Unless Joe has an unusually good memory, it helps to have records of the similar tasks of the past (historical data). By now, we could call this justified expert judgement.
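
As a sketch of what this justified expert judgement amounts to, assume Joe keeps a simple table of past tasks and their actual hours. The records and the adjustment factor below are invented for illustration.

# A sketch of estimation by analogy: look up a similar past task in
# the historical records and scale its actual effort by a judgement
# factor. The records and the 1.2 factor are invented for illustration.

history = {
    "import filter": 35,      # actual hours last time
    "report generator": 80,
    "admin screen": 25,
}

def estimate_by_analogy(similar_task, adjustment=1.0):
    """Past actuals, times a 'somewhat harder or easier' adjustment."""
    return history[similar_task] * adjustment

# "Like the report generator, but somewhat harder" -> about 96 hours.
print(estimate_by_analogy("report generator", adjustment=1.2))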

Sizeable Salvation?
The problem with any expert judgement is that the typical project manager doesn't want his career to depend on the instinct of a pony-tailed software engineer. Several proposals (most notably the CMM) now say that gut feeling in effort estimation can be eliminated by estimating size first, which they claim is easier. What often remains open in these proposals is how the size itself is to be estimated. Are gut feelings more useful for size estimates than for effort estimates? You will have to answer that for yourself.

Another problem with size estimates is their transformation into effort estimates, that is, the estimation of productivity. We have to keep in mind that size by itself is unimportant, unless someone is foolish enough to pay you by the lines of code you produce. By introducing size estimates, we replace one uncertainty (effort) with two uncertainties (size and productivity). From a practitioner's point of view there is no general rule: sometimes size estimates improve the accuracy of effort estimates, sometimes not.
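
The point about the two uncertainties can be made concrete with a little simulation. In this sketch, both distributions are invented: size and productivity are each uncertain, and the spread of their product shows why a size estimate does not automatically make the effort estimate tighter.

# A sketch of how two uncertainties combine: effort = size * hours
# per unit. Both distributions below are invented for illustration.
import random

random.seed(1)
samples = sorted(
    random.gauss(100, 15) * random.gauss(2.0, 0.4)  # size * productivity
    for _ in range(10_000)
)

low, mid, high = samples[500], samples[5_000], samples[9_500]
print(f"90% of simulated efforts fall between {low:.0f} and {high:.0f} "
      f"hours (median {mid:.0f})")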

Still, it might help Joe to think about the amount of work to be done. If there are one hundred tests to be executed, it obviously takes more time than ten tests. That helps if he can establish figures for the size more reliably than for the effort. Additionally, he needs to know how long it takes to accomplish one unit, e.g., one test. Again, historical data helps. A simple form of size estimation is classification. Joe would think, "There are ten reports to write: five are easy, four are average, and one is really difficult." That is certainly a better basis for the effort estimate than just "ten reports."
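
Joe's ten reports are easy to write down. Assuming per-class hours taken from historical data (the figures below are invented), the arithmetic is simply counts times unit effort.

# A sketch of size estimation by classification: count the items per
# class and multiply by historical hours per class. Figures invented.

hours_per_class = {"easy": 4, "average": 8, "difficult": 20}  # from history
reports = {"easy": 5, "average": 4, "difficult": 1}

effort = sum(count * hours_per_class[cls] for cls, count in reports.items())
print(effort)  # 5*4 + 4*8 + 1*20 = 72 hours, rather than just "ten reports"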

Classification would be simple were it not for the challenge of defining meaningful classes. What is easy for Joe might be difficult for Sam. For the best of our engineers, we would need only two classes, "easy" and "boring"; for others, we would need only "too difficult." Is there any way to define, once and for all, the difference between large and medium without relying on attributes that you are not supposed to know yet? It's easy to say "everything more than ten is large," but if you already know the size, there is no need for classification. If you don't succeed in this, strange things will happen. You will hear Joe asking his colleagues, "How do we classify a five-month task: is this large or medium?" So it lands in the "medium pool" together with the three-month tasks, only for Joe to find out a few steps later that it is likely to take approximately four months. What an achievement.

Verified, Validated, Confirmed and Reconfirmed
After estimating the elementary tasks, an experienced Joe would finally run a cross-check: he would add up the figures for the various types of tasks to see if they are reasonably in proportion. He might say, "Well, that now makes 800 hours of design work and twelve hours of testing. Perhaps I want to rethink that." Again, finding reasonable ratios is easier when you use analogies from previous work, supported by historical data if available. Now Joe has, by this flexible combination of little pieces of common sense, a figure.
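
The cross-check itself is a few lines: sum the estimates per type of task and compare the ratios against what past projects showed. The totals echo the example above; the historical range is invented for illustration.

# A sketch of Joe's cross-check: compare the testing-to-design ratio
# against a plausible range from past projects (the range is invented).

totals = {"design": 800, "implementation": 1200, "testing": 12}

ratio = totals["testing"] / totals["design"]
low, high = 0.3, 0.8  # testing was 30-80% of design effort in the past

if not low <= ratio <= high:
    print(f"Testing is {ratio:.0%} of design effort. Perhaps rethink that.")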

Is that all we can do? The hard work is done, but we can still try to make the most of it. We can review the figures by asking Sam to have a look at them. We can also extend the review idea by holding a Wideband Delphi (WD) session: we ask Joe, Sam, and Sue to make estimates independently of each other; then they compare and discuss the results and come up with a commonly agreed figure.
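
The mechanical part of a WD round can also be sketched, although the discussion is the part that matters. In this sketch, the names and figures are invented: the estimators iterate until their figures are close enough to agree on.

# A sketch of the mechanical part of a Wideband Delphi round: collect
# independent estimates and check whether they are close enough to
# agree on. Names, figures, and the 0.25 threshold are invented.

def relative_spread(estimates):
    """Spread of the independent estimates, relative to their mean."""
    values = list(estimates.values())
    return (max(values) - min(values)) / (sum(values) / len(values))

round_1 = {"Joe": 300, "Sam": 520, "Sue": 340}

if relative_spread(round_1) > 0.25:
    print("Too far apart: discuss the assumptions and estimate again.")
else:
    agreed = sum(round_1.values()) / len(round_1)
    print(f"Agreed figure: about {agreed:.0f} hours")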

Doing WD is a really good idea if Joe is not convinced of the figures himself. It is also a good idea if Joe's boss is not convinced of Joe's figures. Especially for this second reason, WD is so common in estimation that many people confuse it with the estimation method itself: "we do estimations by Wideband Delphi." If WD were an estimation method, then anyone who understands WD could make software estimates.

Now for Something Completely Different
Having well-justified data is the end of the technical estimation process. What happens with the data afterwards is an entirely different story, on the managerial level. Joe thinks that his data is being misused to such an extent that he promises himself never again to come up with "correct" data. So in the future he reads Dilbert cartoons when he is supposed to be making estimates.

Anything Else?
There are alternatives to asking Joe, and they fall into two categories. The first consists of formal techniques: Function Point Analysis, model-based methods such as COCOMO, and others. They are sometimes successful, but often not. The second category is just cynical statements, made whenever someone gets bored with expert judgement yet again, or when Joe wants to make a point about the second, non-technical estimation phase. Examples are the "price-to-win technique" and "top-down estimation." A Google search on "software estimation techniques" will give you a good laugh, if you like cynicism.

Another alternative is not to make estimates at all and to rely entirely on the above-mentioned non-technical estimation phase. A few large companies that no longer exist came close to perfecting that method. However, there is little risk of this method getting lost, as it keeps being re-invented all the time.
