Tuesday, November 3, 2009

Complex Up Front Estimation Tools Drive Me Crazy

Way too often I see both colleagues and clients spending far too much effort on, and putting far too much stock in, building the uber estimation toolkit.

I'm sure many of you have seen this type of tool: the spreadsheet that uses function points, component complexity factors, non-functional adjustments, and some really clever math to produce a really impressive, but largely fictional and arbitrary, guess at what the actual effort of a software implementation will be.
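
To make that concrete, here is a minimal sketch (in Python, with component counts, weights, factors, and rates that are entirely made up for illustration, not taken from any particular method) of the kind of arithmetic these spreadsheets usually encode: count some things, multiply by complexity weights, layer on adjustment factors, and out pops a very precise-looking number of hours.

```python
# A minimal sketch of the "uber estimation spreadsheet" arithmetic.
# Every number below is made up for illustration.

component_counts = {"screens": 24, "services": 11, "reports": 7, "interfaces": 5}
complexity_weights = {"screens": 6, "services": 10, "reports": 5, "interfaces": 8}

raw_points = sum(component_counts[c] * complexity_weights[c] for c in component_counts)

# "Non-functional adjustments" -- multipliers for performance, security, etc.
adjustment_factors = {"performance": 1.10, "security": 1.15, "distribution": 1.05}
adjusted_points = raw_points
for factor in adjustment_factors.values():
    adjusted_points *= factor

hours_per_point = 2.5          # assumed team "productivity rate"
estimated_hours = adjusted_points * hours_per_point

print(f"Raw points: {raw_points}")
print(f"Adjusted points: {adjusted_points:.1f}")
print(f"Estimated effort: {estimated_hours:.0f} hours")  # precise-looking, still a guess
```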

You know you are using one of these tools when:
- you feel like you are doing the requirements modeling with a spreadsheet
- you are doing architecture and design in your head so that you can estimate the exact detailed code components you think you need to build
- you are estimating at the hourly level for tasks that won't be completed until more than 6 months from now

I know that many of us have been the author and user of some of these tools. (I know I have built some wicked estimation spreadsheets.) But the problem with these tools is that they hide the fact that software development is an exercise in discovery and adaptation. Planning is essential, but creating uber detailed estimates and uber detailed plans for software development is a lot like creating an uber detailed plan for a vacation. Setting in stone that you will go to the beach on a particular afternoon gets easily waylaid if a storm rolls in; packing swimwear because you plan to hit the beach at some point just makes sense.

The same type of reasoning goes into software planning and estimation. Deciding that business requirement A is roughly 50% harder to build than requirement B is a whole lot more sensible than trying to calculate the work involved for both requirements down to the exact hour. The high-level estimate is likely to be just as accurate as the one produced by the complex model, and it is far easier to change if it turns out to be wrong.
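
As a rough illustration of that relative style of sizing (made-up numbers, and certainly not the only way to do it), you judge each requirement against a reference item and only convert to calendar time once you have seen real throughput:

```python
# A rough sketch of relative sizing: requirement B is the reference,
# requirement A is judged to be about 50% harder. Numbers are made up.

reference_points = 4          # requirement B, the item we understand best
relative_sizes = {
    "Requirement B": 1.0,     # the baseline
    "Requirement A": 1.5,     # "about 50% harder than B"
    "Requirement C": 0.5,     # "about half of B"
}

estimates = {name: reference_points * ratio for name, ratio in relative_sizes.items()}

# Convert to calendar time only after observing real throughput,
# e.g. the team actually finished ~6 points last week.
observed_points_per_week = 6
total_points = sum(estimates.values())
weeks = total_points / observed_points_per_week

for name, points in estimates.items():
    print(f"{name}: {points:g} points")
print(f"Total: {total_points:g} points, roughly {weeks:.1f} weeks at current pace")

# If requirement A turns out to be harder than expected, you change one
# ratio -- not hundreds of task-level hour estimates.
```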

Here are some pitfalls I see with using complex estimation toolkits:

1) Requirements, even when well written, always end up being notoriously inaccurate.

Blame it on users always changing their minds, subpar analysts, the immaturity of the field, etc. But what you based your estimates on will often end up being dramatically different from what you end up building.

2) Technology effort is based on platforms that are changing dramatically, and the diversity is overwhelming.

Don't think you can take your fancy web estimation toolkit based on J2EE version XXX, apply a few tweaks, and then use it for your upcoming Ruby on Rails project, especially if you don't have significant Rails experience. Likewise, your innovative portal/SOA estimator most likely won't cut it when you try to size an RIA/REST solution, especially if you are trying to estimate at a very detailed coding-component level.

3) Development is not even close to a linear exercise.

The first few times a team builds a component of type X, it will take a lot longer and be a lot riskier than after the tenth. Over time the team should be building the solution in such a way that the tenth component of type X is far easier to implement, especially if they are working in an intelligent fashion.
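
One crude way to picture that non-linearity (purely illustrative numbers, not a calibrated model) is a learning-curve calculation, where each doubling of repetitions shaves a fixed percentage off the per-component effort:

```python
import math

# A crude learning-curve sketch: every doubling of the number of components
# built cuts per-component effort by 20%. The 40-hour starting effort and
# the 20% rate are made up for illustration.

first_component_hours = 40.0
learning_rate = 0.80          # effort multiplier per doubling of experience

def effort_for_nth(n):
    """Effort for the n-th component of the same type."""
    return first_component_hours * n ** math.log2(learning_rate)

for n in (1, 2, 5, 10):
    print(f"Component #{n}: {effort_for_nth(n):.1f} hours")

# An up-front estimate that multiplies the first component's cost by the
# component count misses this entirely -- and so does one that assumes the
# team is already at component #10 on day one.
```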

4) A heavy dose of guesstimation fudge is applied to even the most rigorous of models.

Every complex estimation toolkit has a way to parameterize the output with a heavy dose of fudge. Frequently the estimator will look at the estimate produced by the wonderful tool, sniff, and apply a 40%+ adjustment to get it to feel right.
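
A toy calculation (again with made-up numbers) shows how thoroughly that fudge swamps the line-item precision underneath it:

```python
# Made-up numbers showing how the fudge factor dwarfs line-item precision.

detailed_task_hours = [6.5, 12.0, 3.25, 9.75, 18.5, 7.0, 4.25, 11.5]  # quarter-hour "precision"
base_estimate = sum(detailed_task_hours)

fudge = 1.40                               # the "make it feel right" adjustment
adjusted_estimate = base_estimate * fudge

print(f"Sum of detailed tasks: {base_estimate:.2f} hours")
print(f"After a 40% fudge:     {adjusted_estimate:.2f} hours")
print(f"Fudge added:           {adjusted_estimate - base_estimate:.2f} hours")

# The adjustment alone (~29 hours here) is bigger than most of the individual
# tasks it was layered on top of, so the quarter-hour precision is theater.
```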

5) These tools are incapable of incorporating the biggest inputs to software scheduling and effort variance: human factors.

Below is a tag cloud that shows some of the biggest factors to consider when estimating software delivery.

[tag cloud image]

While I'm happy to debate the exact priorities of these factors, I'm pretty adamant on a couple of key points.

- Human factors trump development factors: Now I'm not saying that requirements complexity and target platform are not important, they are, but you can take two identical requirements on an identical platform, give them to two different teams, and see an exponential difference in output.

The same goes for the amount of organizational churn you have to go through to get something delivered. I have observed (and been guilty of being part of) teams that failed to properly account for bureaucracy, leading to huge delivery variances from the original estimates. In simple terms, you won't know how long it takes a particular organization to digest and deploy new software until you have hands-on experience with that organization.


Business ownership is another huge one. More and more developers are realizing that software delivery is primarily an exercise in communication, and not having accountable business owners properly at the table will have a massive impact on estimates.

Another way of interpreting the above cloud is as follows:

Consistent tools don't lead to consistent results; consistent experience does.

Good estimates come from experts who have done the work before, have used the technology in question, understand the business, understand the organizational context and know the capability of their team members. In my experience this is a tall order, and as a result I have not heard of very many good estimates.

Good (and more likely) estimates are done frequently, by a cross-functional team, by the people who will actually be doing the work, and at a level of detail that matches how soon the work will be done. (But that is a topic for another post.)

Recently I was part of one of four groups asked to estimate a web site. One approach used function points, one used component complexity, one used a page-complexity tool, and the last used planning poker. The estimators were all experienced web developers.

They all came within 20% of each other. So thanks, but I'm sticking with planning poker.
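
For anyone who hasn't seen it, here is a bare-bones sketch of a planning poker round; the team members and votes are hypothetical, but the mechanics are the usual ones: everyone reveals a card at once, the outliers explain their reasoning, and the group re-votes until the spread is small enough.

```python
# A bare-bones sketch of one planning poker round. Team members and votes
# are hypothetical; the deck is the usual modified Fibonacci sequence.

CARDS = [1, 2, 3, 5, 8, 13, 20, 40]

def poker_round(votes):
    """Return the (low, high) spread so the outlier voters can explain their reasoning."""
    assert all(v in CARDS for v in votes.values()), "votes must come from the deck"
    return min(votes.values()), max(votes.values())

round_1 = {"Ana": 5, "Ben": 13, "Chen": 8, "Dee": 5}
low, high = poker_round(round_1)
print(f"Round 1 spread: {low}..{high} -> low and high voters discuss")

# After Ben explains the integration risk he saw and Ana explains the
# existing component she would reuse, the team votes again and converges.
round_2 = {"Ana": 8, "Ben": 8, "Chen": 8, "Dee": 5}
low, high = poker_round(round_2)
print(f"Round 2 spread: {low}..{high} -> close enough; record 8 points")
```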

Do estimation tools have any use?

Absolutely, yes.

They can provide structure to the thinking of an experienced craftsman.

They can help articulate the problem space and highlight any unknowns.

But a little goes a long way, and no estimate should be confused with fact, no matter how detailed it is.


3 comments:

  1. I am more comfortable with estimation tools than you seem to be, but I do agree very much with your last two points about their use. Perhaps that is why I have less concern about such tools: those two reasons are the ones I found most important when I was doing estimation work regularly back in the early 90's.

    We used multiple tools to get to an estimate where the tools agreed "within 20%," explaining the inputs and outputs to our (internal and external) clients to get their concurrence on the results. Indeed, just using input data questionnaires with development management and staff without the tools revealed the two benefits you mention.

    We also then offered a range, rather than point, estimate and strongly encouraged re-estimation along the way.

  2. There is one thing where planning poker trumps estimation tools. Typically, estimation tools are done in isolation. An "expert" sits alone, and hammers out the estimates. However, planning poker is a collaborative technique. Multiple "experts" in different areas (cross-functional) sit together and estimate as a team. Views are shared, issues are raised and discussed. I find this exercise is more rigorous than traditional estimation tools and tends to get at better estimates.

  3. Scott, those are great examples of lighter-weight methods that achieve a "good enough" estimate without the overhead and false sense of confidence that the complex estimation tools Jeff refers to create. I guess the point here is that complex, detailed, onerous up-front estimation tools give you, at best, an estimate that is AS useful, but with a lot more wasted effort.
