Written by Michał Zgrzywa

The best estimates we have

It was a late winter afternoon. Most people had already left the office or were about to do so. I was heading to my lab after a tasty late lunch in our cafeteria. (Ok, I should call it just the late lunch.) I was hoping to get an hour or two of quiet, solitary work to finish one of my supreme visualizations. To my surprise, when I entered the lab I noticed someone sitting at my desk.

It was a man, around 50 years old. One could easily feel that he was a person of extraordinary intelligence and wisdom. And he reminded me of someone. After a moment of thought I was almost sure, so I said:

Me: You must be me, but from the future.

Stranger: Yes, you are right.

Me: Wow, I have so many questions…

Stranger: Be quiet and listen. I do not have much time here… or should I say – now. I came to warn you about a serious mistake that your company is about to make. In 2014 you started an important project. It was called “Project Wombat”.

Me: Yes, I know this project. It started about half a year ago. We are now about to provide estimates for the remaining work, before signing the long-term contract.

Stranger: This is why I came. The estimates you will provide will be terribly wrong. This will end the cooperation with the customer. What’s more, the entire company will feel hugely embarrassed. Many good people will leave and others will lose their spirit. In grief, many people will start to dress formally, stop making jokes and the company atmosphere will die. It will take many years until the company regains its strength again.

Me: But we have a good way of estimating the remaining work in projects. We calculate the average number of story points delivered during the already finished iterations. Then we check how many story points there are still to deliver. We divide the two numbers and we know how many iterations we still need to finish.

Stranger: What? Are you a moron? You calculate an average and estimate one number? A man and a dog have 3 legs on average! One number… So tell me, what is the chance that you will hit exactly this estimate? What is the chance of hitting +10%? +20%? +50%? +200%? Is it probable? Is it impossible? By presenting one number you can do more harm to your customer than by saying nothing. They might believe you and assume that the estimate is accurate.

Me: Well… When we need a range, we sometimes take two numbers: the number of story points delivered in the worst iteration and the number for the best iteration. We calculate two estimates of finishing: optimistic and pessimistic.

Stranger: Are you joking? And you call yourself a data scientist! So you take two completely unusual iterations and calculate two estimates that are almost impossible to be correct. You are modelling two situations in which either everything goes ideally or everything goes terribly. It is not very useful, I must say. Maybe stop bothering with calculations and just say “we will finish between 0 and 1000 iterations”? You will give your customer similar value.

(I was starting to hate that guy. Probably sometime in the future something bad happened to him and he got bitter. I promised myself to watch out and not become like that.)

Me: So maybe you could just tell me what the correct estimates are?

Stranger: I can’t. It has something to do with preserving the time continuum or something. I cannot give you any precise information from the future. But I can motivate you to better use what you already have.

Me: Pity, I was hoping you would tell me which company I should invest in.

Stranger: What do you mean? Ah, you are talking about the stock market. A few years ago it was finally classified as gambling and forbidden in Europe and most US states. You can still invest in a little stock market in Las Vegas, though… Oh, it seems I need to go. Listen, I have one very important piece of personal information for you. Remember, under no circumstances can you…

…and he disappeared.

After the initial shock, I realised that he was right. We should be able to produce better estimates than we had been providing so far. To do so, we would have to get to a lower level of detail and think about single user stories, not whole iterations. And to think in terms of probability distributions, not single numbers. I started to broadly understand what needed to be done:

  1. We need to gather historical data about the team’s estimates and assess if they are good enough for drawing conclusions.
  2. We need to gather historical data about the amount of work the team was doing per iteration and check how variable it was.
  3. Finally, we need to use the historical data to estimate the effort needed to implement the remaining scope. And we need to visualise this properly.

Historical estimates

I was very lucky this time, as the person running Project Wombat had been gathering detailed project data from its beginning. Not only had he kept the initial estimates of all the user stories, he had also organised the project timesheet structure in such a way that it was possible to read how many hours of effort were spent on every single finished user story. The first data set we created was as follows:

Iteration | User story   | Estimate in SP | Effort spent in hours
Sprint 1  | User story 1 | 5              | 42
Sprint 1  | User story 2 | 3              | 30
Sprint 2  | User story 3 | 5              | 25
Sprint 2  | User story 4 | 1              | 6

As you can see, for every finished user story we gathered both the estimate in Story Points and the effort spent on implementing it (in hours, including bug fixing time). We also know in which sprint the story was done. This is very useful, as teams usually work much slower during the first few iterations (people learn the new domain, how to cooperate with new team members, etc.). As this initial lower efficiency should not recur in the following iterations of the project (unless the team composition changes significantly), the data from the initial iterations should be excluded from the data set. This was also the case in our project, so we excluded the user stories from the first three sprints.
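For those who prefer to do this step in code rather than in a spreadsheet, here is a minimal sketch in Python. The file name and column names are hypothetical and simply mirror the table above; the point is the exclusion of the warm-up sprints.

```python
import pandas as pd

# Hypothetical file and column names mirroring the table above.
df = pd.read_csv("wombat_stories.csv")  # Iteration, UserStory, EstimateSP, EffortHours

# Exclude the warm-up sprints (here the first three), as their lower
# efficiency is not expected to repeat in the remaining iterations.
warmup = ["Sprint 1", "Sprint 2", "Sprint 3"]
history = df[~df["Iteration"].isin(warmup)]
```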

The next step was to assess whether our historical data could be used to draw sensible conclusions about the future. To do this, we created a visualisation of the data:

[Visualisation 1: distribution of effort per story size – good data]

On the graph you can see seven boxplots, one for each possible size of user story (the vertical axis shows 1, 2, 3, 5, 8, 13 and 20 Story Points). On the horizontal axis we have worked hours. Each boxplot describes the distribution of the worked hours needed to finish stories of its size. E.g. the boxplot for 3 SP shows that among all the finished stories of this size, the minimum effort needed was 5 worked hours, the maximum was 30 hours, the median was 12 hours, and the 1st and 3rd quartiles were 9 and 18 hours.
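(If you want to reproduce such a chart yourself, a minimal sketch with matplotlib could look as follows. It continues from the hypothetical history data frame and column names in the sketch above.)

```python
import matplotlib.pyplot as plt

# One horizontal boxplot of effort hours for each story-point size.
sizes = sorted(history["EstimateSP"].unique())
samples = [history.loc[history["EstimateSP"] == s, "EffortHours"] for s in sizes]

plt.boxplot(samples, vert=False, labels=[str(s) for s in sizes])
plt.xlabel("Worked hours")
plt.ylabel("Story points")
plt.show()
```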

The first insight is obvious. Story points do not translate to a single number of hours. In fact, the spread of the distribution is significant. Usually, the maximum value for a story size is much bigger than the minimum value for the next story size. This is a bit surprising. However, despite the wide spreads, the distributions above suggest that the historical data are good enough to be used for future estimations. To show what would not be good enough, have a look at one more visualisation, created for a different project:

[Visualisation 2: distribution of effort per story size – bad data]

In this case the team did not have a sufficient understanding of the user stories when the estimates were created. The order of the distributions is wrong: most of the “fives” are smaller than most of the “threes”. The “eights” have an enormous spread. The “ones” and “twos” are very similar. You could not base any sensible prognosis on such data. The team should throw away all the high-level estimates of the remaining backlog and once again discuss and assign sizes to the stories. The proper data about the effort per size would be available after a few sprints at the earliest.

Fortunately, the data in our project were good enough. (The width of the spreads could have at least two causes: the team could have had too little information about the scope, or they were not yet good at estimating in this project. Additional analysis could help distinguish between the two, but that is a separate story.) The next step was to look at the other side of the coin – the team’s capacity.

Historical capacity consumption

To the huge surprise of every fresh project manager (including me at one time), software project teams do not only write code (!). They spend time on meetings, communication, estimates, deployments, spikes, playing table soccer and much more. To understand how much time is really spent on user story work, you must gather proper data (believe me, you will be surprised when you do).

Once again we were lucky. The Project Wombat’s PM had been gathering such information every sprint. The visualisation below shows the split of the sprints’ capacity across different categories:

[Visualisation 3: split of sprint capacity across work categories]

After the three initial iterations, the share of hours spent on user stories (Feature Work) became quite stable. The team was spending about 70% of its capacity on user story work (including fixing development bugs). This number became the base for calculating the remaining project time. As we want to calculate how many sprints we need to finish the project, we have to know how much capacity we will have in each coming sprint and what part of it we will spend on user story development. When preparing the list of future sprint capacities, together with the PM we took into consideration all planned holidays, long weekends, trainings, company events, etc. and applied the 70% to the expected available hours. Finally, we got the following table:

Sprint | Expected hours of feature work
N + 1  | 550
N + 2  | 440
N + 3  | 560
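As a simple illustration of that last step: the gross capacities below are made-up numbers chosen only to show the arithmetic, while the 70% feature-work share comes from the capacity data above.

```python
# Illustration only: gross available hours per sprint are hypothetical,
# the 70% feature-work share comes from the historical capacity split.
FEATURE_SHARE = 0.70
gross_hours = {"N+1": 786, "N+2": 629, "N+3": 800}  # after holidays, trainings, etc.
feature_hours = {sprint: round(h * FEATURE_SHARE) for sprint, h in gross_hours.items()}
# -> {"N+1": 550, "N+2": 440, "N+3": 560}, as in the table above
```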

Simulation

Having the historical data prepared, we were ready to run the simulation. The remaining backlog was well understood, all the user stories were quite well broken down and had their sizes assigned (in Story Points). We gathered the following information about the remaining backlog:

Size (SP) | How many stories left
2         | 12
3         | 16
5         | 11

For the simulation we used a simple Monte Carlo method. We calculated 10 000 simulation cases using the following algorithm:

  1. For each story in the remaining backlog, randomly choose a historical effort (in hours) from the historical data for the proper story size bucket (e.g. for a 2 SP story we randomly choose an effort from the bucket of all finished 2 SP stories, for a 3 SP story – from the bucket of all finished 3 SP stories, etc.)
  2. Calculate the sum of the efforts chosen for all the remaining stories.
  3. Check how many sprints are needed to cover the sum of efforts with the expected feature work hours.
  4. Note the result.

(Such a simulation can be quickly implemented in any statistical package. What’s more, it can also be easily prepared in an Excel spreadsheet or any other calculation tool of your choice.)
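To make the algorithm concrete, here is a minimal sketch of such a simulation in Python. The effort buckets and hour values below are placeholders, not the project’s real data; in practice each bucket is simply the list of worked hours of all finished stories of a given size.

```python
import random

N_CASES = 10_000

# Placeholder historical buckets: worked hours of finished stories per SP size.
effort_buckets = {
    2: [6, 9, 12, 14, 18],
    3: [5, 9, 12, 18, 30],
    5: [20, 25, 34, 42, 55],
}

# Remaining backlog: how many stories of each size are left (see the table above).
remaining = {2: 12, 3: 16, 5: 11}

# Expected feature-work hours per coming sprint; assume the last planned value
# repeats if a simulated case runs past the planned sprints.
sprint_hours = [550, 440, 560]

def sprints_needed(total_effort):
    """How many sprints are needed to cover the effort with feature-work hours."""
    covered, sprint = 0.0, 0
    while covered < total_effort:
        covered += sprint_hours[min(sprint, len(sprint_hours) - 1)]
        sprint += 1
    return sprint

results = []
for _ in range(N_CASES):
    # Steps 1-2: draw a historical effort for every remaining story and sum them.
    total = sum(
        random.choice(effort_buckets[size])
        for size, count in remaining.items()
        for _ in range(count)
    )
    # Steps 3-4: note how many sprints cover that effort.
    results.append(sprints_needed(total))
```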

In the end, we had a list of expected project lengths for 10 000 simulated cases, all based on the project’s historical data. The following visualisation shows the cumulative distribution function of the results:

[Visualisation 4: cumulative distribution of the simulated project lengths]

On the vertical axis you can see the percentage of all the simulation cases. On the horizontal axis you can see how many sprints are needed to finish the remaining backlog. The visualisation should be read as follows: “If the vertical axis shows 33% and the horizontal axis shows 22 sprints, this means that in 33% of the simulated cases the effort needed to finish the backlog was 22 sprints or less.”

As you can see, the chart allows us to make very quantifiable statements about the estimate. Not only can we tell that the median estimate is 23 sprints (which means that in 50% of cases the project will take less and in 50% of cases it will take more), we can also tell a lot about what is probable and what is not. We can say that there is a very small chance (less than 10%) of finishing the project in fewer than 20 sprints. On the other hand, we can say that there is a very high chance (more than 90%) of finishing the project in fewer than 26 sprints. This allows us to draw informed conclusions about the budget that must be secured for the project and about the contingencies that should be made.
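(Continuing the simulation sketch above, the same kind of statements can be read directly from the simulated results without plotting, e.g. with the standard library.)

```python
import statistics

# Deciles of the simulated project lengths (continuing from `results` above).
deciles = statistics.quantiles(results, n=10)
p10, p50, p90 = deciles[0], deciles[4], deciles[8]

print(f"10% of the cases finished within {p10:.0f} sprints")
print(f"half of the cases finished within {p50:.0f} sprints")
print(f"90% of the cases finished within {p90:.0f} sprints")
```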

What if

Once again, the world was saved. The conversation with the customer went marvellously. The estimates were understood and accepted. The team rushed into implementation.

However, I could not resist the feeling that we had been lucky to have all that data. What if we hadn’t gathered it from the beginning? Could we make better estimates without it? Unfortunately, the answer will have to wait for another post.


If you enjoyed this, keep an eye on my blog ExtraordinaryMeasures.net for more stuff like this.

Comments

  1. Paweł Słowikowski

    Very interesting post. You were, indeed, lucky to have the well understood, estimated scope of the future project that could be correlated with the excellent range of the historical data available.
    I especially like the way you provide the probability distribution of the number of sprints needed based on the Monte Carlo method. This is much more advanced (reliable?) than the typical PERT calculation in my opinion.
    A question to that method: “For each story in the remaining backlog, randomly choose a historical effort (in hours) ” -> randomly according to what kind of algorithm or probability distribution?

    I am looking forward to the next post -> if you could shed some light on your ideas how to help the clients predict the sprints needed to complete certain scope when the project/technology/team is quite new.

    1. Michał Zgrzywa Post author

      Thank you for your comment!
      About choosing randomly: the main advantage of the Monte Carlo method is that you do not have to know or assume the distribution of the data. By choosing samples from the set completely randomly (each historical item has the same probability of being chosen), you are simulating exactly the distribution that the historical set has. The items that are more common in nature will be repeated more times in the historical set, and will be chosen more often by the method. Of course, the more items in the historical set the better 🙂
      Estimating for a new team is a domain of professional fortune tellers and Technical Architects :). Of course there are methods (like Story Point size calibration) that we are using. However, what I want to describe in the next post is the way to estimate the remaining backlog in a project with some history but without much data gathered. Hope this will be also interesting.


