Version Metric based on Story Points
Hi,
I might have a stupid question, but consider the following classic situation:
There is a User Story with X Story Points, and it is not closed at the end of the Sprint because
- it was underestimated,
- or the team had no time to work on it due to other tasks, so not all of the work is done.
During Sprint Planning for the next Sprint, the team decides to take the rest of the User Story into the Sprint. But what do you do?
- Lower the Story Points to reflect the remaining work
- Leave the Story Points to reflect the original estimate
- Raise the Story Points to reflect it was underestimated
If you perform (1), then a metric showing the effort and value in a given version (Story Points added to a version) will fall, even though we did not remove any work or value from the version.
If you perform (2), the next Sprint will have a story with X points but only a fraction of the actual work left, so the Sprint velocity / closing rate will rise even though the team did not change.
If you perform (3), you have the same issue as in (2), and since you most likely have to remove other work of the same effort, the version metric will stay flat even though you removed a feature.
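To make the distortion concrete, here is a minimal sketch with hypothetical numbers (an 8-point story carrying over with roughly 3 points of work left, in a version assumed to total 40 points), showing how options (1) and (2) pull the metrics in different directions:

```python
# Hypothetical carried-over story: originally 8 points, ~3 points of work remain.
ORIGINAL_ESTIMATE = 8
REMAINING_WORK = 3

# Option (1): lower the estimate to the remaining work.
# The version's point total drops, although no scope was actually removed.
version_total_before = 40  # assumed total points planned for the version
version_total_after_opt1 = version_total_before - (ORIGINAL_ESTIMATE - REMAINING_WORK)

# Option (2): keep the original estimate.
# The next Sprint "completes" 8 points for ~3 points of actual effort,
# so velocity is inflated by the difference.
velocity_inflation_opt2 = ORIGINAL_ESTIMATE - REMAINING_WORK

print(version_total_after_opt1)  # 35 (version metric falls by 5)
print(velocity_inflation_opt2)   # 5  (velocity overstates effort by 5)
```

Either way, one of the two numbers stops telling the truth; the example only quantifies the trade-off described above.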
Am I missing something obvious?
Which of these options would best help the team understand the work and effectively plan a Sprint by crafting an achievable Sprint Goal and ensuring that the work needed to achieve that goal can fit within a Sprint? The only reason to estimate is to help plan your iterations and forecast the work, so any method that helps the team to plan their Sprint effectively and efficiently is the best choice. I've seen all these work well for different teams, so there's no one-size-fits-all approach.
I would point out that there are really only two approaches. Your options 1 (lowering the estimate to reflect the remaining work) and 3 (raising the estimate to reflect the underestimation) are the same thing: both re-estimate. Because the work was started, some effort has already been put into it. That effort may offset any underestimation and also gives a better understanding of the necessary work. The options are to either re-estimate the work or not re-estimate it, where a re-estimate could result in an increased estimate, a decreased estimate, or no change because the original is still a reasonable representation of the remaining work.
I'd suggest an alternative, though. Stop estimating. You've already spent a lot of time thinking and talking about the process for estimating work. Two of the proposals will require spending even more time estimating or reestimating work. All of this is a waste of time and effort. The Scrum framework only has one requirement for a Product Backlog Item to be ready for selection at a Sprint Planning event: the team believes the work can be done within one Sprint. Some teams make this smaller, focusing on Product Backlog Items that can be done within a few days from starting. Anything more detailed than this is wasteful. The use of flow metrics, primarily cycle time and throughput, can help with forecasting for Sprint Planning as well as longer-term views of the Product Backlog.
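As a sketch of the flow metrics mentioned above, cycle time and throughput can be computed directly from start/finish dates, with no estimates involved. The item dates here are invented for illustration, not from any real board:

```python
from datetime import date

# Hypothetical completed items: (started, finished)
items = [
    (date(2024, 1, 2), date(2024, 1, 5)),
    (date(2024, 1, 3), date(2024, 1, 9)),
    (date(2024, 1, 8), date(2024, 1, 10)),
    (date(2024, 1, 9), date(2024, 1, 12)),
]

# Cycle time: calendar days from start to finish of each item.
cycle_times = [(done - started).days for started, done in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished per week over the observation window.
window_days = (max(d for _, d in items) - min(s for s, _ in items)).days
throughput_per_week = len(items) / (window_days / 7)

print(avg_cycle_time)                 # 3.5 days on average
print(round(throughput_per_week, 1))  # 2.8 items per week
```

With a throughput history like this, a team can forecast how many items fit in a Sprint without sizing any of them individually.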
The Product Backlog ought to tell the truth at all times about how much work we currently believe remains for the Product.
The only purpose of estimation is to help the Developers get their arms around how much of that work they can take on in a Sprint. Everything else reduces to value delivery and empirical process control. I'd suggest that any measures and metrics ought to help a team in that endeavour.
As @Thomas and @Ian have said, estimation is only for the Developers to determine whether the work they want to select could reasonably be completed within the Sprint timebox. A single occurrence, or a small number of occurrences, of the situation you describe will not impact your averages, which are what you would be using as a guide. However, I would suggest the team have conversations about this in the Sprint Retrospective. If it becomes a regular occurrence, the team might find that their guide point is no longer useful. Help them focus the discussions on why it happened, not what to do about it if it happens again. Have them figure out why it is happening and put practices in place to prevent it. That will give them more confidence in their estimations going forward.
Consider this empirically. Knowledge comes from experience and making decisions based on what is observed. What has the team learned about this Backlog item based on their experience with it in the previous Sprint?
What decisions or adaptations should be made based on what is observed? What provides the most accurate representation of what is needed to get the Backlog item to Done, and does the current size estimate accurately reflect this?