If duration is historically derived from complexity estimation...
Why don't Sprint Teams always make this a retro item to look at how long it took to do something with respect to the complexity that they had assigned it in planning...?
This is not something I have ever heard any Scrum Master talk about doing... Is this because I was talking to the 'wrong' Scrum Masters?
Thoughts?
> Why don't Sprint Teams always make this a retro item to look at how long it took to do something with respect to the complexity that they had assigned it in planning
I see this as a great avenue to expose, in a data-driven way, many of the topics that would emerge at a Sprint Retrospective anyway. That said, I prefer that Scrum Teams allow themselves the freedom to use the Sprint Retrospective in other ways if they consider that more effective.
I frequently discuss cycle time with my teams.
For refinement, I advise the developers to first look at their cycle time at a certain percentile threshold (e.g. 85% of items have a cycle time of 7 calendar days or less). I then recommend that they ask themselves whether they believe the item being refined can be Done within that number of days. If not, they should reduce its complexity or size in some way, so that they are confident the item can be Done within that time.
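As a minimal sketch of what that calculation could look like, the cycle times, the 85% figure, and the helper function below are illustrative assumptions rather than output from any particular tool; in practice the data would come from the team's own board:

```python
import math

# Illustrative cycle times in calendar days for recently finished items;
# in practice these would come from the team's tracking tool.
cycle_times = [2, 3, 3, 4, 5, 5, 6, 7, 7, 9, 12, 21]

def percentile_threshold(times, pct=0.85):
    """Smallest cycle time such that at least `pct` of the items
    were Done within it (a simple empirical percentile)."""
    ordered = sorted(times)
    index = math.ceil(pct * len(ordered)) - 1
    return ordered[index]

print(f"85% of items were Done in {percentile_threshold(cycle_times)} days or fewer")
```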
This means for that item, one of two things will happen. Either:
- It will be Done within that number of days, and therefore contribute to a more predictable (and probably lower) cycle time.
- It will take longer than the predicted number of days to be Done, which gives the developers a perfect example to inspect why they were wrong on this occasion, and gather data for subsequent adaptation.
I generally advise the teams to inspect at least the slowest few examples over a sprint, so that they can look for patterns in what is causing slowness in the most extreme cases.
This could be a result of complexity, but it could also be the result of items waiting in a queue (e.g. Ready for Peer Review), ineffective refinement, the need for rework, dependencies on others, or decisions to shift focus once work on an item has already begun.
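To make that inspection concrete, here is a rough sketch of pulling out the slowest items from a sprint; the item names, dates, and causes are invented for illustration:

```python
from datetime import date

# Hypothetical completed items: (name, started, done, observed cause of delay).
# In practice this data would come from the team's board history.
items = [
    ("login fix",      date(2024, 3, 1), date(2024, 3, 4),  None),
    ("data migration", date(2024, 3, 1), date(2024, 3, 15), "waited in Ready for Peer Review"),
    ("report export",  date(2024, 3, 5), date(2024, 3, 20), "dependency on another team"),
    ("copy change",    date(2024, 3, 8), date(2024, 3, 9),  None),
]

# Sort by cycle time, slowest first, and review the worst few for patterns.
slowest = sorted(items, key=lambda i: (i[2] - i[1]).days, reverse=True)[:3]
for name, started, done, cause in slowest:
    days = (done - started).days
    print(f"{name}: {days} days" + (f" ({cause})" if cause else ""))
```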
I disagree with duration being based upon complexity. There are many simple tasks that can take much longer than complex tasks because of the need to wait on dependent tasks, or on long-running tasks to complete before others can be done. For example, starting a script to do a data migration may take a minute, but the script could run for hours. Complexity-wise, starting a script is pretty simple, and waiting is even simpler. I think you might not ever see this come up in the manner you expressed because people realize their mistakes and don't want to revisit them.
In teams I have worked with, that never came up, because the story was discussed every day in the Daily Scrum. During refinement activities, lessons learned from past refinement are applied. During Sprint Planning, history comes into play as the team plans.

Honestly, revisiting an estimate after the fact does not have much value, because you will never experience that exact situation again. Just the act of doing work means you are learning: each time you do a task, you get better at doing it, and if you aren't, you really should think about why. Estimates are guesses based upon the information you know at the time you make them. After that point in time, that exact situation will never occur again.
As @Simon Mayer points out, discussion of trailing indicators such as cycle time is much more beneficial. Those metrics are based upon actual work rather than guesses about work, and they incorporate learning as you go without requiring special consideration for it.
> Why don't Sprint Teams always make this a retro item to look at how long it took to do something with respect to the complexity that they had assigned it in planning...?
Perhaps they are more interested in meeting their Sprint Goals, and in managing their workflow for that purpose. Product Backlog items might have been assumed to be more or less of the same size, and not estimated differently at all.