Reporting on stories and tasks
How does everyone report on stories versus tasks? For example, we use VSTS and have stories, with tasks under each story, as shown in this example:
Tasks can then be dragged to their current status column all the way to completion. That part is easy enough. But how do you handle the status of the stories? Do you change the status/state when one task enters that state? When both have? Or do you just report on task completion and close the story when every task is closed?
Thanks.
Think in terms of the metrics which ought to be gathered for process optimization. A team might need a clear view of when work on an item starts and ends, how many items are in progress, the age of each item, and burn rates, for example.
These considerations will help shape their workflow policy. For example, they may decide that work starts on an item as soon as the first task is actioned, and finishes with the completion of the last one. Alternatively, they may decide work on an item starts at the beginning of the Sprint and only ends with a release into production.
What actually is it you are attempting to report though, and to whom and for what purpose?
Thanks, Ian. For stories, we're looking to evaluate our estimations. How accurate are our story point estimations?
Let's first consider why a team should care about that.
The purpose of estimation is to help a team forecast how much work it can reasonably take on and complete, with no tasks remaining, by the end of the Sprint. The overall work capacity for achieving a Sprint Goal is therefore the key consideration.
Individual estimates are subordinate to that concern, and hence it's important not to get lost in the weeds about their accuracy. A Scrum Development Team should not become story point accountants. Sweating over the accuracy of estimates for the wrong reason could be waste.
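To keep the focus at that Sprint level, one illustrative check (the numbers and export format below are entirely made up, not a prescription) is to compare the points planned at Sprint Planning with the points actually Done by the end of each Sprint:

```python
# Rough, illustrative check of Sprint-level forecasting (hypothetical data).
# The question is not "was each story point estimate right?" but
# "how close does the team's planned capacity come to what it completes?"

sprints = [
    # (sprint name, points planned at Sprint Planning, points Done at Sprint end)
    ("Sprint 12", 34, 29),
    ("Sprint 13", 30, 31),
    ("Sprint 14", 36, 26),
]

for name, planned, done in sprints:
    ratio = done / planned
    print(f"{name}: planned {planned}, done {done}, completion {ratio:.0%}")
```

A consistent pattern of over- or under-completion at this level tells the team more about its forecasting than the accuracy of any individual estimate would.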
The accuracy of individual estimates has more of a bearing on the flow of work during a Sprint. To improve flow, it would be reasonable to compare the age of each story against its estimate and other INVEST criteria. For example, if a story has a low estimate and is thought to be independent and testable, but for some reason languishes in progress for 90% of the Sprint, then it would be important to understand why.
A possible explanation for this could be that the estimates might not relate to cycle time. For instance, if estimates represent effort or complexity, they might completely miss other factors that affect cycle time. That might be OK, but even if the estimate was pretty accurate, there is still value in trying to reduce the cycle time.
One way of doing this might be to track the time from when work begins on each story to when it becomes part of a "Done" increment. This allows you to identify outliers, such as stories that took an unusually long time.
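As a sketch only, assuming you can export each story's estimate along with the dates work started and finished (the item IDs, field layout and dates below are invented), something as simple as this would surface those outliers:

```python
from datetime import date
from statistics import mean, stdev

# Hypothetical export: (story id, story point estimate, date work started, date it was Done)
stories = [
    ("PBI-101", 2, date(2018, 3, 1), date(2018, 3, 2)),
    ("PBI-102", 5, date(2018, 3, 1), date(2018, 3, 7)),
    ("PBI-103", 1, date(2018, 3, 2), date(2018, 3, 12)),  # small estimate, long cycle time
    ("PBI-104", 3, date(2018, 3, 5), date(2018, 3, 8)),
]

# Cycle time in days for each story
cycle_times = [(item_id, points, (done - started).days)
               for item_id, points, started, done in stories]

avg = mean(days for _, _, days in cycle_times)
sd = stdev(days for _, _, days in cycle_times)

for item_id, points, days in cycle_times:
    flag = "  <-- worth a conversation" if days > avg + sd else ""
    print(f"{item_id}: {points} pts, {days} days in progress{flag}")
```

The flagged items are simply conversation starters for the Development Team, not a judgement on the estimates themselves.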
The Development Team could then dive into that story in greater detail. There could be many explanations for a long cycle time. One could be the amount of work involved, but another could be that there was a long wait time, because of dependencies on factors beyond the team's control, or just poor self-organization.
Given how your team structures its Sprint Backlog, such that stories are divided into tasks, it may make sense to identify delays, such as when a story was in progress, but no-one was able to move a particular task across the board. Identifying the blockers and wait times is usually a precursor to eliminating or reducing them.
However, in your example, I see the tasks only seem to be "QA" and "Dev". Does QA typically start before Dev is completed? If not, it might make sense to stop using tasks and simply create a column on the board that highlights when "Dev" (I presume coding) is complete and the story is waiting to be tested, then another column to show that QA has started. This could help make it even clearer where the bottlenecks are. It could be that the biggest impact on cycle time is when the story is waiting between "Dev" and "QA".
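Purely as an illustration, assuming the board's column-transition history can be exported (the column names and timestamps below are invented), the wait between development and testing could be pulled out like this:

```python
from datetime import datetime

# Hypothetical column-transition history for one story, e.g. taken from the
# board or the work item's state history. Column names are for illustration only.
transitions = [
    ("In Dev",       datetime(2018, 3, 1, 9, 0)),
    ("Dev Complete", datetime(2018, 3, 2, 16, 0)),
    ("In QA",        datetime(2018, 3, 6, 10, 0)),
    ("Done",         datetime(2018, 3, 6, 15, 0)),
]

def wait_between(history, leave_column, enter_column):
    """Time a story spent sitting between two board columns."""
    times = dict(history)
    return times[enter_column] - times[leave_column]

waiting = wait_between(transitions, "Dev Complete", "In QA")
print(f"Story waited {waiting} between development and testing")
```

If that wait time turns out to dominate the cycle time, the team has found its bottleneck without needing to question any individual estimate.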
This could also be an entry point for combining the power of Kanban with Scrum, in order to really get a grip on your workflow.
It might be worth reviewing the Definition of Done for possible stations across which work can flow. Generally speaking, too much granularity is better than too little, at least until flow has been optimized and certain stations can potentially be abbreviated.
Thank you for the responses.