Effort time vs. real time
Hello,
I am part of an FPGA team and we are using Scrum for our planning.
We are facing an issue where some of our user stories show a gap between the team member's effort (which might be small) and the time it takes the task to complete (due to long compute time).
For example, an FPGA synthesis cycle involves only a small effort from the team member, but it finishes only after 24 hours of machine run time, and the team member can do other tasks in the meantime.
So how do we model this in iteration planning? On one hand the user story is a small effort and the team can take on more tasks; on the other hand, the "real time" to complete the task may be long.
Thanks
Matan
Matan,
This is a common mistake with Scrum estimating that new Scrum practitioners make.
Your difficulty comes from trying to equate the story size (effort) with an estimate of the time it will likely take to complete. That is an incorrect approach in Scrum. There are a number of factors that can affect the time it takes to complete a story (team member ability, story understanding, Definition of Done).
I like to use the analogy of a 50-lb boulder. The story example is to take the boulder and move it from point A to point B.
A "strong" team member may take much less time to move the boulder than a "weaker" (i.e. - less experienced) team member. They each may need a different time duration to complete the story, but it is still moving a 50-lb boulder.
Maybe the moving of the boulder is delayed because there is a long convoy of trucks passing between points A and B. The "effort" is the same, but it will take a lot longer.
That is why the preferred estimation method in Scrum is relative estimating. I would suggest avoiding any attempt to equate a time estimation to sprint work.
Hi,
Thanks for the answer.
Let me try to elaborate on the problem using your analogy.
Let's say I need to move 4 boulders from A to B. To do so I have to load a boulder onto a truck, and I can load the next one only when the truck returns. Meanwhile I can do other work.
So "my" effort is relatively small (I am on a strong team), but since the truck drives slowly, it will take the whole iteration to move the 4 boulders, and I can't start the next task that depends on this completion (though I can do other tasks).
So do I estimate this task as small? How do I model the constraint of the iteration limit?
Matan
Scrum does not magically make things better. In fact, it often raises the visibility and "pain" of current practices and policies.
In your analogy expansion, you have a slow-moving truck that is needed to move the boulders.
So I would have a few "follow-up" questions:
1) Why is there only one truck? Can there be more than one truck to help with the job?
2) Why are we using trucks that can only carry one boulder at a time? Can we find larger trucks to carry more than one boulder?
3) Why is the truck slow? What can we do to speed up the truck, and subsequently speed up the delivery?
To your question though, always estimate the effort. Do not include any potential impediments, limitations, or "waste" (i.e. - waiting) in your estimation. Those items are good topics for reflection and improvement during your retrospective.
> ...an FPGA synthesis cycle involves only a small effort from
> the team member, but it finishes only after 24 hours of machine
> run time, and the team member can do other tasks in the meantime.
> So how do we model this in iteration planning?
Since no developer effort is required during the run, the estimated size of the item should not include run time. The Sprint Backlog should, however, be planned in such a way that any dependencies on the item's completion are taken into consideration.
In other words, where resolving an impediment is impractical, plan around it so the Sprint Goal can still be met. Estimates should not assume the existence of impediments, but a Sprint Backlog should be ordered and planned so as to take them into account.
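A toy sketch of what "plan around it" can look like (this is not from the thread; all task names, efforts, and durations are invented): the long machine run is kicked off as early as possible so its wall-clock time overlaps with independent developer work, while each item's size still counts only developer effort.

```python
# Hypothetical Sprint Backlog planning sketch. One developer; machine runs
# continue unattended. Estimates count only developer effort; wall-clock
# run time only influences the ordering of the work.

tasks = {
    # name: (developer_effort_h, machine_run_h, depends_on)
    "kickoff_synthesis": (1, 24, []),                      # small effort, 24 h run
    "write_testbench":   (6, 0,  []),                      # independent dev work
    "review_rtl":        (4, 0,  []),                      # independent dev work
    "analyze_timing":    (3, 0,  ["kickoff_synthesis"]),   # needs the run's output
}

def plan(tasks):
    """Greedy single-developer schedule: among ready tasks, start the one
    with the longest machine run first, then fill the wait with other work."""
    done_at = {}        # task -> wall-clock hour its result is available
    dev_free = 0.0      # when the developer is next available
    remaining = dict(tasks)
    order = []
    while remaining:
        ready = [n for n, (_, _, deps) in remaining.items()
                 if all(d in done_at for d in deps)]
        name = max(ready, key=lambda n: remaining[n][1])   # longest run first
        effort, run, deps = remaining.pop(name)
        start = max(dev_free, *(done_at[d] for d in deps)) if deps else dev_free
        dev_free = start + effort       # developer is busy only for the effort
        done_at[name] = dev_free + run  # the machine run finishes unattended
        order.append(name)
    return order, done_at

order, done_at = plan(tasks)
print(order)
print(done_at)
```

With this ordering the synthesis run overlaps the testbench and review work, so only the genuinely dependent task waits for the 24-hour run; starting the run last would push everything dependent on it well past the iteration.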
Along the lines Timothy implied, your case is similar to continuous integration in software: after completing a task/user story you typically run a suite of tests to verify the change (and to verify it didn't break something else). In a project I have been working on, these tests took hours. The solution was to relieve this testing bottleneck by adding more computing power.
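A minimal illustration of that "more computing power" idea (the suite names and durations are invented, and short sleeps stand in for real test runs): with enough workers, independent suites run concurrently, so wall-clock time approaches the longest single suite rather than the sum of all of them.

```python
# Hypothetical sketch: shrinking test wall-clock time by running
# independent suites in parallel. Sleeps stand in for real test suites.
import time
from concurrent.futures import ThreadPoolExecutor

def run_suite(name, duration_s):
    time.sleep(duration_s)   # placeholder for actually running the suite
    return name

suites = [("unit", 0.2), ("integration", 0.2), ("regression", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda s: run_suite(*s), suites))
elapsed = time.perf_counter() - start
# Sequentially these would take ~0.6 s; with 3 workers, ~0.2 s.
print(results, round(elapsed, 2))
```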
Hope this helps,
Michael