Deep technical complexity and spikes
Hi!
In our group we struggle with estimation. We are improving a search engine, and many stories have deep technical complexity. There are always hard performance requirements, and there is no telling how a specific algorithm change or new optimization will affect other parts of the domain. So there is always a very high risk of the estimation being wrong.
We use spikes and prototypes, but still not enough.
Do you have any good advice on this?
Thank you
> We use spikes and prototypes, but still not enough.
Why aren't they enough? There's no constraint on how deep you can take a spike during refinement, so why can't you use them to get the information you need in order to estimate?
If you can't tell "how a specific algorithm change or new optimization will affect other parts in the domain", then is the environment in which you conduct spikes of sufficient quality? From what you say, it sounds as though it is not (or insufficiently) prod-equivalent.
Thank you for your answer!
Well, a specific impact on performance only shows up after implementing the modifications. For example, the same data abstraction/optimization layer is used for many kinds of operations. If they modify this part because of a requirement from one operation, another operation can slow down. The group knows that it might slow down, but it is just a feeling, backed by the "something will always go wrong" experience.
A spike does not seem to be enough to measure this effect.
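One way a spike could quantify this cross-operation effect is to benchmark every operation that shares the layer, before and after the candidate change, rather than only the operation that prompted the change. The sketch below is a minimal illustration of that idea; SharedLayer, build_index, run_query, and update_document are hypothetical stand-ins for the real entry points, and it assumes the spike runs against a reasonably prod-equivalent dataset.

```python
import statistics
import time

# Hypothetical stand-ins for the real entry points that all go through
# the shared data abstraction/optimization layer.
class SharedLayer:
    """Placeholder for the shared data abstraction/optimization layer."""
    def fetch(self, key):
        return key * 2

def build_index(layer):
    return [layer.fetch(i) for i in range(10_000)]

def run_query(layer):
    return sum(layer.fetch(i) for i in range(1_000))

def update_document(layer):
    return layer.fetch(42)

OPERATIONS = {
    "build_index": build_index,
    "run_query": run_query,
    "update_document": update_document,
}

def benchmark(layer, runs=20):
    """Return the median wall-clock time (seconds) of each operation."""
    medians = {}
    for name, op in OPERATIONS.items():
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            op(layer)
            samples.append(time.perf_counter() - start)
        medians[name] = statistics.median(samples)
    return medians

def regressions(baseline, candidate, tolerance=0.10):
    """List operations that slowed down by more than the tolerance."""
    return {
        name: candidate[name] / baseline[name]
        for name in baseline
        if candidate[name] > baseline[name] * (1 + tolerance)
    }

if __name__ == "__main__":
    before = benchmark(SharedLayer())  # layer as it is today
    after = benchmark(SharedLayer())   # layer with the spiked change applied
    print(regressions(before, after) or "no regressions above 10%")
```

With a guard like this run as part of the spike (or continuously against a prod-equivalent environment), "another operation might slow down" becomes a measured number the team can estimate against, rather than a feeling.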
It sounds to me as though the spike environment is not sufficiently prod-equivalent to be fit for purpose. If the team intend to use spikes as part of their refinement and estimating process, then that situation will need to be remedied.
Remember that if the team can't estimate a piece of work then it cannot form part of their Sprint commitment. In Scrum, a Development Team are under no obligation to accept any work, and plan it into their Sprint Backlog, if they are uncomfortable making a forecast for its delivery.
Plan the work that is needed to reach the level of estimation confidence you need as a deliverable in the sprint(s) ahead. That is a worthwhile deliverable within a solid engineering process.
Nice answer, Ian.
It strikes me that the development team might be light on a specific skill, if their work keeps failing the same part of the definition of done: "hard performance requirements." How do they feel? Have you asked the developers how they want to solve this problem? How many of them have encountered this sort of problem on other products? Does this seem normal? I assume that they are able to resolve the issue each time; do they think it is ok to deliver the work not DONE and then fix it? Do they realize the definition of done for this product includes those performance requirements? Get them talking and see where it goes.
Thank you for your valuable insights.
Zoltan, to go back to the original question. Have you asked the team what value there is in estimating the stories? Are the estimates used to change priorities? Based on the estimates, have you ever decided not to implement a story?
> Posted By Fredrik Vestin on 03 Jan 2017 02:22 PM
> Zoltan, to go back to the original question. Have you asked the team what value there is in estimating the stories? Are the estimates used to change priorities? Based on the estimates, have you ever decided not to implement a story?
They do not see too much value in estimates. They see it as something the PO needs to be able to plan releases and make priority decisions.
I'm quite new to the group (6 weeks so far), but it seems to me that the answer to both of the remaining questions is yes :)
> They do not see too much value in estimates. They see it as something
> the PO needs to be able to plan releases and make priority decisions.
What about for their own Sprint Planning purposes, when it comes to deciding how much work they can take on? How are they currently making this assessment?