Product-based validated learning
If each Sprint essentially exists to test a hypothesis (for example, Sprint 20: build a Shopping Cart feature because customers cannot easily purchase multiple items), do you think there is enough emphasis on preparing and/or enabling the Product Owner to learn from a Sprint's results? Obviously, the Product Owner will want to determine as soon as possible whether the Shopping Cart is heading in the right direction.
What if...
1. End of Sprint 20 and the Shopping Cart is released
2. End of Sprint 21 and the Luxury Features for the Shopping Cart are released
3. End of Sprint 22 and the Accessibility Features for visually impaired users of the Shopping Cart are released
4. Beginning of Sprint 23 and the PO has now learned that the original Shopping Cart was a poor idea, because the users of this particular website prefer to buy one item at a time due to the nature of the product - the PO decides to roll back and remove the Shopping Cart feature entirely
Has the Scrum Team just wasted two Sprints (21, 22) on building additional features for a hypothesis that turned out to be wrong?
Should the Product Owner have prioritised the work differently, adding the additional features only once they'd learned from Sprint 20?
What happens if there is little else on the backlog, so that the Luxury Features were the only new set of features to work on?
-
In the past as a Scrum Master, I don't think I have focused enough on this aspect of serving the Product Owner, if at all. How have you helped the Product Owner with this? Has this ever been impeded by 'the business' in larger organisations? What are some techniques you've applied?
In the situation you describe, the validation of the basic shopping cart hypothesis was dependent upon trailing indicators. To reduce the possibility of wasted Sprints, metrics ought to be selected which are immediately actionable. A team doesn't even need to wait until the end of the Sprint to validate an MVP, as increments can be released at multiple discrete points. A concierge shopping cart function may have been enough to test the basic hypothesis in this case. It's all about reducing waste and the leap of faith needed to invest in continued product development.
Makes sense, Ian. Still, I know I've never put enough effort into that aspect in the past (down to ignorance, not laziness... maybe both), i.e. coaching a Product Owner on learning from an increment. I've never talked to another Scrum Master face to face about it either, nor seen another do it.
Do you have any interesting accounts in this area?
Some organizations deploy MVPs multiple times per day, or even per hour, in order to A/B test a hypothesis. They may select a very small (perhaps randomized) cohort of users for each MVP. Where user transactions run into the thousands or millions, the resulting metrics are likely to be statistically significant.
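As a rough sketch of how such a cohort comparison might be evaluated, here is a minimal Python example. The function names, cohort sizes, and conversion counts are all illustrative assumptions rather than figures from any real product, and a simple two-proportion z-test stands in for whatever analysis an organization would actually use:

```python
import math
import random

def assign_cohort(user_id: str, test_fraction: float = 0.01) -> str:
    """Place a small randomized fraction of users into the MVP cohort."""
    return "mvp" if random.random() < test_fraction else "control"

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

cohort = assign_cohort("user-42")  # most users remain in the control group

# Illustrative numbers only: a 5,000-user MVP cohort vs. a 500,000-user
# control group, with hypothetical conversion counts for each.
z = two_proportion_z(conv_a=230, n_a=5_000, conv_b=19_000, n_b=500_000)
print(f"z = {z:.2f} (|z| > 1.96 suggests significance at the 95% level)")
```

Note how the sheer volume of control-group transactions is what lets even a small cohort produce a usable signal quickly.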
A common challenge is that even when IT is locally optimized, with CI/CD in place and cross-functional development teams, the business may not be culturally ready to make use of this capability.
Hi,
It's not only about knowing the value of the increment that we released, but also the value of the increment that we plan.
As per the Scrum Guide, the PO's responsibility is to maximize the value of the product. To do that, the PO should consider feedback from users when defining and prioritizing requirements. How and why did we decide to go for a shopping cart? Do we get regular feedback from end users?
Also, during the Sprint Review, the entire group should collaborate on what to do next and update the backlog accordingly. Do we have stakeholders in the review who can give the right feedback?
Thanks.
A team doesn't even need to wait until the end of the Sprint to validate an MVP, as increments can be released at multiple discrete points. A concierge shopping cart function may have been enough to test the basic hypothesis in this case.
Some organizations deploy MVPs multiple times per day, or even per hour, in order to A/B test a hypothesis.
Hi Ian, just coming back to this as I think I'm still unsure.
I assume this would be a practice that works to reduce the leap of faith by ensuring that the Sprint Goal is still relevant. But how does this happen? How is an MVP created so quickly and how is a Development Team expected to plan for that?
Perhaps more importantly, if the hypothesis is tested mid-Sprint and value can be obtained, why would we not just say the Sprint is too large and reduce the Sprint length?
A Sprint Goal allows a complex feature to be tackled for which there can be multiple unknowns. For example, a Development Team may feel confident about being able to develop an online shopping cart, and their forecast of work in the Sprint Backlog may be realistic for such a Goal. However, the Sprint Backlog will not necessarily be a specification, and the way work is implemented may be "negotiable" as per the INVEST criteria. There may be room to experiment with the UX workflow, or the screen layout, so that shoppers are more likely to complete a transaction.
There could be a great many hypotheses to be tested about the optimal implementation during the Sprint. Each may involve the release of an MVP, even if it represents a minuscule change like shifting a button a few pixels.
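To make the mechanics of that more concrete: one common way a team can release many small mid-Sprint MVPs without a deployment per change is to put each variant behind a feature flag and bucket users deterministically. The sketch below is a minimal, hypothetical Python illustration; the experiment name, exposure fraction, and pixel offset are invented for the example, and a real team would typically drive this from a flag service or config store rather than hard-coded values:

```python
import hashlib

# Hypothetical in-code flag table; in practice this would live in a
# flag service or config store so it can change without a redeploy.
EXPERIMENTS = {
    "cart_button_offset": 0.05,  # expose 5% of users to the moved button
}

def in_experiment(user_id: str, experiment: str) -> bool:
    """Deterministically bucket a user so they always see the same variant."""
    fraction = EXPERIMENTS.get(experiment, 0.0)
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return bucket < fraction

# Rendering code branches on the flag rather than on a new release.
if in_experiment("user-1234", "cart_button_offset"):
    button_offset_px = 4   # experimental layout
else:
    button_offset_px = 0   # current layout
```

Since exposing or retiring a variant is then a flag change rather than a new deployment, several such hypotheses can be validated within a single Sprint.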
In the context of the rest, that last line really makes it click for me. Thanks.