Nuances in the Scrum Guide that might be missed at first glance
Hi,
In the past week, I had the opportunity to talk about some parts of the Scrum Framework. In this post, I will focus only on one of them.
Long story short, the topic was the Sprint Review and what should be shown or discussed during this event. Most opinions I have come across, even in older threads on the internet, share a common focus on showing a working Increment. And here comes the twist: let's inspect the description in the Scrum Guide (SG):
Sprint Review
A Sprint Review is held at the end of the Sprint to inspect the Increment and adapt the Product Backlog if needed. During the Sprint Review, the Scrum Team and stakeholders collaborate about what was done in the Sprint. Based on that and any changes to the Product Backlog during the Sprint, attendees collaborate on the next things that could be done to optimize value. This is an informal meeting, not a status meeting, and the presentation of the Increment is intended to elicit feedback and foster collaboration.
This is at most a four-hour meeting for one-month Sprints. For shorter Sprints, the event is usually shorter. The Scrum Master ensures that the event takes place and that attendees understand its purpose. The Scrum Master teaches everyone involved to keep it within the time-box.
The Sprint Review includes the following elements:
- Attendees include the Scrum Team and key stakeholders invited by the Product Owner;
- The Product Owner explains what Product Backlog items have been "Done" and what has not been "Done";
- The Development Team discusses what went well during the Sprint, what problems it ran into, and how those problems were solved;
- The Development Team demonstrates the work that it has "Done" and answers questions about the Increment;
- The Product Owner discusses the Product Backlog as it stands. He or she projects likely target and delivery dates based on progress to date (if needed);
- The entire group collaborates on what to do next, so that the Sprint Review provides valuable input to subsequent Sprint Planning;
- Review of how the marketplace or potential use of the product might have changed what is the most valuable thing to do next; and,
- Review of the timeline, budget, potential capabilities, and marketplace for the next anticipated releases of functionality or capability of the product.
The result of the Sprint Review is a revised Product Backlog that defines the probable Product Backlog items for the next Sprint. The Product Backlog may also be adjusted overall to meet new opportunities.
The words I want to focus on are "done" and "Done". In the introduction to this part, the creators of the SG use the word "done" simply as a plain word, but throughout other parts of the SG, and within this part as well, they also write it as "Done". If we inspect it further, we may come to the conclusion that whenever they use "Done", they mean "Done according to the Definition of Done", so when they write plain "done" here, they do it on purpose, to invoke the wider, everyday understanding of the word.
For example, if we (the Scrum Team) have done some process improvements, research, a Proof of Concept, or maybe just some designs or wireframes, and that work is clearly still not "Done", we can still collaborate on it with stakeholders during the Sprint Review, as long as we consider their feedback valuable.
What do you think about it? Is it just nitpicking or not?
I agree that the distinction between 'done' and 'Done' is likely intentional.
You bring up an interesting example, and I think a lot of teams have trouble fitting this type of work into their Scrum framework.
I'm a big fan of Design Sprints and prefer to include them within the normal Sprint container when there's a lot of new design or proof-of-concept work that needs to be completed. These can end with a 'Design Sprint Review', where the team gathers additional feedback on the more complete prototypes or proofs of concept that were collaborated on and created during the Sprint.
Another potential place for this type of work could be in backlog refinement, or throughout the Sprint if it's feasible to prototype, review, and develop within the time-box.
Scrum.org's PSU course really stress-tests the Definition of "Done", particularly as you explore what happens when teams become responsible for product discovery and validation. That work can often take longer than a Sprint, simply because getting meaningful data usually has a minimum lead time.
There is definitely a place for experimentation and learning prior to, during, and after development (and release) of a "Done", releasable Increment.
If such learnings can enable better inspection and adaptation at a Sprint Review, why wouldn't you include them?