Good practices around defining suitable acceptance criteria
Hello there
I have worked with the Scrum framework for a long time now and have seen numerous approaches to defining backlog items (user stories), including different strategies for defining acceptance criteria.
Sometimes the given-when-then model was used, which I find unnecessarily complicated and unintuitive; other times there was just a list of bullet points covering various domain-related criteria.
I have come to appreciate a well-defined Definition of Ready (DoR) and Definition of Done (DoD) that function as checklists.
But what I have noticed is the, let's say, "arbitrariness" of the scope of these definitions, especially during refinement sessions. What I mean is that sometimes someone raises security concerns (non-functional criteria), sometimes performance, sometimes an edge case of a business function, and sometimes a UX detail.
This made me wonder whether we could systematize the discussion and definition of the various criteria, so I started working on a list (within the DoR and DoD) that poses aspect-related questions: questions about security, risks, edge cases, testing, UX, etc., which should be considered during refinement in order to reduce scope ambiguity, foster clarity, and so on.
So my question(s):
- Have you had the same concerns when working with acceptance criteria (or a DoR/DoD)? If not, why not?
- Is there a good practice, almost like a checklist, for making sure the different aspects of any given user requirement are considered?
Think of acceptance criteria as representing a certain level of Done, and a level of quality for which the Developers are accountable. So:
- The due diligence you are trying to systematize is reasonable, bearing in mind the commitment they are making, but
- this doesn't necessarily mean that quality measures are never specific to a particular Product Backlog item. They can be.
This isn't a matter of "arbitrariness"; you're just observing the level of specificity regarding Done for which the Developers are accountable.
> this doesn't necessarily mean that quality measures are never specific to a particular Product Backlog item. They can be.
Thank you, this is exactly what I have in mind as well.
Maybe my question is still not well worded. I wanted insights into actual living examples of such lists for guiding the acceptance criteria discussion, so that no aspect gets missed.
Something like a checklist of whether or not we, as a team, have thought about important aspects like those I gave examples of.
The only "list" I have ever worked with in a Scrum setting was the Definition of Done. And even then, it was not always a list. Lists tend to lead to process, which can be an enemy of empiricism. I help teams realize that each item is unique and should be treated as such. Using Sprint Retrospectives, I help them start to realize what type of information is useful for understanding the work associated with improving the Product on which they work. I prefer to let them grow organically into the situation where they feel they know their product.
You describe a common situation, where one individual thinks of a concern that the others didn't. That is why refinement is not an individual's responsibility but a group responsibility. Sharing the knowledge behind each person's thoughts on the matter is what drives teams to get better at the activity. Maybe instead of looking for lists to use, you could focus more on the activities the team does related to refinement. Are they discussing their thoughts? Are they learning from the past? Do they question each other's thoughts to invoke conversation? These kinds of focusing activities have served me and the teams I have been involved with much better than any checklist could.