Zero Done Stories in a sprint (No Shippable PBI)
We started a 2-week sprint assuming that the testing environment was ready. However, we found out during the sprint that it was not. We kept working on tasks like development, code review, and creating test cases. In parallel, we tried to get the testing environment up and running.
Now the sprint timebox is over, and the testing environment issue is still not resolved. So none of the user stories has been tested, and hence none of them is "Done".
What is the best practice for the Sprint Review of such a sprint?
My thoughts:
- Change the definition of "Done" for this sprint to exclude testing, since we know the root cause, and move the testing tasks to the next sprint. In this case all stories will be "Done" according to this sprint's custom definition (but no shippable PBI will be delivered this sprint).
- Extend the sprint by one extra week to meet the original "Done" definition and deliver a shippable PBI.
- Keep things as they are and accept the situation, since the team put in the required effort; then ship the items of this sprint and the next sprint together.
@Muhammad Gouda, consider whether the Sprint Goal was endangered when your Dev Team found out that the testing environment was not ready. If so, consider how you would inspect and adapt to that situation, and what steps you and your team can take to prevent it from happening in the future. If it is an impediment, consider whether the Scrum Master should get involved.
Once you discovered that the environment was not ready, yet was essential for getting work to Done per the team's Definition of Done, why was the focus on anything other than getting the test environment up and running? If, instead of working on the test environment in parallel with other work, you had dedicated the team to getting the test environment up and running, would it have been possible to get something to Done that would have added value in the Sprint?
These are good questions to discuss, as a team, during the Sprint Retrospective.
As far as your options go...
Change the definition of "Done" for this sprint to exclude testing, since we know the root cause, and move the testing tasks to the next sprint. In this case all stories will be "Done" according to this sprint's custom definition (but no shippable PBI will be delivered this sprint).
There's a good reason why testing in your particular test environment is part of your Definition of Done. Without it, do you have confidence that your work is suitable for the stakeholders? Personally, I believe that the Definition of Done should generally get more strict over time, not less. Even if you made this change, when you change the Definition of Done back you will have a pile of undone work. There's also less motivation to get the testing environment up and running if you can get work to Done without it. There are a lot of risks here and not a whole lot of reward.
Extend the sprint by one extra week to meet the original "Done" definition and deliver a shippable PBI.
This is not a viable option if you are following the Scrum Guide. A Sprint is a fixed timebox. Based on inspection and adaptation you may find that your Sprint cadence is not appropriate and adjust it, but that shouldn't be a one-off change made because you couldn't finish the work on time.
Keep things as they are and accept the situation, since the team put in the required effort; then ship the items of this sprint and the next sprint together.
I believe that this is the best option. You should use your Sprint Retrospective to understand why this happened and what can be done to prevent similar occurrences in the future.
We started a 2-week sprint assuming that the testing environment was ready. However, we found out during the sprint that it was not.
Scrum is based on evidence, not assumptions. The team did not "find out" during the Sprint that the testing environment was unavailable; its availability was never satisfactorily demonstrated to begin with. None of this stops a team from Sprinting, but it does shape the kind of goal commitment they can reasonably make.
We kept working on tasks like development, code review, and creating test cases. In parallel, we tried to get the testing environment up and running.
A more realistic commitment may have been to ensure that all set-up work, including testing environments, is taken care of by the end of the Sprint. That may have been a better focus, and it's something you might consider for the next Sprint.
Remember that something of value and of usable quality has to be delivered by the end of each Sprint. Scrum doesn't say anything about how significant it must be in relation to other work done. 99% of the Sprint effort can be in establishing a test environment, for example.
If just one small feature is then actually completed to Done standard, such as a usage scenario from one Product Backlog item, that can be all it takes to empirically demonstrate value.
@Steve Matthew and @Ian Mitchell, thanks for your advice on what should have been done in the first place and how to avoid a similar situation in coming sprints. But to be honest, you didn't give a direct answer to my question about how to run the Sprint Review of such a failing sprint.
@Thomas Owens, thanks for discussing my thoughts in detail and providing a clear recommendation.