We're failing to deliver the release on time and it's not being deployed
For the 4th release in a row, we have not managed to cut the release by the end date; it has rolled on for weeks due to quality issues.
We have not deployed the last 3 releases to the customer either.
What should a Scrum Master do?
The Scrum Master should get some more information about what's going on:
- What is the relationship between release and your Sprint cadence? Are you supposed to be releasing during a Sprint or at the end of a Sprint or every couple of Sprints?
- What types of quality issues are being found? Is work not getting to a Done state because of issues found? Is a stakeholder performing some kind of acceptance testing on the product and rejecting it?
- What is being done at Sprint Retrospectives to find and address the underlying problems causing these quality issues?
Everything that @Thomas Owens said. This is not a Scrum Master's problem to solve, but it is one where the Scrum Master can play a pivotal part in facilitating the team's ability to solve it. You have a symptom of a problem: code is not being deployed to production. Now facilitate the fact finding to understand the root cause, then help the team determine their solution. This may be an iterative process that takes time to find the actual root cause. Use the Retrospectives and the Sprint Reviews to find it. It will most likely take time outside of those two events to figure it out. To put this into terms that the Developers can relate to, present it as "we have a bug in our process". The same types of techniques used to find a bug in software can apply to finding a bug in a process.
Remember that Scrum Masters do not solve problems, they enable teams to solve problems.
- What is the relationship between release and your Sprint cadence? Are you supposed to be releasing during a Sprint or at the end of a Sprint or every couple of Sprints?
4 sprints in a release, releases are 3 weeks. We find issues after the final sprint, and this drags on for weeks. Issues are not always associated with recent changes. Testing does also happen as part of development.
- What types of quality issues are being found? Is work not getting to a Done state because of issues found? Is a stakeholder performing some kind of acceptance testing on the product and rejecting it?
Solution issues are found, e.g. a feature is added, then an issue is found with a scenario which is later deemed essential, or in an area which was not previously thought to be affected.
- What is being done at Sprint Retrospectives to find and address the underlying problems causing these quality issues?
These issues are not discussed in reviews and retrospectives. They appear outside of the release, after the teams have already moved on to the next release; the issues come into a sprint and are dealt with without RCA.
Thanks @daniel.
We do not have responsibility for Deployment, which is a problem; it sits with another group. It can take some time for the release to be deployed, and in the interim we are on the next release, having to return to the previous one to fix the issues.
For the 4th release in a row, we have not managed to cut the release by the end date; it has rolled on for weeks due to quality issues.
We have not deployed the last 3 releases to the customer either.
What should a Scrum Master do?
The work is not Done; there are quality issues. Yet the team rocks on regardless and without empirical feedback: each Sprint is fake. Go to the Developers and shine a light on their accountabilities.
The work is not Done; there are quality issues. Yet the team rocks on regardless and without empirical feedback: each Sprint is fake. Go to the Developers and shine a light on their accountabilities.
I agree that it's not Done.
I know that it's not Agile and it's not Scrum!
Developers in the Scrum Teams are doing a great job, testing as much as they can with the help of the testers. However, when the integrated code gets exercised properly it seems to run into issues. We do need to pull this integration testing into the Sprint.
The cadence described (4 Sprints between releases, 3-week releases) doesn't make much sense. Do you mean 3-week Sprints with 4 Sprints between releases (so a release every 12 weeks)? Or that you have 4 Sprints, then hand something off to a downstream team, and they work on it for 3 weeks while the teams go on to other things? However, this doesn't really change anything else that I have to say.
You say that testing happens as part of development and that the Scrum Teams are doing a great job testing. This doesn't align with finding issues in the integrated code. What is being tested if it's not integrated code? Perhaps it makes sense to test the changes associated with a Product Backlog Item independently to confirm that they are correct, but as soon as that is done, that set of changes should be integrated into the rest of the product and the necessary testing can be done to make sure that the system is properly integrated. That is, do at least some level of integration testing after completing each Product Backlog Item. Since you're in the software space, use test automation to shift some of the load of this repetitive testing away from people, so they can focus on things like exploratory testing or usability testing that are difficult to automate. If you can write automated tests as part of each Product Backlog Item, you can grow and maintain the test suite with every change.
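To make that concrete, here is a minimal, self-contained sketch of what an integration-level automated test could look like, using Python and pytest purely as an example; the order/inventory domain and every name in it are invented for illustration, not taken from your product.

```python
# Illustrative only: an "integration-level" automated test that exercises
# components wired together for real, rather than a unit behind mocks.
import pytest


class InventoryRepository:
    """In-memory stand-in for a real persistence layer (invented for this sketch)."""

    def __init__(self, stock):
        self._stock = dict(stock)

    def available(self, sku):
        return self._stock.get(sku, 0)

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError(f"insufficient stock for {sku}")
        self._stock[sku] -= qty


class OrderService:
    """The kind of component a Product Backlog Item might have changed."""

    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, sku, qty):
        self._inventory.reserve(sku, qty)
        return {"sku": sku, "qty": qty, "status": "confirmed"}


def test_order_reduces_stock_end_to_end():
    # Real wiring, no mocks: the test checks the interaction between parts.
    inventory = InventoryRepository({"WIDGET": 5})
    service = OrderService(inventory)
    order = service.place_order("WIDGET", 3)
    assert order["status"] == "confirmed"
    assert inventory.available("WIDGET") == 2  # downstream effect is verified


def test_order_rejected_when_out_of_stock():
    inventory = InventoryRepository({"WIDGET": 1})
    service = OrderService(inventory)
    with pytest.raises(ValueError):
        service.place_order("WIDGET", 2)
```

The point of the sketch is that the tests wire real components together instead of mocking the boundary, so they can catch the kind of interaction problems you are currently only seeing after the release.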
When you find issues outside of the Scrum Team that are critical enough to have to work on immediately, take the time to do an RCA on them. You don't necessarily need to use a lot of formal tools; a five whys or three-legged five whys would be plenty, with the involvement of the right people. Figure out what the team missed that caused the downstream problems and make sure the team changes how they work. There are underlying reasons why a scenario is found to be essential even though the team didn't realize it in planning or development. There are also reasons why integration is leading to defects. Get to the root cause and fix it.
Thanks Thomas.
4 x 3-week sprints.
Yes, there does seem to be a difference between what is being tested during a sprint and what is being tested after a release. I am unsure what the difference is.
RCA, yes, I've done this before and have just met with the teams to discuss starting this initiative again. There's a good article here on this and the use of categories:
https://medium.com/propertyfinder-engineering/root-cause-analysis-as-a-…
You may want to see if the problems being found downstream are actual code defects or if they are caused by a different environment makeup. It is not uncommon that a Development or Test environment will not be configured like the Production environments. In my QA days I always made sure we had a test environment that was configured the same as Production but on a smaller scale.
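If you want to make that comparison routine, a small script that diffs the two environments can surface drift early. A minimal sketch in Python, with entirely invented settings and values, might look like this; in practice the inputs would come from your configuration management or infrastructure inventory rather than hard-coded dictionaries.

```python
# Illustrative only: a tiny configuration-drift check between environments.
# The keys and values below are invented for this sketch.

test_env = {
    "app_version": "2.4.1",
    "db_engine": "postgres 14",
    "tls_termination": "load balancer",
    "feature_flags": {"new_pricing": True},
}

prod_env = {
    "app_version": "2.4.1",
    "db_engine": "postgres 13",        # drift: older engine in production
    "tls_termination": "application",  # drift: different topology
    "feature_flags": {"new_pricing": False},
}


def report_drift(test, prod):
    """Print every setting whose value differs between the two environments."""
    for key in sorted(set(test) | set(prod)):
        if test.get(key) != prod.get(key):
            print(f"{key}: test={test.get(key)!r} prod={prod.get(key)!r}")


if __name__ == "__main__":
    report_drift(test_env, prod_env)
```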
Root Cause Analysis is a great way to start diagnosing how and why you have this situation. And don't be surprised if you find a problem, address it, only to find that another problem has been uncovered.