Help needed as I am losing my mind 🤯
Hello my friends,
I am currently working as a Scrum Master for a company that provides employee deployment/HR/Payroll software for retail clients.
The system is very complex, from rota setup to calculating budgets. This means that any new feature request requires extensive regression testing before release, since any change in the code may affect something unrelated to the user story, and this only surfaces during the QA team's integration testing.
We have 2 streams, Maintenance (Kanban) & Product (Scrum). There is no defined process at the moment. I only recently started here and am trying to help them define one.
I face a few challenges. Firstly, they use monday.com and do extensive QA as soon as a story is done in a sprint. Test cases include functional, integration, and security tests on each user story. Additionally, there is currently a 2-week release cycle, and it feels like they don't release anything because QA fails 1 out of 99 test cases over what is often a very small corner case.
I want to introduce release planning by having a unified framework (Scrum) for both streams: planning 2-week sprints, then handing a release to the QA team for user story, integration, security & regression testing. This will help them raise bugs, give the Product side more ownership in addressing issues in the system and signing off the release, and allow me to track velocity better. Currently items are moved from sprints to QA as tickets, and monday.com is not as good at tracking the whole journey.
What would you suggest?
What investment has been made so far in test driven development (TDD / BDD), and automated build & regression testing? Is there a measure of code coverage by test packs for example, with a minimum level asserted in the Definition of Done?
Also, bear in mind that in Scrum there is no "signing off" of a release by a separate authority or team. Once something is Done by the Developers the imperative is to put it to use. They would monitor their own progress towards their commitments -- it isn't a Scrum Master's job to do that.
I see several opportunities for improvement in what you describe.
You use the term "QA team". As a software engineering community, we have found that cross-functional teams with all the necessary skills to carry out all the work, from defining the problem to deploying the solution, are more likely to be successful. You see this principle embodied in several places. In Scrum, the Scrum Team is described as having "no sub-teams or hierarchies" and as one where "the members have all the skills necessary to create value each Sprint". In Lean Software Development, we recognize that hand-offs between people and teams create waste. Other lean and agile approaches take similar stances.

It's OK to have specialists in testing and quality control. Those specialists should be embedded within the team, working alongside people with other skills from the initial requirement through to deployment. On top of that, people should be cross-training each other so they can help out with various tasks and reduce bottlenecks on specific individuals with specific skills.
It also sounds like you have extensive manual testing. Automated testing increases the speed of delivery and confidence in the work done. By automating tests, you can run them much more frequently, run them in parallel (perhaps on different branches or versions of the software), or run them at all hours of the day. Test automation doesn't replace the need for some manual testing, especially in usability testing or exploratory testing, but it does reduce the need for manual testing, especially in regression testing.
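To make the regression-automation point concrete, here is a minimal sketch of an automated regression test. The payroll rule and all names are invented for illustration and are not taken from the poster's system; a real suite would live in a test framework such as pytest and run on every commit.

```python
# Hypothetical payroll rule: base rate up to a weekly threshold,
# then time-and-a-half for overtime. Purely illustrative.
def weekly_pay(hours: float, rate: float, overtime_threshold: float = 40.0) -> float:
    base = min(hours, overtime_threshold) * rate
    overtime = max(hours - overtime_threshold, 0.0) * rate * 1.5
    return round(base + overtime, 2)

def test_no_overtime():
    # 38 hours at 10.0/hour stays under the threshold.
    assert weekly_pay(38, 10.0) == 380.0

def test_overtime_is_time_and_a_half():
    # 40 * 10.0 base plus 5 overtime hours at 15.0.
    assert weekly_pay(45, 10.0) == 475.0

if __name__ == "__main__":
    test_no_overtime()
    test_overtime_is_time_and_a_half()
    print("all regression checks passed")
```

Once rules like this are pinned down in tests, the "1 failing case out of 99" situation is caught in minutes on the developer's machine rather than at the end of a release cycle.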
On top of testing, you can consider other types of automation. Using linters, static code analysis, and vulnerability scanners can help ensure you are automating other aspects of code quality and security and getting faster feedback. Like with testing, you don't necessarily need to run all checks all the time. Some checks can be run at night or on weekends, especially if they are long-running or otherwise costly.
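As a toy illustration of what a static-analysis check does, the snippet below uses only Python's standard `ast` module to flag bare `except:` clauses, a common code smell. In practice you would use an off-the-shelf linter rather than writing your own; this just shows the kind of fast, automated feedback such tools give.

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of `except:` clauses with no exception type."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

snippet = """
try:
    risky()
except:
    pass
"""

# Reports the line of the bare except, before the code ever runs.
print(find_bare_excepts(snippet))
```

Checks like this run in seconds, which is why they can sit in a pre-commit hook or build pipeline and catch problems long before a manual QA pass would.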
I'm not familiar with monday.com other than having heard of it. Although we want to pay attention to people and interactions, we can't ignore tools. If the tools you use aren't well integrated or are getting in the way, it could be time to look at other tools. If monday.com isn't good at tracking the end-to-end work and integrating it with other development tools, then I'd suggest looking at what tools can offer end-to-end work tracking and integration that would make it easier to understand and visualize the workflow.
I have over 20 years of experience in software quality assurance. I have found over that span that using dedicated engineers to run a set of regression tests after the developers say they are done is wasteful.
- Very often the tests will not find anything so major that it has to stop the release of the software.
- This requires the developers to hold off working on the next iteration. Or it forces them to stash work on a new branch, switch back to the old branch, re-familiarize themselves with it, find the real cause of the issue, fix it, push it, then switch back to the new branch and re-familiarize themselves with that before resuming work -- only to risk doing it all over again if anything else is found. Very wasteful and frustrating for everyone.
- This implies that the developers are not trusted to do their job.
- There could be issues with the upfront refinement of the work if something is found so late in the process.
I could go on, but I think you get my point. I am a BIG proponent of automated testing. Anyone in software quality should be familiar with the Testing Pyramid. Notice that the unit and integration layers are much bigger than the others? That is because those tests find issues faster, narrow the focus to the offending area of code, and make discovering and implementing a fix much quicker.
I have worked with a number of teams where I used my Quality Assurance Team to help educate the Developers on how to write good unit and integration tests. They reviewed the code at check-in to make sure that adequate testing was included. They would even contribute to the code themselves when capable of doing so (think Software Development Engineer in Test, or SDET).
Software development has evolved to a point where long regression testing cycles are no longer necessary. There are a multitude of tools and techniques available. As a Scrum Master, I'd suggest that you bring this up to the team at a Retrospective. Have them do some research into techniques that they could start implementing to improve the process of delivering software. I'm sure they are as frustrated as you and would jump at the chance to improve. Sure, it could be work for them. Sure, it could initially slow down delivery of new features, but that would be short-lived. In the long run it will improve the team's ability to deliver valuable increments of change faster and more frequently.
I completely agree with Daniel. Bringing these challenges up during a Retrospective is a perfect opportunity to engage the team in open dialogue about the current process. Encouraging the team to research and discuss techniques for improving software delivery can empower them and may lead to innovative solutions that benefit everyone.
It's important to acknowledge that while implementing new practices might require an initial investment of time and potentially slow down feature delivery temporarily, the long-term gains in efficiency and the ability to deliver valuable increments more frequently will be worth it.
As a Scrum Master, facilitating this discussion not only enhances team collaboration but also fosters a culture of continuous improvement. By involving the team in these decisions, you support their ownership of the process and create an environment where everyone feels comfortable suggesting and testing new ideas.