
Specialized Scrum teams for system and solution integration testing, with functional testing done on demand

Last post 01:26 pm December 26, 2025 by Thomas Owens
3 replies
05:43 am June 25, 2024

Hi Team,

We are currently looking at a proposal where there would be two specialized Scrum teams performing system integration testing and solution integration testing, in conjunction with the actual development Scrum teams, helping to shift left all the integration testing effort. These specialized teams would perform continuous integration tests on deliverables from all the different Scrum teams (8 total).

To support this model, there would be no test resource statically deployed in any of the development Scrum teams; a validation resource would be deployed on demand within a team only during Sprints where there are testable user stories. This would cover all functional testing needed to accept a story and meet the Done criteria, leaving no story untested within the Sprint, provided it is sized well enough for development and testing to be completed within the Sprint boundaries. Meanwhile, the system integration and solution integration Scrum teams would pick up deliverables continuously and perform the required integration tests.

These teams would operate under Scrum@Scale, and to address the communication gap, each individual Scrum team, along with the system integration and solution integration teams, would be part of the Scrum of Scrums and leverage meetings like the Scaled Sprint Planning and Scaled Daily Scrum for effective communication. Do you think this approach would still suffer in terms of communication, and if yes, what could be the possible workarounds/solutions to address that? Thanks

Regards,

HG


09:59 am December 22, 2025

I have seen variations of this model work, but only when the communication and ownership lines are very explicit. The biggest risk is not the lack of embedded testers; it is diffusion of responsibility. When validation is on demand and integration testing sits in separate scrums, teams can quietly start assuming "someone else will catch it later." That usually shows up as late surprises or integration backlogs.

Two things tend to make or break this setup. First, very clear entry and exit criteria between dev scrums and the integration scrums. Integration teams need predictable signals for what is ready, what changed, and what assumptions were made during functional testing. Second, tight feedback loops. Findings from system or solution integration testing have to flow back into the originating scrum fast enough to still matter within the sprint or at least the next one.

Scrum of Scrums helps, but in practice it is rarely enough on its own. What I have seen help is shared visibility at the testing layer. Having a single place where functional validation, system integration runs, and solution-level checks are logged against the same increment makes conversations more concrete. Even a lightweight setup using something like Tuskr to track runs and outcomes across teams can reduce a lot of "we thought that was covered" discussions.

The model can work, but only if quality ownership stays with the product teams and the integration scrums act as amplifiers, not safety nets. If they become a catch-all buffer, communication problems will surface no matter how many ceremonies you add.


07:19 pm December 22, 2025

Do you think this approach would still suffer in terms of communication 

They would suffer in terms of transparency, because there would be extensive dependencies between teams. From what you describe, no one team would be able to produce a Done increment of work under its own steam, however narrow the slice of functionality may be. Compensatory mechanisms would have to be introduced, and these create room for error, including the communication problems you suspect. Accountability for meeting Sprint Goals and having a Done increment is correspondingly obfuscated.

and if yes, what could be the possible workarounds/solutions to address that?

On the whole it is best to encourage people to self-organize into teams, creating a bounded environment for them to do so. A facilitated workshop may help. Time-box it with clear rules: each team must be able to produce a Done, fully tested increment and have the necessary skills to do so, and no team should have more than about 10 people. Members may be full-time or part-time in a team, but their full commitment would be expected for the time they are there. Scrum is commitment-focused and accountability-driven.

 


01:26 pm December 26, 2025

This may make sense if you need independent verification and validation. But if you don't need independence, this adds handoffs and complexity, and it would be more effective to put the experts and specialists in integration testing directly on the teams to the extent possible. This could even mean putting system integration testing on the teams while keeping only solution integration testing independent.

In this kind of model, "Done" would be defined based on what each team can do. That would mean each team hands off the highest quality work it can to integration testing. The teams working on each system would need aligned Sprints. They don't necessarily need to be the same length, but they would need to end together sometimes. The points where the Sprints end concurrently would be the handoff to the integration test team, which would then be able to develop, run, and report on testing in its own Sprint cadence.

There are open issues, though.

How does feedback from the integration team get back to the development teams to resolve issues? This can lead to longer loops in which the development team needs to understand and resolve the issue, then hand it back to the integration test team for another cycle. Depending on where the team is in its Sprint and the nature of the issue, it may sit in the backlog pending refinement and selection, leading to issues remaining unresolved for days or potentially weeks. This delays downstream feedback from users and customers of the integrated product, especially in real-world usage settings.

What does the integration test team do during potential downtime between finishing a test cycle and receiving the next product increment from the development teams? Maybe they could get an early start on understanding and setting up for the next round of testing. Or maybe this is a good opportunity to level up the development teams' capabilities and ensure that they can produce increments that sail through integration testing without issues being found.

I'd like to understand your context better, though. How many products or systems do you have? How many solutions? How large is each of the Scrum teams working on the systems? How many teams per system?

