Optimizing QA Validation in a Specialized Environment

Last post 03:32 pm June 1, 2024 by Pierre Pienaar
7 replies
09:40 pm May 20, 2024

Hello fellow Scrum professionals,

I’d like to discuss challenges related to QA validation in our specialized semiconductor industry. Our teams work on cutting-edge technology, solving complex problems with highly specialized skill sets. Requirements are very complex, involving advanced physics, chemistry, math, and so on. However, this specialization makes true cross-functionality, and therefore Sprint Goals, hard to achieve.

Context:

  • We release desktop software several times a year. We have a CI/CD pipeline to deploy incrementally, but we only release when larger features are done, which takes a quarter or longer.
  • Our Definition of Done (DoD) includes both development and automated testing.
  • QA engineers act as gatekeepers, approving PRs before they are merged to the default branch.
  • Each Scrum team has one or more QA engineers. 

We're generally able to follow the DoD for each user story and enforce quality control using automated regression and unit tests. However, even when a user story meets the DoD, it is not considered done-done until QA validation. We release software on time and without major defects several times a year, and that's how success and value are measured here.

Challenges:

  1. Incomplete stories: Sometimes user stories meet the DoD but lack full QA validation by the end of the Sprint. Consider a user story that passes all the automated testing defined in the DoD two days before the end of the Sprint, leaving QA without enough time to fully validate it. Another situation is when several user stories need to be completed before QA can fully validate them, and that doesn't happen within the Sprint. This leads to incomplete stories, or Sprint Goals containing "dev-done for feature A" (note: this "dev-done" includes automated quality assurance).
  2. Resource allocation: Swarming on tasks can be difficult due to specialization or coding constraints (the devs say adding more people is impossible).

I want to highlight that we continuously inspect and try to get more accurate with capacity planning, Sprint Goal setting, etc. We do incremental validation as much as possible and avoid waiting until the end of the Sprint when we can.

Question: We’ve considered separating QA stories from dev stories to track validation efforts separately in our Sprint or on our Product Backlog. However, we’re concerned that this approach might create a false sense of “done,” since QA validation is still pending even when the automated tests specified in the DoD pass. Our Product Owner also worries about overhead and potential gaps. Adding QA validation to the Definition of Done (DoD) won’t by itself address the issue of incomplete stories.

What recommendations do you have for addressing this situation?


10:34 pm May 20, 2024

It sounds as though empirical feedback is only obtained at quarterly intervals or longer, and that this appears to be good enough for the industry you are in and the complexity faced. Success and value are being delivered. I'm not sure therefore what purpose Sprints serve in your case, and why Scrum is needed at all.


07:05 am May 21, 2024

We’ve considered separating QA stories from dev stories to track validation efforts separately in our Sprint or on our product backlog.

I don't think this is a great idea, as it is a horizontal split. Think about splitting your stories vertically so that dev + QA can be completed within a sprint.

If QA validation is mandatory, then it should be included in your Definition of Done. Your issue of incomplete stories might be due to multiple factors, which you should check: resource constraints (too few QA engineers), stories that are not split well (and hence don't get completed), or a sprint length that is too short (would increasing the sprint length work?).


06:18 pm May 21, 2024

Thanks, @Ian Mitchell. I forgot to mention that we have bi-weekly Sprint Reviews and very engaged stakeholders, resulting in a good mix of feature demonstrations and working sessions that inform future adaptations. Stakeholders get to test the software incrementally in testing environments. I think empirical feedback is provided on a per-Sprint basis, and we benefit from Scrum for this and other reasons.

The desire is to increase predictability through a higher Sprint Goal completion rate. We miss Sprint Goals either because stories get passed to QA engineers too late in the Sprint (although automated testing has been done), or because a story turns out to be more complex than anticipated. When we discuss the latter in Retrospectives, the devs say the story couldn't be sliced further, nor would it benefit from swarming.


06:20 pm May 21, 2024

Thanks, Anand Balakrishnan. The QA resource shortage is a real thing and unfortunately not an easy one to solve, but everyone is aware of it. Adding QA validation to the DoD on top of the existing automated quality measures won't solve the problem by itself. We miss Sprint Goals either because stories get passed to QA engineers too late in the Sprint (although automated testing has been done), or because a story turns out to be more complex than anticipated. When we discuss the latter in Retrospectives, the devs say the story couldn't be sliced further, nor would it benefit from swarming.


12:13 pm May 22, 2024

The QA resource shortage is a real thing and unfortunately not an easy one to solve

Maybe you can check with some developers whether they can take on some QA responsibility.

Adding QA validation to the DoD on top of the existing automated quality measures won't solve the problem by itself.

You also agree that value is delivered only when dev + QA is accomplished, right? So without QA, it is as good as not done.

Another option is to schedule the QA story one sprint after the development sprint. You would need to manage your sprint and Sprint Goals so that some QA-certified features are always available for review and demonstration. Features that are not yet certified can also be demonstrated for feedback (since some automated tests have passed), but you need to clearly call out that they are not QA-certified and therefore not Done.


12:47 pm May 22, 2024

The commitment of the Sprint is the Sprint Goal, not the Sprint Backlog! The Sprint Backlog may change during the Sprint without impacting the Sprint Goal. You have mentioned that you do incremental validation within the Sprint, so why do you want to change that approach? Divorcing dev and QA activities from the Sprint will introduce new challenges for the team. The "Sprint Goal = Sprint Backlog" mindset is what needs to change, in my opinion.


03:32 pm June 1, 2024

What you describe is a very common issue in Scrum teams. @Anand mentioned vertical slicing, and making the user stories smaller can indeed solve the problem.

Instead of a story taking up most of the sprint, breaking it down into smaller chunks allows dev to finish sooner and QA to start testing earlier. Smaller items that can be finished in around three days are ideal.

Also, irrespective of backlog item size, investigate whether QA can get involved sooner. The scenario you describe is typical when QA sits at the end of development. "Shifting left" on QA is widely advocated for exactly this reason.

As somebody mentioned, let dev help with testing, especially with writing automated tests. If you cannot break the stories down any smaller, prioritize QA validation within the same sprint, and complete automated tests and other QA work in the next sprint (not ideal, but we're trying to make something work).

As a side note, I don't like QA acting as gatekeepers for dev work to be merged to default (or master). Devs should have ownership of, and be trusted with, their work; QA can then test in default or master. Your company does not release often (roughly quarterly, it seems), so I don't see a need for gatekeeping.

