Does regression testing need estimation and story points?
My Scrum team spends a full Sprint on regression testing before every release. We are currently not estimating the regression Sprint in story points, but do we have to? We have multiple releases in a year, and regression Sprints are common. We have difficulty tracking the team's progress and deriving the team's average velocity. Can the team's efforts on legacy defects, fixing automation use cases, and regression defects be estimated? I should mention that each Scrum team works on stories, defects, and automation in each Sprint, and during regression the work is manual and automation testing plus fixing the regression defects.
Any insights on this are welcome.
Can the Developers produce a Done, finished Increment of immediately usable quality without regression testing? In Scrum, this needs to be assured at least once every Sprint, so empiricism is established and maintained, and not just several times a year.
This seems like an XY problem. Why do you have a full Sprint of regression testing before every release? Especially since you claim that you have automation testing in place. The root problem here appears to be a weak Definition of Done and a failure to ensure that each product Increment remains in a usable state.
A Product Backlog Item is something that needs to be done to improve the product. The team maintains a Definition of Done to describe the state of Product Backlog Items and the product Increment so that there is a shared understanding of what it means when the team says work is done. It also ensures a minimum level of quality. After any given Product Backlog Item meets the Definition of Done, an Increment is created.
I would say that the fundamental problem is a weak Definition of Done. If the team is working on automation in each Sprint, why do they need a full Sprint of regression testing? They should be integrating and testing each Product Backlog Item as part of the Definition of Done. If there are any roadblocks to this, the team should be identifying and resolving those.
Now, I'm not advocating for no special activities prior to a release. Creating a snapshot, rerunning the full suite of automation tests, manual exploratory testing - these are all good things to do. However, they shouldn't require the team to dedicate a full Sprint to execute and then resolve issues. In fact, there should be few, if any, issues coming out of these activities before a release, if you have a robust Definition of Done and take advantage of your automated testing early and often.
But setting that aside, if you know that your release activities, whether it's regression testing or something else, will take a full Sprint, what is the value in estimating? You have a timebox in which to perform your regression testing and any other activities needed to finish your release. You can just ignore these "release Sprints" when looking at your velocity.
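To make that concrete, here is a minimal sketch (in Python, with made-up Sprint names and point totals) of deriving an average velocity while excluding release Sprints; adapt it to however you actually record completed points:

```python
# Hypothetical Sprint records; substitute your own tracking data.
sprints = [
    {"name": "Sprint 21", "points_done": 34, "release_sprint": False},
    {"name": "Sprint 22", "points_done": 29, "release_sprint": False},
    {"name": "Sprint 23", "points_done": 0,  "release_sprint": True},   # regression/release work, not estimated
    {"name": "Sprint 24", "points_done": 31, "release_sprint": False},
]

# Exclude release Sprints so they don't distort the average.
delivery_sprints = [s for s in sprints if not s["release_sprint"]]
velocity = sum(s["points_done"] for s in delivery_sprints) / len(delivery_sprints)
print(f"Average velocity over {len(delivery_sprints)} delivery Sprints: {velocity:.1f} points")
```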
@Keerthi PT, what is your Sprint duration?
For some reason, many teams decide on having short Sprints despite being unable to produce quality Increments.
From the Scrum Guide:
Each Increment is additive to all prior Increments and thoroughly verified, ensuring that all Increments work together. In order to provide value, the Increment must be usable.
Thorough verification is expected with each Increment, regardless of when you choose to release it. Regression testing is about ensuring the product continues to function effectively when new changes are introduced. Consider how you can do this with each Sprint and Increment, rather than letting it build up. The longer you go without performing regression testing, the higher the risk and the cost of fixing issues are likely to be.
I totally agree that, ideally, automation should simplify regression, Thomas Owens.
However, we are struggling with a backlog in our test automation suite due to a tool migration, and our test results are not showing a good amount of green, even though the workflows work well when exercised manually.
It's a cascading problem, and we are on top of it and aim to cover it as soon as possible.
However, going back to my concern: if we are dealing with a team that has agile newbies, it's important to determine the team's velocity and to identify Developers who might be dragging out their work and taking advantage of story point estimation.
I have witnessed a few teams move back to estimating in hours to avoid this problem, which I am not inclined to do.
However, going back to my concern: if we are dealing with a team that has agile newbies, it's important to determine the team's velocity and to identify Developers who might be dragging out their work and taking advantage of story point estimation.
I'm not sure how estimation is done in your case. In Sprint Planning, the estimates should be agreed to by everyone (including seniors, newbies, etc.). If the team agrees to the estimate, then I don't see a problem here. Also, in the Daily Scrum you can check how the tasks are going with respect to the Sprint Goal.
Most of the time it is people from outside the team who misuse story points and do things like map story points to time (e.g., 1 point = 1 day) or compare the velocity of teams instead of looking at the value that teams deliver. If for some reason a story or task is not completed in the Sprint, it needs to be addressed in the Sprint Retrospective; there was probably a reason why the task was not done (or got dragged out).
However, going back to my concern: if we are dealing with a team that has agile newbies, it's important to determine the team's velocity and to identify Developers who might be dragging out their work and taking advantage of story point estimation.
This is not a problem with estimation. The problem of overestimation and inflation exists regardless of whether you estimate in hours, points, t-shirt sizes, or something else. There's nothing stopping a Developer from saying that something will take 6 or 8 hours when it will take 3 or 4, that something is 8 points when it's really closer to a 3 or 5, or that something is a large when it's really closer to a medium.
The kind of behavior that you describe is simply unprofessional behavior. In my experience, it becomes clear through the discussions and replanning at the Daily Scrum that someone is either dragging out work or struggling. If your Developers are truly professionals, when they see someone like this, they will take action. If it's someone who needs a little help or an opportunity to learn and come up to speed, they'll replan around that. If it's someone who is dragging the work out and not working effectively, they can raise that as an impediment and work with the right people to solve it.
So, no, velocity isn't as important as you make it out to be. In fact, it's not important at all. Flow metrics are even more valuable, since they are based on the actual completion of work rather than the team's estimates of size or complexity before starting work. Being able to observe when a work item - often a Product Backlog Item - is moving through the workflow more slowly than expected is a more powerful way to identify when there may be problems.
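If it helps to see the idea, here is a minimal sketch (in Python, with hypothetical work items and dates) of the kind of cycle-time calculation a flow-metrics approach relies on:

```python
from datetime import date
from statistics import mean

# Hypothetical work items with the dates they entered "in progress" and reached "done".
items = [
    {"id": "PBI-101", "started": date(2023, 5, 2), "finished": date(2023, 5, 5)},
    {"id": "PBI-102", "started": date(2023, 5, 3), "finished": date(2023, 5, 12)},
    {"id": "PBI-103", "started": date(2023, 5, 8), "finished": date(2023, 5, 10)},
]

# Cycle time: elapsed days between starting and finishing an item.
cycle_times = {i["id"]: (i["finished"] - i["started"]).days for i in items}
average = mean(cycle_times.values())

for item_id, days in cycle_times.items():
    flag = "  <-- noticeably slower than average, worth a conversation" if days > 1.5 * average else ""
    print(f"{item_id}: {days} days{flag}")
print(f"Average cycle time: {average:.1f} days")
```

The point is only that these numbers come from observed completion dates, not from anyone's up-front estimate, so there is nothing to inflate.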