
Progress not visible till the completion of Regression testing

Last post 03:58 pm October 28, 2024 by Daniel Wilhite
6 replies
01:15 pm October 24, 2024

We cannot visualize the team's progress via Burndown/Burnup charts because, per our defined workflow, Sprint Backlog items (tasks) cannot be closed until regression testing, which is part of our Definition of Done, is complete, and that generally happens at the end of the Sprint. What is the best approach to regression testing so that the team's progress becomes visible via Burndown/Burnup charts?


11:43 pm October 25, 2024

Early and often, so the risk of failed tests is not deferred until the end.

Why would a task burn-up or burn-down fail to show progress, though? Regression testing is presumably just one technical task among many that are needed for work to be usable and Done.


05:37 am October 26, 2024

The burn-up or burn-down fails to show progress because Sprint Backlog items are closed only after successful regression testing, which happens at the end of the Sprint.

What is the best approach to handle regression testing for each sprint?


12:56 pm October 26, 2024

Although a burndown chart wouldn't show progress, you could consider a cumulative flow diagram. If your workflow is accurately decomposed into states, you can show how much work is in a given state. Depending on the states of work, you may need to define a more granular workflow for this visualization to be effective.
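A cumulative flow diagram can be built from nothing more than a daily snapshot of which workflow state each item is in. As a minimal sketch (the state names and the sample snapshots below are illustrative assumptions, not anyone's real board):

```python
from collections import Counter

# Hypothetical daily snapshots: the workflow state of each Sprint Backlog
# item per day. In practice you would export this from your tracking tool.
snapshots = {
    "Mon": ["To Do", "To Do", "To Do", "In Progress"],
    "Tue": ["To Do", "In Progress", "In Progress", "Regression Testing"],
    "Wed": ["In Progress", "Regression Testing", "Regression Testing", "Done"],
}

STATES = ["To Do", "In Progress", "Regression Testing", "Done"]

def cumulative_flow(snapshots):
    """Return, per day, how many items sit in each workflow state."""
    return {
        day: {state: Counter(items)[state] for state in STATES}
        for day, items in snapshots.items()
    }

for day, counts in cumulative_flow(snapshots).items():
    print(day, counts)
```

Plotting those per-state counts as stacked areas over time gives the CFD; even while nothing is "Done," you can see work moving from "To Do" toward "Regression Testing," which is exactly the progress a burndown hides.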

It may also be worth understanding why regression testing is happening at the end of the Sprint. You may want to look into how much testing effort, especially around regression testing, is manual effort and what you can do to reduce the manual effort and allow you to do more incremental regression testing or automate the regression testing.


03:36 pm October 26, 2024

A couple other things to try:

  • Leverage a Scrum board and see if the Developers want to break down the 'To Do' column into the tasks a PBI flows through
  • Can the team leverage a work item aging chart? You may need a scatter plot and cycle time data to support it.

Here's a reference: https://www.scrum.org/resources/blog/4-key-flow-metrics-and-how-use-them-scrums-events


01:02 am October 28, 2024

As mentioned in the posts above, break items down into tasks. The tasks can then be displayed on a Scrum board as "Not Started," "In Progress," and "Done," making progress visible at each stage.

A common reason testing can only be done at the end of a sprint is because the items are too large. Wherever possible, refine backlog items to be 2 to 3 days' worth of work, with a maximum of 5 days. Aim for stories that are roughly in the 3 to 5 story point range. This way, regression testing on some items can be completed sooner rather than waiting until the end of the sprint cycle, allowing progress to be visible earlier.

Implement a continuous integration (CI/CD) pipeline with automated tests to cover regression testing. As soon as code is checked into the repository, automation tests will run on the new code build, providing early feedback on regression issues.
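To make the idea concrete, here is a minimal sketch of the kind of automated regression test a CI pipeline would run on every check-in. The function under test and its expected values are hypothetical stand-ins for real application code:

```python
# Hypothetical module under test; in a real project this would be your
# application code, and CI would run these tests on every push.
def apply_discount(price: float, percent: float) -> float:
    """Return price after a percentage discount, never below zero."""
    return max(price * (1 - percent / 100), 0.0)

# Regression tests: they pin down current behavior so a future change
# that breaks it fails fast in the pipeline, not at the end of the Sprint.
def test_basic_discount():
    assert apply_discount(100.0, 10) == 90.0

def test_discount_never_negative():
    assert apply_discount(50.0, 150) == 0.0

if __name__ == "__main__":
    test_basic_discount()
    test_discount_never_negative()
    print("regression suite passed")
```

With a suite like this wired into CI, every commit re-runs the regression checks automatically, so "regression testing" stops being a single end-of-Sprint task.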

Consider showing the Burndown chart in hours. While hour estimates, especially hours remaining, can be inaccurate, they can still provide an indication of progress.
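An hours-based burndown is just the sum of remaining hours per day. A small sketch, with made-up task estimates, shows that the series declines as any task shrinks, even while the regression-testing task stays open:

```python
# Hypothetical task board: remaining hours recorded per task per day.
remaining = {
    "Day 1": {"code": 8, "tests": 6, "regression": 4},
    "Day 2": {"code": 4, "tests": 5, "regression": 4},
    "Day 3": {"code": 0, "tests": 2, "regression": 4},
}

def burndown(remaining):
    """Total hours left per day; this is the series the chart would plot."""
    return {day: sum(tasks.values()) for day, tasks in remaining.items()}

print(burndown(remaining))
```

Note the "regression" task stays at 4 hours throughout, yet the total still falls from 18 to 6, so progress is visible before any item is closed.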


03:58 pm October 28, 2024

This isn't a Scrum problem. It is a process problem and a lack of faith in the Developers. When I say Developers, I mean everyone involved in making changes, which includes the people writing code and the ones creating and running tests.

I have over 20 years of experience in software testing. The purpose of regression testing is to ensure that new changes do not break existing functionality. In those years I have found that regression testing doesn't find many problems unless no testing was done beforehand. If you wait to do all regression testing after all the changes are done, you then have to determine which of the changes broke the application. If you test each change individually, you get faster feedback and a quicker turnaround on the fix.

The introduction of automated unit, integration, and system tests by the coders can remove the need for a large part of the "at-the-end" regression tests. In fact, I have managed a number of applications where the only testing done before release was the automated tests created by the coders. The quality assurance people shifted their focus to reviewing the coders' tests, suggesting tests that would prove sufficient, and doing manual validation while the coders were working so that feedback was quicker. If you spend the time wisely to automate at the lower levels of the code stack, each time code is checked in you will be regression testing the application. Then your burndown will be able to show the picture in the way you want.

I would also suggest that you look into better ways of tracking progress. Burndown charts don't always give the best picture, as you have found. @Thomas and @Chris offer suggestions that I have found very useful, and @Pierre's suggestion is a simple thing that I have used MANY times. However, he didn't mention the practice of limiting work in progress (WIP). That is a Kanban practice of using the board to move items to done as fast as possible without sacrificing quality. The idea is that you do not start new work until work in progress is completed. If you see work piling up in a column, the team swarms to get that work moved on. Set limits on columns that historically build up; in this case, that would be your regression testing column.
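The WIP-limit idea above can be expressed as a simple board check. The column names, items, and limits below are illustrative assumptions:

```python
# Hypothetical board: items per column, with WIP limits on the columns
# that historically pile up (here, regression testing).
board = {
    "To Do": ["A", "B"],
    "In Progress": ["C", "D"],
    "Regression Testing": ["E", "F", "G"],
}
WIP_LIMITS = {"In Progress": 3, "Regression Testing": 2}

def over_limit(board, limits):
    """Return columns whose item count exceeds their WIP limit."""
    return {
        col: len(items)
        for col, items in board.items()
        if col in limits and len(items) > limits[col]
    }

print(over_limit(board, WIP_LIMITS))
```

Any column this flags is a swarming candidate: the team stops pulling new work and clears the backlog in that column first.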

