Manual regression testing tasks in a sprint?
I have heard that it is advisable not to bring new backlog items into a sprint before all sprint backlog items are completed.
Do you think it can be OK not to complete manual regression test tasks before bringing new items into the sprint? (It makes more sense to run manual regression tests on a version that contains all the changes, so you don't have to redo them later.)
It's up to the Development Team to self-organize and come up with the most effective Sprint plan.
What do you think the risks are likely to be though, if no item can be observed as being complete until the end of the Sprint?
How do you get a potentially shippable Increment without completing the regression tests (manual or automated)? How do you know that the product actually "works" after the bug fixes, performance improvements, usability changes, and so on?
Deferring testing to a later sprint increases technical debt. So the discussion needs to be about quantifying the value that the additional scope brings in versus the technical debt incurred by not completing regression testing.
Also, since the regression testing is manual, the cost of automating it should be weighed against the cost of manual regression every sprint. So the team and PO should also discuss the value of automation versus the cost of manual regression testing every sprint -- as the product grows, the manual regression effort and time will keep growing.
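To make that discussion concrete, the break-even point of automation can be sketched with a bit of arithmetic (all the numbers below are illustrative assumptions, not real estimates):

```python
# Rough break-even sketch for automating a regression suite.
# Every number here is an illustrative assumption, not a real estimate.

def breakeven_sprints(automation_cost_hours, manual_hours_per_sprint,
                      automated_maintenance_hours_per_sprint):
    """Sprints until the one-off automation effort pays for itself."""
    saved_per_sprint = manual_hours_per_sprint - automated_maintenance_hours_per_sprint
    if saved_per_sprint <= 0:
        return None  # automation never pays off at these numbers
    # Round up: the saving is only realized at sprint boundaries.
    return -(-automation_cost_hours // saved_per_sprint)

# Example: 120h to automate, 40h of manual regression per sprint,
# 10h per sprint to maintain the automated suite.
print(breakeven_sprints(120, 40, 10))  # -> 4
```

A real ROI analysis would also factor in growing suite size and defect-escape costs, but even this crude model makes the trade-off discussable with the PO.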
Stop starting, start finishing
http://www.allaboutagile.com/stop-starting-start-finishing-unfinished-w…
"Since it makes more sense to do manual regression tests on a version that contain all the changes, so you don’t have to redo them later."
Regression testing means regressing the application -- as coding is completed for each PBI -- to ensure that new changes (code or DB changes) did not introduce defects, integration problems, or performance degradation in parts of the application that were functioning and performing correctly before the changes were introduced. This is practical and feasible with automated testing and can be problematic in a manual testing environment -- which should be an impetus to do an ROI analysis on automated testing and pursue it in future sprints.
High-performance Scrum teams have automated tests in place that rapidly regress the application as new changes are introduced to the builds. They may execute partial smoke tests or full-blown regression tests, depending on how long the automated tests take to execute. (Automated) continuous testing and continuous deployment is how companies like Etsy do multiple deploys a day.
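The smoke-vs-full split can be pictured with a small, self-contained sketch (a real team would use their test runner's tagging feature, such as test markers; the test names and tags here are invented):

```python
# Minimal sketch of tagging regression tests and running only a subset.
# Test names, tags, and checks are invented for illustration.

TESTS = []

def regression_test(*tags):
    """Decorator that registers a test function under one or more tags."""
    def register(fn):
        TESTS.append((fn, set(tags)))
        return fn
    return register

@regression_test("smoke")
def test_login():
    assert 1 + 1 == 2  # stand-in for a real check

@regression_test("smoke", "full")
def test_checkout():
    assert "cart".upper() == "CART"

@regression_test("full")
def test_legacy_reports():
    assert sorted([3, 1, 2]) == [1, 2, 3]

def run(tag):
    """Execute only the tests carrying the given tag; return their names."""
    selected = [fn for fn, tags in TESTS if tag in tags]
    for fn in selected:
        fn()
    return [fn.__name__ for fn in selected]

print(run("smoke"))  # -> ['test_login', 'test_checkout']
```

The point is simply that a fast, tagged subset can run on every build while the full suite runs less often, which is what makes continuous testing feasible.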
Not all products and tests can be automated. Some components may hit a 100% automation rate, but that's not an assumption that can be made everywhere.
It does make sense sometimes to do a single pass of manual regression that covers several changes in the same area. For example, we release every two sprints, run various levels of automation as the sprint progresses, and then do an in-depth manual validation before releasing. We do take on some technical debt, but: (a) it is mostly covered by automation, and (b) with a small team, we need to weigh the trade-offs between a minor quality impact during sprints and the extra effort of re-testing the same functionality before an external release.
1. I agree with everybody that considering automated tests is a good idea. In the scenario I am describing, it is agreed that manual regression tests are needed.
2. I agree that generally the development team should come up with the most effective Sprint plan. That means that they should have the option to bring in more product backlog items into the sprint even if for example the task “Run manual regression tests on a version that contains all planned changes” is not yet completed. Do you agree?
3. Even if it is required to execute the sprint backlog item: “Run manual regression tests on a version that contains all planned changes”, can we still consider the other sprint backlog items to be done even before that task is completed?
Q1) Do you agree that the Development Team should produce a Potentially Shippable Product Increment in accordance with the Definition of Done at the end of the sprint?
Q2) Will the manual regression tests be executed in the current sprint, or not?
Q3) How mature is the product/application?
Robert: Yes, the Development Team can consider pulling in new work and prioritizing it over manual regression testing -- however, as some of us have hinted, there's a risk to delivery and quality. So my answer as a member of the Development Team would be: it depends on the following.
Timewise, where are we in the current sprint cycle -- the middle or the end of the sprint? Is there time to run the manual regression tests in this sprint in addition to the new work that the Dev Team would pull? How much buffer is built into the sprint plan for bug-fixing and re-testing if regression testing uncovers critical bugs? Are the test data and test environment ready for the manual regression tests?
"Q1) Do you agree that the development team should produce a Potentially, Shippable Product Increment in accordance to the Definition of Done at the end of the sprint? "
A1: Yes, I agree.
"Q2) Will the manual regression tests be executed in the current sprint, OR, not? "
A2: Yes, the tests will be and are required to be executed in the current sprint.
"Q3) How mature is the product/application?"
A3: For the scenario I had in mind, not very mature.
Since the app is not very mature, there may be latent defects that have not yet been discovered. Depending on the value of the new work the Development Team is evaluating, the team has some risk-mitigation plans to consider. These mitigation scenarios depend on an assessment of the current state of quality of the application (as reflected by code and design reviews, defect metrics, non-functional testing results, and code-quality metrics) plus the resource capacity of the development team:
(a) If app quality is assessed as high: pull in and design/code a subset (but not all) of the highest-value PBIs, then do manual regression testing.
(b) If the app is stable but quality is neither high nor low: regress a subset of the highest-risk test cases first, while the design work is completed in parallel. If no critical or showstopper defects are uncovered in that initial regression testing, complete the coding, unit testing, and the remainder of the regression testing.
(c) If the app is not stable and quality is low: complete regression testing without adding any new work. Depending on the results, and if adequate time remains, add in new work and regress a subset of the highest-risk test cases.
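Option (b)'s "highest-risk subset first" idea can be sketched as a simple ranking (the case names and risk scores below are invented; a real team would derive scores from defect history, code churn, and business criticality):

```python
# Sketch: pick the highest-risk regression cases first (option b).
# Case names and risk scores are illustrative assumptions only.

test_cases = [
    ("payment flow",      9),
    ("user registration", 4),
    ("report export",     2),
    ("search",            6),
]

def highest_risk_subset(cases, budget):
    """Return up to `budget` case names, highest risk first."""
    ranked = sorted(cases, key=lambda c: c[1], reverse=True)
    return [name for name, _ in ranked[:budget]]

# With capacity for only two manual passes this sprint:
print(highest_risk_subset(test_cases, 2))  # -> ['payment flow', 'search']
```

If the top-ranked cases come back clean, the team has evidence to justify pulling in new work before finishing the rest of the suite.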
Thanks John, that was helpful; I had not thought about it from that angle before. To confirm that I got it right:
If the risk of finding critical defects while running regression tests is high (scenario c), then running the tests before considering adding more changes to the sprint can be the best option.
I also like the compromise (b) of running some regression tests, but not all, before adding more changes is considered. I think it is a wise option for the scenario I had in mind. (In that scenario the product is not so mature in the sense that unit tests and test automation are lacking and more user feedback is needed; it is more mature in the sense that the risk of finding critical defects when running regression tests is low.)
If the app does not have to be productionalized, and since obtaining customer feedback is needed, then option (b) might be your best bet. Another flavor of option (b) could be for the team to divvy up its capacity and do both new development and a subset of regression testing concurrently.
I meant if the app does not have to be productionalized at the end of the current sprint, but the goal is still to achieve a potentially shippable product increment... If the app has to be productionalized, then I personally would vote for the original option (b), given the current state of the application that you described.
During the retro, the Scrum Master and team should discuss (for future sprints) the topic of unit testing as the app is being developed, plus conduct an ROI analysis of test automation (at a minimum, for the high-risk application areas and the most time-consuming manual tests). As a best practice, the Scrum Master should also assist the team in implementing a proactive, lightweight risk-management process.
My question is: what is the role of software testing in a Sprint, where does it fit, and when can it be executed in the sprint?
I basically feel that regression can be done using any test-automation tool, but the code changes, i.e. new functionality, should be tested manually. When Scrum says "potentially releasable product", I take that to mean fully functional for that sprint, or for the changes made for the given PBIs, because this is all based on the requirements of the stakeholders.
If we are talking about customer-oriented, product-based development, then in the end the functionality and the fulfillment of the requirements are what matter -- hence manual testing is mandatory.