Best Practices for Testing "by sprint"
Hello everyone,
First, sorry if my English is not proper; it is not my main language.
I am a QA in a Scrum team. We recently moved to a new piece of software for writing our test cases.
There is a demo and some examples from the software company, and what they do is run a list of tests for each sprint of the project, one test per story.
I have never worked that way - I usually have small acceptance tests in the stories, and when the epic or project is fully completed, I write a proper test case (or I automate the tests).
My team and I thought that we might try the other method - create one test per story within the sprint and when the sprint ends, we execute the tests.
Is that the correct way to do it? What I am finding out after one sprint is that we might have a story that carries over to the next sprint - a story that was not completed by the time the sprint ended. This means that the test cases I wrote during the sprint will need to be modified.
How do I manage that?
What happened is that my colleague executed my tests and saw a field that wasn't supposed to be there. The case was marked as failed because the UI did not match the test case.
Do I keep that test as failed and correct it with the modifications done in the subsequent sprint?
Do I remove that test from the run completely when I see that the sprint is ending and we still have stories left to code? If I do that, I feel like my runs will be almost empty all the time...
"My team and I thought that we might try the other method - create one test per story within the sprint and when the sprint ends, we execute the tests."
That would defer a lot of risk to the end of the Sprint, and potentially threaten the Sprint Goal. Waterfalling a Sprint is usually a poor risk control mechanism. It would be better to apply focus and to limit work in progress, testing and completing each story early and often. You would thereby increase the likelihood of genuinely completing enough work each Sprint to meet the Sprint Goal.
Any scope that remains incomplete may be planned contingency that proved unnecessary for the Goal. It can then be re-estimated on the Product Backlog for possible consideration in a future Sprint Planning session.
Oh yes, sorry, I might have explained it poorly. We are not waterfalling the sprint. We still keep the acceptance tests in the stories to de-risk the sprint. The stories do not remain blocked in the QA status; they move to the Done status.
Our intention was to also create the more detailed tests in our test automation software (or do more detailed manual testing) and run those at the end of the sprint, when most of the stories are completed aside from a few that may still be in code review.
Maybe I am seeing this incorrectly!
In the acceptance cases for a story, we write tests based on that specific story only.
In our more detailed tests, we want to test the integration of that story with the other stories on the board and push the tests a bit further. Doing so allows us to pinpoint errors in the integration with the other services, but that is not testable within a single story at the moment that particular story is closed, because its dependencies are not done yet.
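To give an idea of what I mean, a cross-story check could look something like this (just a made-up sketch - the test environment, endpoints and fields here are placeholders, not our actual services):

```python
# Hypothetical sketch only - the environment, endpoints and fields are invented.
import requests

BASE_URL = "https://test-env.example.com"  # assumed shared test environment

def test_order_from_one_story_is_billed_by_another_story():
    # Story A: create an order through the orders service
    response = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-1", "qty": 2}, timeout=10)
    assert response.status_code == 201
    order_id = response.json()["id"]

    # Story B: the billing service should now expose an invoice for that order
    invoices = requests.get(f"{BASE_URL}/invoices", params={"order_id": order_id}, timeout=10)
    assert invoices.status_code == 200
    assert invoices.json(), "expected at least one invoice for the new order"
```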
That is why we were sort of using both methods.
Thank you for your answer.
I agree with @Ian. Waiting until the end of the Sprint to test is a bad idea, and from your comments I think you agree. As someone who has spent 20+ years in QA, the method that has served me best is to test something as soon as it can be executed. Even if the developers say they are still working on it, test it and give them early feedback. The earlier they get feedback, the easier it will be for them to correct any issues.
I am also going to suggest something new for your QA group. In a rapid, iterative process, testing becomes even more important. I am not sure if you are familiar with the concept of the "testing pyramid", but you should look it up. The developers should be responsible for writing good unit, system, and integration tests that can validate functionality quickly, ideally with every build. These should be written as part of the development work and reviewed during the code reviews. Something that I do, and have had a lot of success with on other teams, is reviewing those tests. Give feedback on their effectiveness and value. Help the developers learn how to test well. By doing this, we have been able to promote code to production based upon passing builds without any further testing.
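As a rough illustration of the base of the pyramid, a unit test can be as small as this (the function and the rule are made up purely for the example, not anything from your product):

```python
# Illustrative only - story_points_total is an invented function, not real project code.
def story_points_total(stories):
    """Sum the point estimates from a list of (name, points) pairs."""
    return sum(points for _, points in stories)

def test_story_points_total():
    assert story_points_total([("login", 3), ("signup", 5)]) == 8

def test_story_points_total_handles_empty_backlog():
    assert story_points_total([]) == 0
```

Tests at this level run in milliseconds, which is what makes it realistic to execute them on every build and give the developers feedback long before anything reaches a tester.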
Remember that, based upon the Scrum Guide, testers are considered to be part of the Developer role. Take a more active role in the entire process. Testing should be done continuously, and you should not be a gatekeeper.
Thanks Daniel! I agree with this: "Testing should be done continuously, and you should not be a gatekeeper". I will work more on that side. We are currently trying to be more involved in the unit tests, but I am not very good at coding, so I may need to take a few courses before I can write proper tests (I read about test scaffolding, where you use some broader properties in your script so that the test keeps working even if the code changes slightly, but that is way out of my league for now!).
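From what I read, I think the idea is to assert broader properties instead of exact values, maybe something like this with the hypothesis library (if I understood correctly - the function here is invented just to show the shape of it):

```python
# Sketch of a property-style test; normalize_username is an invented placeholder.
from hypothesis import given, strategies as st

def normalize_username(raw: str) -> str:
    # Placeholder implementation under test.
    return raw.strip().lower()

@given(st.text())
def test_normalizing_twice_changes_nothing(raw):
    once = normalize_username(raw)
    # Broad property: applying the function again must not change the result,
    # so the test keeps passing even if the normalization details change slightly.
    assert normalize_username(once) == once
```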
"Even if the developers say they are still working on it, test it and give them early feedback." this we do, for sure. That's part of our acceptance cases on the story itself.
Ok, well, my question is answered now, thanks to both of you! I found that demonstration from our software vendor pretty strange; I had never seen "end of sprint test executions" before, and although it seems like a good way for supervisors to get an overview of what was tested during a sprint and what failed, it's not very practical. I am glad we didn't stop writing and executing our acceptance cases as we did before. I just wish we had better visibility on those - it's kind of hard, with our method at least, to get a good summary view of those tests. We can see the unit test coverage, but that's not exactly it...
Thanks again, I will learn more about Scrum and also about the testing pyramid!
Stories typically have several tests which verify the acceptance criteria written for that story. These tests could be unit tests written by the developers, or system and integration tests, which are mostly the focus of testers. System and integration tests can be automated by automation testers. You should always refer to your story's definition of done. Test cases can fail with a minor bug, and in most cases a minor bug simply goes to the backlog to be prioritised while the story is still considered "done". Hope this helps.