Too many bugs
Hi All,
Lately, I have seen that the tester is reporting too many bugs in the code written by the developers. We work on a 2-week sprint model (10 working days) where developers push their stories for testing as and when they finish them, and they must do a code freeze by the end of the 7th day if they have not pushed any stories before that.
What I have observed is that the tester raised a lot of bugs on the 9th day of the 10 we have, and we hold our review and retro on the 10th day, leaving 3 stories in an open state due to the bugs raised.
Our team does the production deployment outside the sprint cycle (that's the company rule), but that still puts me in a difficult position, since my next sprint will already have started by the time the production deployment goes live. This way, the developer has to fix bugs from the last release while also taking up new work in the already-started next release.
Please suggest.
Thanks
Why is work being “pushed” through, and apparently from one skill silo to another? Why aren’t Development Team members forecasting how much work they can genuinely complete in a Sprint to a release quality Definition of Done?
If the organization effectively asserts that work does not have to be releasable at the end of each Sprint, might that be contributing to the problem?
Having worked in the regulated environments of aerospace and healthcare/pharmaceuticals, I recognize the need for independent QA. First, make sure that the product you are working on actually needs independent QA time. If it does, I believe the current model makes sense - a feature freeze after 7 business days and then 3 days of final QA time.
I would caution to keep this QA time as uninterrupted as possible - try not to do your regular sprint activities in this window, maximizing the time QA spends completing their testing and allowing developers to focus on bug fixes. I'd also differentiate between a "code freeze" (no more code changes) and a "feature freeze" (no new work being added, except to fix newly identified bugs). That is, only fix bugs related to work done in the Sprint. Pre-existing bugs in the system should be logged and triaged normally.
The fact that QA is finding bugs in the last round of testing is indicative of problems in the first 7 days.
Even with an independent QA timebox at the end of a Sprint, QA still needs to be considered part of the Development Team. They need to be considered during Sprint Planning, attending the Daily Scrums, and participating in Sprint Retrospectives. Whatever your refinement process is, they also need to be included in that. Often, QA is less staffed than development, so they are a bottleneck - you cannot bring more work into a Sprint than QA can complete in their 3 day validation window.
Your Definition of Done should include testing. Before the "code freeze", developers should have tested their code and written appropriate automated tests. It should be passing automated builds before it's committed to the mainline of the source control repository. QA should have seen the work already and have test cases written, and even dry run (before the post-freeze testing), asking questions of the developers to make sure that the test cases are complete and accurate. In other words - the final 3 days should generally be a formality for record keeping.
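To illustrate what "developers should have tested their code and written appropriate automated tests" can look like in practice, here is a minimal sketch using Python's standard unittest module. The discount_price function and its rules are hypothetical - this is just the kind of small, fast test a developer would run (and a CI build would enforce) before a story is called Done:

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount. Hypothetical example function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTests(unittest.TestCase):
    """Checks a developer would run before handing the story to QA."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(80.0, 0), 80.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(50.0, 150)

if __name__ == "__main__":
    unittest.main()
```

If such tests gate every commit to the mainline, most "missed scenario" bugs never reach the 3-day QA window in the first place.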
If bugs are being discovered after the code freeze, or even after the deployment by users, consider applying Root Cause Analysis techniques to determine why the bugs aren't being detected before work is considered Done or in the 3 day test window. Something is missing or going wrong - find out what it is and what can be done differently.
Also - consider the timing of the Sprint Retrospective. If it's not already, consider placing your retro in the late afternoon of the 3rd day of QA testing. Try to have your QA work finished by that time, so you can talk about your entire Sprint. You can do your Sprint Review either late in the day on the day of freeze or close to the retro time.
I second the first part of Thomas's post. First, make absolutely sure that fully independent QA is necessary. Even in regulated environments that is not always the case, or only for certain QA activities. For example, in aerospace, depending on the criticality level, it may be acceptable for a developer to do testing as long as it is not the same developer who did the coding.
So make sure that only those activities which absolutely need to be done independently are done by a separate QA member. Anything else can and should be done by the cross-functional development team (of which the QA member should ideally be a part). That way, you don't push as much work downstream.
Regardless of having an independent QA or the particulars of your release timing, the issue seems to be one of recent coding quality. What type of bugs are being reported (are they base code errors? logic? incompleteness?) and what has changed in your team that could possibly contribute to the increase?
Worst case, it sounds as if you need to cut back on the number of stories targeted for delivery within each sprint until the root quality issue is uncovered and corrected.
Hi All,
Thanks for reverting.
We do have an independent QA who picks up the work on the 8th day of our usual 2-week sprint. She does get stories before the 8th day on a few occasions, subject to when the developers finish off a few bits early.
The bugs being reported are code-related only. It seems the developers are missing a few things and not considering all the scenarios. In this sprint's retrospective, I did mention this; however, I didn't make a big deal of it since I wanted to give them another chance to rectify it themselves. I will have a detailed conversation if I see this happening again. I do understand that they can't do the kind of exhaustive testing a tester does, but they should take care of a few minor details.
Thanks
We do have an independent QA who picks up the work on the 8th day of our usual 2-week sprint. She does get stories before the 8th day on a few occasions, subject to when the developers finish off a few bits early.
Does that mean the developers usually are coding all Backlog items until day 8? Why is that?
I do understand that they can't do such exhaustive testing as a tester does
Why not? What is it they are lacking?
It's a little concerning to me that the independent QA only "on a few occasions" gets work before the 8th day. I would expect that work is broken into pieces. In a 2 week Sprint, you have 10 working days. I may not expect anything ready on day 1 or maybe day 2, but I would expect some pieces of functionality to be delivered throughout the Sprint. I'm not sure where the issues are, but I'd look at the backlog refinement process and Sprint Planning. I'd also be watching in-progress work - it seems like the team may be starting a lot of work at once instead of moving work toward completion in smaller batches. I'd be considering some techniques, perhaps from Kanban, around limiting work in progress.
The statement that developers "can't do such exhaustive testing that a tester does" also bothers me. Even with independent QA, it's still a single team. This looks like there's stuff being "thrown over the wall" at QA, instead of a collaboration. QA should be planning their test cases very early, so before a developer calls the work ready for QA testing, they should know how it's going to be tested and test it appropriately (since they are developers, consider the ability to develop automated tests that can be run often as part of test suites). A Scrum team should be cross-functional. Consider having the QA teach the developers good test techniques. Make sure that testing is part of the Definition of Done. Make sure that developers are taking responsibility for their work.
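One concrete way to turn this into a collaboration is for QA to design the test cases as a scenario table early in the Sprint, which developers then automate and run continuously. A minimal sketch in Python (stdlib only; the validate_username rule and its 3-12 alphanumeric limits are hypothetical, stand-ins for whatever acceptance criteria the story actually has):

```python
import unittest

def validate_username(name: str) -> bool:
    """Hypothetical validation rule: 3-12 alphanumeric characters."""
    return 3 <= len(name) <= 12 and name.isalnum()

# Scenario table the tester designs up front, before any code freeze.
# Each row: (description, input, expected result)
SCENARIOS = [
    ("typical name",        "alice42", True),
    ("too short",           "ab",      False),
    ("too long",            "a" * 13,  False),
    ("special characters",  "bob!",    False),
    ("boundary: 3 chars",   "abc",     True),
    ("boundary: 12 chars",  "a" * 12,  True),
]

class UsernameScenarioTests(unittest.TestCase):
    def test_scenarios(self):
        # subTest reports each scenario's pass/fail independently.
        for label, value, expected in SCENARIOS:
            with self.subTest(label):
                self.assertEqual(validate_username(value), expected)

if __name__ == "__main__":
    unittest.main()
```

Because the tester owns the table and the developers own the automation, the "missing scenarios" conversation happens at refinement time rather than on day 9.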
Aarti - so, if the majority of issues being uncovered are pure code related, then you may want to review the coding approach being taken. Are they using test driven development? pair programming? are they allocating enough time for unit testing, etc. No one is perfect, but continuous review/improvement are part of the process and if this is the biggest sore point - then some focus needs to be put in to the current coding standards/approach. You don't need to wait for the next attempt - if you change nothing, nothing will change.
Hi All,
To give you a live example, we are currently working on Sprint 5. It started on 28th March and will end on 10th April; we only have 9 days since there was a company holiday on 30th March. Today is the 5th day, and only 1 story has been pushed to testing. This has been the case since the beginning. They don't breach the dev days, but they don't finish ahead of time either.
I have not raised this with the dev lead yet since the impact has not been visible as such. They are not able to deliver work ahead of time, let alone do exhaustive testing on top of that.
I am not sure how to raise their work approach with the dev lead. Please suggest.
Thanks
Hello Aarti,
I'm a Test Analyst and Test Coordinator in Agile Development Environments (Scrum, Kanban and DSDM) a.k.a. an "Agile Tester".
I'm reading in your texts always the singular "tester", "QA member" and "she", so am I correct that there's only 1 tester in your Dev Team?
How many Developers are there in that Dev Team?
From my own experience, I would suggest not exceeding a ratio of 3 Developers to 1 Tester; otherwise, independent testing quality and test automation capacity will suffer.
"only 1 story has been pushed to testing" - how granular are your stories? Maybe you should spend more time breaking them into smaller ones that could be more or less independently testable?
I think Filip has a good point. If the entire Scrum team is only working on one story and can just about complete it (depending on your Definition of Done), then it seems more decomposition needs to take place during planning.