Best Practices for the Testing Process in Scrum
Hello,
I am always struggling to fit a testing process into Scrum. What are the best practices for incorporating the following types of testing into the Scrum framework?
1) Unit Test
2) Regression Test
3) Functional Test
4) Integration Test
Scrum doesn't say anything about testing because it's "a framework for developing, delivering, and sustaining complex products". It transcends the type of product you are developing; the types of testing you need and how testing fits into the process differ from product to product.
Even when the product is a software product, how you incorporate testing differs. For example, I've worked in regulated industries - aerospace and pharmaceutical. To maintain compliance with regulations, there are rules around testing. You can use Scrum and remain in compliance with the required regulations. However, if you do not need to maintain compliance, you may not want your testing to look like it does for a team or organization that does.
So, consider the Scrum Guide. It says that you have a Definition of Done for every Product Backlog Item and Increment. In order for a Product Backlog Item to be done, what kinds of testing must be performed? In order for an Increment to be done and potentially releasable, what kinds of testing must be performed? Given a Sprint cadence, how do you fit the required test design, development, and execution into your Sprint cycle? The answers are going to vary widely.
This is one of the challenges I faced when I transitioned my project from the waterfall model to Scrum. The test team did not want to write test cases until the team had output to test, and by the time Dev had an output worth testing, it was too late in the Sprint to start testing.
We tackled the issue by:
Breaking each selected Product Backlog Item into many small subtasks that can be delivered and tested independently.
Involving the testing team during integration testing itself, so that they can identify potential issues in advance. To do this, the testing team starts writing test cases based on each subtask, which fail as long as the expected outcome is not met. This, in turn, gives the development team advance notice of a potential issue (see the sketch below).
Now my testing team helps the dev team during integration testing and tests each sub-story deployed to the test environments, which helps us wrap up testing before the Sprint Review meeting.
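To make the failing-first idea concrete, here is a minimal pytest sketch (the subtask, function name, and numbers are invented for illustration, not our actual test code). The test encodes the subtask's expected outcome and fails until the dev team delivers it:

    # test_discount_subtask.py - hypothetical subtask: "apply a 10% discount
    # to order totals over 100". Written before the implementation exists,
    # so it fails (red) until the development team delivers the outcome.
    def apply_discount(total):
        # Placeholder for the dev team's not-yet-written implementation.
        raise NotImplementedError

    def test_discount_applied_above_threshold():
        assert apply_discount(200) == 180  # 10% off a 200 order

    def test_no_discount_at_threshold():
        assert apply_discount(100) == 100  # no discount at or below 100

Running `pytest test_discount_subtask.py` shows both tests failing, which is exactly the advance notification described above.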
I will not say the transition was easy, but in the end, it was worth it.
Why are testing and development handled by different teams? Why separate these skills out?
Testing should be factored into the Sprint cycle.
By definition, the Development Team is cross-functional and should be able to test the solution during the Sprint.
As per Ian's question, the Development Team should have the necessary skills to turn the Sprint Backlog into a releasable Increment by the end of the Sprint that meets the Scrum Team's Definition of Done. All the types of testing required to produce such an Increment should be included in that Definition of Done, such as unit testing, regression testing, functional testing, and integration testing (with test automation). This may vary from team to team, which means a less experienced team may need dedicated testers included on it. This practice gives the Development Team the chance to learn how to write different types of tests.
In my experience, on large and complex projects where we demand a high level of quality, even leaving aside the tests required by regulations, we generally generate a large number of tests and invest a lot of effort in them to meet the requirements, which leads to delays or low-value delivery due to the high consumption of team capacity.
In most cases, we put more effort into test development than into developing the value/feature itself.
Is there any way or method to manage or balance this situation?
I remember my previous development director running a Scrum session with the teams.
He had us (devs and QA) write on post-it notes the different things that developers and QA each do in every Sprint and paste them on a whiteboard. He then asked us to remove every item we thought could be done by both developers and QA.
After the session, there was only one note left: coding.
All the other aspects we wrote down (e.g. testing, clarifying requirements, documenting features, or discussing items) can be done by both devs and QA.
If your QAs are the only ones doing testing, it will become a bottleneck at some point in the Sprint.
Thanks guys,
One of the main conditions for running successful agile development is to have a cross-functional development team with strong skills such as testing, writing test scenarios, DevOps, etc.
What I'm planning to do is have the QA team write test cases according to the acceptance criteria/DoD for each story, so that the dev team takes responsibility for testing each case in the story, and this effort is taken into account in Sprint Planning. The test cases will be part of the DoR for the story; this covers unit testing. We also need an overall exploratory testing round for the whole Sprint in the integration environment, which will be conducted by the QA team after the Sprint and before the release.
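To illustrate the plan (a minimal pytest sketch; the story, function, and acceptance criteria are hypothetical), each acceptance criterion becomes one test case that QA writes up front and the dev team makes pass before the story is Done:

    # test_login_story.py - hypothetical story: "a user can log in with
    # valid credentials". One test per acceptance criterion (AC).
    def authenticate(username, password):
        # Simplified stand-in for the dev team's implementation.
        valid_users = {"alice": "s3cret"}
        return valid_users.get(username) == password

    def test_valid_credentials_accepted():   # AC1: valid login succeeds
        assert authenticate("alice", "s3cret") is True

    def test_wrong_password_rejected():      # AC2: wrong password fails
        assert authenticate("alice", "wrong") is False

    def test_unknown_user_rejected():        # AC3: unknown user fails
        assert authenticate("mallory", "s3cret") is False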
I see a couple of big challenges. In Scrum, a cross-functional Development Team has all the resources needed to complete a 'Done' Increment by the end of a Sprint. There are no separate Dev and QA teams.
"We also need an overall exploratory testing round for the whole Sprint in the integration environment, which will be conducted by the QA team after the Sprint and before the release"
If work is needed after a Sprint in order to release (sometimes called a hardening Sprint), that is an anti-pattern in Scrum, and ill advised. What you are saying is that the Increment is not 'Done' at the end of a Sprint, and the team is missing one of the most important aspects in all of Scrum. As a Scrum Master you should not tolerate this impediment; bring it to the Dev Team, have them include all testing in the Definition of "Done", and ask them how they can get to "Done" every Sprint, including exploratory testing.
Run every Sprint as if it were your last. Finding surprises and quality issues after a Sprint is over hurts focus and costs more. The later a problem is found, the more expensive it is to fix.
Also, the DoR (Definition of Ready) is another anti-pattern; it leads to gates and contractual thinking, and how Agile is that?
All the members of the dev team are responsible for delivering the Done Increment, and the Definition of Done should be strong enough to build quality in. Built-in quality means the Increment is tested at each level. The biggest challenge arises when organizations are less focused on automation. If automation is done at each level of the development process (unit test, API test, and UI test level) and improved in each Sprint, it saves time and makes developer and QA skill sets work together.
1: Unit tests - automation does not take much time.
2: API tests - automation does not take much time.
3: UI automation does take time - but when we use Docker container services and run on the AWS platform, we can create containers on demand and run all the UI tests in very little time. (For example, on the AWS Fargate service, we can spin up 256 containers on demand, each with 3 vCPUs, and run 500 UI test cases in 5 minutes.)
With this approach, the team does not have to wait for regression testing to be done at the end of the Sprint.
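As a rough sketch of the parallelism idea (not the actual Fargate setup; the suite paths are invented), the same fan-out can be tried locally by running independent suites in separate processes. Tools like pytest-xdist (`pytest -n auto`) and container platforms scale the same principle across CPUs and machines:

    # parallel_ui_runner.py - toy illustration of fanning UI suites out
    # across worker processes; each worker runs one suite in its own
    # pytest subprocess and reports pass/fail.
    from concurrent.futures import ProcessPoolExecutor
    import subprocess

    UI_SUITES = ["tests/ui/login", "tests/ui/checkout", "tests/ui/search"]

    def run_suite(path):
        result = subprocess.run(["pytest", path], capture_output=True, text=True)
        return path, result.returncode

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=3) as pool:
            for path, code in pool.map(run_suite, UI_SUITES):
                print(f"{path}: {'PASS' if code == 0 else 'FAIL'}")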
Scrum testing is testing done within the Scrum framework to verify that the software application meets requirements. It also involves checking non-functional attributes like security, usability, and performance. There is no separate Tester role in the Scrum process.
Thanks Thomas Owens and Ashutosh Kumar Rai. I found this a useful explanation. Development and testing go hand in hand in agile Scrum. Hence the role defined is "Development Team"; there is no such thing as a "Testing Team" in the Scrum Guide!
As per the Scrum Guide 2017, characteristics of the Development Team:
- Scrum recognizes no titles for Development Team members, regardless of the work being performed by the person;
- Scrum recognizes no sub-teams in the Development Team, regardless of domains that need to be addressed like testing, architecture, operations, or business analysis; and,
- Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
Fitting testing into Scrum or agile is still a struggle, given that Scrum does not define a separate "Tester/QA" role.
Traditionally we have had a QA role, and that has gotten into our DNA; within every project team, a testing team is included by default.
The same person, part of the Scrum Development Team, doing both coding and testing is something that deviates entirely from the way things have been done until now. If we literally ask a coder (developer) to do testing (functional testing), they wouldn't even like it, because they are coders (developers) and that is how it has always been.
I haven't really found a solution to this. Unless developers really change their mindset and agree to do testing as well, we cannot stick to the principle of the Development Team being cross-functional.
Developers are described in this manner in the current Scrum Guide:
The specific skills needed by the Developers are often broad and will vary with the domain of work. However, the Developers are always accountable for:
Creating a plan for the Sprint, the Sprint Backlog;
Instilling quality by adhering to a Definition of Done;
Adapting their plan each day toward the Sprint Goal; and,
Holding each other accountable as professionals.
Previous versions contained statements to indicate that a Development Team would have all the skills necessary to do their work and that no titles were recognized.
Most companies these days have the job titles Software Engineer and Quality Assurance Engineer. Within the people that have the Software Engineer title, you will have people who specialize in front-end, back-end, database, and other specialties. So if it helps, think of the Developers in a Scrum Team as all those people with Engineer in their titles.
@Shashi Kar, I think your difficulty in understanding is because you still see two teams, a development team and a quality assurance team, instead of a single team of people that have the ability to do all the work needed. When you see it as two teams, you will usually create a handoff and a separation of duties. The Developers on a Scrum Team all work on the same thing: accomplishing the Sprint Goal. Embed QA on your Sprint teams and let them work together.
@Daniel Wilhite: Thank you for the reply.
Let me explain a little more about my struggle. Typically, before we called it Scrum or anything, we had monthly releases, with 2 weeks of coding and then 1.5 weeks of testing. During the 1.5 weeks of testing, the coders kept fixing the bugs, and then came the release. This was kind of waterfall.
Now, if we apply this in Scrum with a 2-week Sprint, when does the coding happen and when does the testing happen?
@Shashi Kar, IMHO coding and testing should happen in parallel: as soon as part of the coding is done, the testing follows. It's not good practice to separate these two activities, as both are necessary in order to call the work done.
@Shashi Kar,
As an Agile Coach and Test Coach, it was a struggle in the beginning to unite development and testing in one Sprint, but eventually it becomes easy.
Development and testing must happen in parallel; take a look at the "3 Amigos" approach of BDD (Behavior-Driven Development), even if you do not use the behavior-driven test automation frameworks.
In short: the PO explains the user story to the Developer and the Tester at the same time. While the Developer writes the code, the Tester creates the test cases and test data. Once the code is finished, the Tester tests it. If no bugs are found and all the criteria of the DoD are satisfied, then the user story is closed.
What is important here is that the user stories are small (preferably a maximum of 15 hours of development) and that, once the test cases have been run once with their correct result, they (especially the positive-result test cases) are handed to the Test Automation Expert to be automated in the test automation framework.
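A minimal sketch of what the tester might produce in parallel with the developer (plain pytest with Given/When/Then comments rather than a BDD framework; the story and values are invented):

    # test_cart_story.py - hypothetical user story: "as a shopper, I can
    # add an item to my cart". Doubles as the positive-result test case
    # later handed to the Test Automation Expert.
    def add_to_cart(cart, item):
        # Simplified stand-in for the developer's implementation.
        return cart + [item]

    def test_shopper_can_add_item_to_cart():
        # Given an empty cart
        cart = []
        # When the shopper adds an item
        cart = add_to_cart(cart, "book")
        # Then the cart contains exactly that item
        assert cart == ["book"]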
Hope that this helps.
Why separate testing and coding? All automated testing is code. Unit, integration, and system tests can all be written and maintained at the same time as the functional code. Much of the testing needed can, and should, be done at the lowest level to provide quick feedback.

Much of the testing done by "testers" or "QA" is end-to-end, system-level testing. It is the most expensive and difficult testing to create and maintain. Have your "testers" work with the people writing the functional code to review their unit and integration tests. This will help educate functional coders on how to cover their code efficiently and effectively.

Any testing involving the user interface should be testing the user interface, not validating that the system works. UI testing can occur with unit tests in most of the common UI languages in use today.

Stop thinking of testing as the job of someone special. All developers should be responsible for the quality of the work they do. If they are not validating their work, they are not being responsible.
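For instance, a minimal sketch of UI testing at the unit level (the form rule and names are invented): the validation logic behind a submit button is exercised directly, with no browser or running system involved:

    # test_form_validation.py - hypothetical example: the rule a UI binds
    # to its submit button, unit tested without driving the actual UI.
    def submit_enabled(email, password):
        return "@" in email and len(password) >= 8

    def test_enabled_for_valid_input():
        assert submit_enabled("a@example.com", "longenough")

    def test_disabled_for_bad_email():
        assert not submit_enabled("not-an-email", "longenough")

    def test_disabled_for_short_password():
        assert not submit_enabled("a@example.com", "short")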
Sorry, but that's wrong, due to oversimplifying the domain of testing.
I've tried to write several replies to your post, but they end up turning into half an instruction handbook.
Although an analyst-programmer has some basic skills in analysis and testing, their main focus is on coding. Even with that main focus, most of the time they also specialize in a subdomain of coding (front-end or back-end or mobile or certain dev languages...); it becomes a full-time profession to keep up and stay competitive (continuous improvement).
This is also true for the software tester, who has some basic skills in coding and hardware, yet whose main focus is on testing; and again, the testing domain is so huge that even as a full-time tester you have to specialize in a subdomain (Test Engineer, Test Analyst, Test Automation Expert...).
So unless you find a white raven that lives in a parallel world where the days have 48 hours instead of 24, the chance of finding an analyst-programmer who can test just as well as a Test Analyst, or vice versa, is non-existent.
So while the tasks of testing and coding are done by different individuals, that doesn't mean testing and coding are separated.
Both work from the same starting point, although one has a more technical and the other a more business-oriented approach. Part of the work is done separately and in parallel, but another part is done by bouncing ideas off each other, offering different points of view, and helping/teaching/learning from each other in order to become better.
The trick is to know where and when those interactions have to take place in the Dev Team, and to make them a learning/cooperation experience that the team can run themselves.
On my projects, I have testers as part of the "Development Team".
I have also defined Sprint QA as separate from Release QA.
Sprint QA is the minimum QA needed within a Sprint: technical unit testing (TUT), functional unit testing (FUT), regression testing (RET), etc. Sprint QA is typically performed in the DEV and TEST environments.
Release QA is the QA needed for a release. Often, several Sprints of work need to be 'stitched' together before a product can be released in my world - for example, a COTS product that requires modules or objects to be tested as a system before release. Some examples of release testing include end-to-end/system testing (E2E), system integration testing (SIT), user acceptance testing (UAT), etc. Also included is any incremental data migration testing like MC1, MC2, etc., where each MC is defined as a specific set of data needed for the release. Release QA is typically performed in the QAT environment. I create specific 'Release Sprints' for these test activities.
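One lightweight way to encode the Sprint QA / Release QA split in a test suite (a sketch, not my actual setup) is pytest markers, so each pipeline selects its own subset:

    # pytest.ini - register the two QA stages as markers.
    [pytest]
    markers =
        sprint: minimum QA run within a Sprint (DEV/TEST environments)
        release: QA run before a release (QAT environment)

    # test_examples.py - hypothetical tests tagged by stage. Run
    # `pytest -m sprint` each Sprint and `pytest -m release` for a release.
    import pytest

    @pytest.mark.sprint
    def test_price_calculation():
        assert round(100 * 1.2, 2) == 120.0  # fast, Sprint-level check

    @pytest.mark.release
    def test_order_flow_end_to_end():
        # Would drive the stitched-together system (E2E/SIT); stubbed here.
        assert True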
I have many slides to explain this, but hope you get the gist.
Can you clarify how this ensures a fully tested, Done, and immediately usable Increment is planned and completed every Sprint?
“Can you clarify how this ensures a fully tested, Done, and immediately usable Increment is planned and completed every Sprint?”
It doesn’t. Not at every Sprint.
I'm aware my reply is a bit late, but I'd like to chime in and say that unit tests, regression tests, functional tests, and integration tests are all important pieces of the puzzle. An example of functional testing would be testing the user interface to ensure that all buttons, links, and forms work as expected, or testing user workflows to ensure that they are intuitive and error-free.
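As a concrete sketch of such a functional UI check (using Selenium's Python bindings; the URL and element IDs are hypothetical, and a ChromeDriver install is assumed):

    # test_login_form.py - small functional UI test checking that a form
    # submits as expected. Page URL and element IDs are invented.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_form_submits():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com/login")
            driver.find_element(By.ID, "username").send_keys("alice")
            driver.find_element(By.ID, "password").send_keys("s3cret")
            driver.find_element(By.ID, "submit").click()
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()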