How to manage client testing within a sprint
Hello all,
I have an issue in my team: we are trying to adopt Scrum, but we find it very difficult (no Scrum Master to guide us, though many people are certified PM1). We have already run many sprints, but each time we run into difficulties:
We have a pipeline of requests submitted by business users and reviewed by members of our team with analyst skills, who write the stories.
Stories are estimated in a meeting with the whole team (devs + analysts) and then added to the sprint (2-week duration) during the sprint planning meeting if everything is OK. I see one difficulty here: having proper acceptance criteria.
-> To plan our sprint, we calculate a capacity based on the number of devs in the team and the time they will be available. We keep in mind that priorities can change quickly, so we only add stories to fill 80% of the total capacity. (And because an estimate is only an estimate, we can't be 100% accurate.)
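To give an idea, here is roughly how that calculation looks as a small Python sketch. The 80% buffer is what we actually use; the points-per-dev-day ratio and the availability numbers below are made-up example values, not our real data:

```python
# Rough sketch of our sprint capacity calculation. Assumptions: each dev
# reports their available days in the two-week sprint, dev-days convert to
# story points via a rough velocity ratio (made-up value here), and we only
# plan to 80% of the result to absorb priority changes and estimation error.

SPRINT_BUFFER = 0.80        # plan only 80% of raw capacity
POINTS_PER_DEV_DAY = 1.0    # hypothetical ratio; calibrate from past sprints

def sprint_capacity(available_days_per_dev):
    """Story points to plan for the sprint, given each dev's available days."""
    total_dev_days = sum(available_days_per_dev)
    return total_dev_days * POINTS_PER_DEV_DAY * SPRINT_BUFFER

# Example: 4 devs, with holidays and meetings already subtracted
print(sprint_capacity([10, 8, 9, 5]))  # -> 25.6 points
```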
So at this point a developer develops a story and marks it ready for testing, a member of our team with analyst skills tests it, and then... here are the problems. The story is kept in "testing" until we have the client's answer. But sometimes they answer 1, 2, or 3 weeks later because the business users are no longer available.
But then what do we do with those stories at the end of the sprint? What is the best approach?
-> Do we have to change our DoD to say a story is done when the first QA step is done? But then how do we manage stories that are done but still waiting for the client's answer before they can go into a release? And in our heads it means "work is complete, well done", but the client can come back with 3-4 defects 2 weeks later.
-> Do we have to carry it into the next sprint? But because we plan the sprint based on the estimates, it would fill the sprint with work that is already done. Is it because our stories are too big? Some of them can be >10 points.
Also, we work with many different clients, and they think we work too slowly because they don't understand why a low-value request opened 2 months ago is still in draft. How can we change this perception to improve our image in the company?
Thank you in advance for reading this post :) If anyone has answers to guide me, it would be very helpful!
analyst skills tests it, and then... here are the problems. The story is kept in "testing" until we have the client's answer.
Client answer for what? Don't the Developers have the skills to finish the work, including all testing?
Hi,
The client's answer is the go-ahead to push the story to production: they also test the story on our test platform to validate that it is developed correctly and that it's what they really want.
We get a lot of new requirements from the business when they test a story, so they don't want us to deliver to production without the new enhancements.
We don't know how to manage this in Scrum:
- A story waiting for client testing and the "go" for production -> clients are not part of the dev team
- A story complete but not pushed to production because of new requirements given by the client during their tests
Writing this out, I guess we have a problem with acceptance criteria (they are not really managed today). We should probably tell the business to iterate on new requirements in follow-up stories, so a story can be pushed to production even if new requirements come up during client testing.
Do we have to change our DoD to say a story is done when the first QA step is done? But then how do we manage stories that are done but still waiting for the client's answer before they can go into a release? And in our heads it means "work is complete, well done", but the client can come back with 3-4 defects 2 weeks later.
Cutting corners on the DoD would be a bad idea if client testing and approval are mandatory before the release. I had a similar issue in one of my engagements, and the adjustment we made was to the sprint length. We kept the sprint length at 1 month: 2 weeks of development, 1 week of UAT, and 1 week of regression. We had one more environment besides UAT where we deployed the code, and as soon as a story was deployed there, we asked clients to do one round of testing before the actual UAT. You need to hold the clients accountable for UAT completion; if a story is not signed off, we do not move it to production. So every 1-month sprint ends with a build to production. You could follow a similar strategy.
Do we have to change our DoD to say a story is done when the first QA step is done?
You're either done or not done, there is no in between. There is no such thing as "99% done", "almost done", or "done but more testing is needed".
The story is kept in "testing" until we have the client's answer. But sometimes they answer 1, 2, or 3 weeks later because the business users are no longer available.
If someone on the team holds the Scrum Master accountability, why is the long cycle time and waiting not viewed as an impediment and why is it tolerated?
I might also wonder why the Product Owner can't connect with the customers to shorten the feedback loop. Shouldn't they be the voice of the customer?
The Sprint Review exists to help fix this problem. Instead of waiting for the stakeholders to test, they attend an event where the "done" work is discussed with them to determine if there are any needed adjustments that can be made to the Product Backlog. Feedback on work that is "done" would be the topic of those discussions. Look at the section of the Scrum Guide that explains the Sprint Review event. If you are not having this event, then you aren't using the Scrum framework.
This is the section of the Guide that describes the Definition of Done.
Commitment: Definition of Done
The Definition of Done is a formal description of the state of the Increment when it meets the quality measures required for the product.
The moment a Product Backlog item meets the Definition of Done, an Increment is born.
The Definition of Done creates transparency by providing everyone a shared understanding of what work was completed as part of the Increment. If a Product Backlog item does not meet the Definition of Done, it cannot be released or even presented at the Sprint Review. Instead, it returns to the Product Backlog for future consideration.
If the Definition of Done for an increment is part of the standards of the organization, all Scrum Teams must follow it as a minimum. If it is not an organizational standard, the Scrum Team must create a Definition of Done appropriate for the product.
The Developers are required to conform to the Definition of Done. If there are multiple Scrum Teams working together on a product, they must mutually define and comply with the same Definition of Done.
It is the commitment made by the Developers for every increment. I coach teams that the Definition of Done should only contain specifications that can be satisfied by the Scrum Team, and should not contain any actions for entities outside the organization. This is how you communicate to those outside the team what work has been done up to this point, so that they know what to expect from you. You have no control over external resources, so you will always be at their mercy if you include their work as part of the Definition of Done.
Also note that nowhere in that description does it say the work has to be "in production". It is not uncommon for teams I work with to have "done" work that is not deployed to production. Requiring everything to be in production in order for it to be "done" opens you up to problems. What if the Product Backlog item does not describe complete functionality, but only partial functionality, so that feedback can be obtained on the final solution?
It also doesn't sound like you are working on a single product. You seem to be working on multiple small projects for individual customers. The basic premise of the Scrum framework is that all work done by a single team is for a specific Product for which a Product Backlog exists that enumerates the changes needed for that Product. Without the Product defined, you are again not using the Scrum framework.
Time-boxed intervals and user stories do not mean you are using the Scrum framework. These techniques can be used outside of the Scrum framework, but then it is not Scrum. If you decide not to use the entire framework, you will be faced with problems that the framework could avoid, and you will have to decide how to deal with those issues in the methodology that you create. An entity cannot be agile if it has to depend on others. A cheetah and an elephant together are not agile: the cheetah could be agile, and the elephant could be agile (compared to other elephants), but the pair might not be considered agile. So focus on what your organization can do by itself and build your processes around that. Anything required from outside the organization is not something you can control, so you can't ensure it will keep up with your agility.
From previous experience on projects, work items are marked as done when the team has tested them and confirmed they work as required. The ticket is considered closed from the team's perspective when it is handed over to the support team or the client, or released to production. If the client reports an issue 2 to 3 weeks later, this would typically be viewed as an "escaped" defect. In such cases, either reopen the original ticket (tools like Jira allow this) or create a new ticket describing the defect and link it to the original one.
In your scenario, where client feedback is pending, treat errors or changes reported later as you would handle defects reported in a production system.
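If you use Jira, that "new ticket linked to the original" step can even be scripted. Below is only a sketch against the Jira Cloud REST API; the base URL, credentials, issue type, and link type name ("Relates") are placeholders to adapt to your instance:

```python
# Sketch: open an "escaped defect" ticket in Jira and link it back to the
# original story. Uses the Jira Cloud REST API v2; base URL, credentials,
# issue type, and link type are placeholder values for illustration.
import requests

JIRA = "https://your-company.atlassian.net"
AUTH = ("you@example.com", "api-token")  # hypothetical credentials

def open_escaped_defect(original_key, summary, description):
    # 1. Create the defect ticket in the same project as the original story
    resp = requests.post(
        f"{JIRA}/rest/api/2/issue",
        auth=AUTH,
        json={"fields": {
            "project": {"key": original_key.split("-")[0]},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }},
    )
    resp.raise_for_status()
    defect_key = resp.json()["key"]

    # 2. Link it to the original story so the history stays visible
    requests.post(
        f"{JIRA}/rest/api/2/issueLink",
        auth=AUTH,
        json={
            "type": {"name": "Relates"},
            "inwardIssue": {"key": defect_key},
            "outwardIssue": {"key": original_key},
        },
    ).raise_for_status()
    return defect_key

# e.g. open_escaped_defect("PROJ-123", "Client found wrong total in report",
#                          "Reported 2 weeks after QA sign-off, during UAT.")
```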
Rather than over-complicating ticket management, focus on reducing the number of defects or changes coming back from the client and identifying ways to minimize them. For example:
- Were the original requirements vague or incomplete?
- Did the team misunderstand or incorrectly implement the requirements?
- Was the testing process inadequate?
Hi,
First, thank you to all of you for your many answers.
To explain more about the context:
Our sprints contain multiple small features from any of the company's internal users (usually linked to ITSM processes).
So customers can change from one sprint to another.
We have a PO/proxy PO who has tried to involve customers more in our methodology, but it's sometimes complicated to get quick answers from them.
And yes, we don't have a review meeting with all customers at the end of the sprint, because each requested functionality is different from the others.
So here is what I have learned:
- A story can be done without being in production: we can remove the customer test from our DoD. What we have is: job done by developer -> QA test done -> story completed and put on the test platform.
- Maybe to replace the sprint review: we could add to our DoD a small demo to the customers, to get quick feedback from them without waiting for them to complete their tests.
- If defects/new requirements for the functionality are found by the customer during their tests: we open new stories and plan them for the next sprint (or pull them into the current sprint if the devs agree).
So we remove the bottleneck in our process, BUT we have to manage and track the backlog of completed stories waiting for customer tests.
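To make that follow-up concrete, this is how I picture the statuses now. Just a sketch: the names are illustrative and not tied to any tool.

```python
# Sketch of the reworked workflow: the team's DoD stops at QA, and client
# validation is tracked separately so it no longer blocks the sprint.
from enum import Enum, auto

class Status(Enum):
    TODO = auto()
    IN_DEVELOPMENT = auto()
    READY_FOR_QA = auto()
    QA_DONE = auto()               # DoD met: counts as done for the sprint
    AWAITING_CLIENT_TEST = auto()  # follow-up queue, outside sprint capacity
    RELEASED = auto()

# Everything from QA_DONE onward satisfies the team's DoD
DOD_MET = {Status.QA_DONE, Status.AWAITING_CLIENT_TEST, Status.RELEASED}

def counts_toward_sprint(status: Status) -> bool:
    """A story is 'done' for the sprint once the team's DoD is met."""
    return status in DOD_MET
```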
small demo to the customers, to get quick feedback from them
This is a good idea. I originally assumed the customer was not available for the sprint review or demo, but if you can connect with them via a demo, then that will be a good option.