Determining velocity for a large team with different kinds of team members
Hello all,
I'm a Scrum Master and I'm having trouble determining my team's velocity. The team has been around for about 6 months, working in 2-week sprints, and we still haven't established a velocity. The biggest challenge is that we have different kinds of engineers and the nature of their work differs. For instance, we have software developers, an application engineer, QA, and a mechanical engineer. The other team members have little or nothing to do with the mechanical engineer; his work is largely independent of the rest of the team. We have 4 active developers, 3 QA, and 1 each of the others. Because the nature of the work differs, the pointing system also varies. I realize we are too big for a typical agile team, but I would like ideas on how to handle velocity here. Let me know if there is any other information I can provide to make this easier. I appreciate all of your responses.
First things first - what is the team's opinion on that problem? Did you talk with them about it? 🤔
Putting aside the type of scale or the "perfection" of your estimates: velocity is nothing more than a lagging indicator that tells you what has already happened, so what are you trying to determine here? Just measure the average velocity over the past 3 - 6 sprints and that would be enough for that metric. You shouldn't overthink it or become too dependent on it - if you do, you will probably fall into pitfalls such as re-estimating "Done" PBIs that the team feels were underestimated.
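If it helps to see what "just measure the average" means in practice, here is a minimal sketch - the sprint point totals below are made up purely for illustration:

```python
# Minimal sketch: average velocity over the last few sprints.
# The sprint point totals below are made up purely for illustration.
completed_points = [21, 18, 24, 19, 22, 20]  # points "Done" per sprint, oldest first

last_n = 3  # look back over the last 3 - 6 sprints
recent = completed_points[-last_n:]
average_velocity = sum(recent) / len(recent)

print(f"Average velocity over the last {last_n} sprints: {average_velocity:.1f} points")
```

That rolling average is all there is to the metric - a lagging indicator you can use to sanity-check the next Sprint forecast, not a target to hit.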
Velocity can be a handy metric - but it is not even a part of the Scrum Guide. What ultimately matters is this: is the Scrum Team able to create a "Done", useable, and potentially releasable product Increment within a Sprint's timebox?
If not, then follow your good sense and observations. You already mentioned that in your opinion the team is too big; maybe you should focus on that rather than on "determining" velocity. Or maybe there are other things that are not in place - did you consider, for example, Refinement, the Sprint Goal, or the Scrum Values?
Going to the fundamental concepts: do you have a team? Can you define the product or service that this team supports? Are all of these skills necessary to deliver that one product or service? From what you describe, especially statements like "the nature of the work differs" and the fact that some members of the team are more isolated in the work that they do, I think you may not have the right structure for your team or product.
@Piotr Górajek - the reason I'm trying to identify the team's velocity is that we often have items spilling over into the next sprint. Identifying how much work the team can take on, given its capacity, is important so that we deliver what we commit to and commit only to what we can realistically handle. The team and the PO have also raised concerns that we may be over-committing in the sprint. Hence, I have been working to address this issue, and from many sources I've learned that determining the team's velocity and committing work accordingly can help.
And yes, we have started working on backlog refinement and do have sprint goals.
I'm a Scrum Master and I'm having trouble determining my team's velocity.
Start off with the Team's Definition of Done. Is it of release quality? Do the team meet it each and every Sprint by delivering increments that are fit for immediate release?
If not, the velocity is zero.
The word velocity is not mentioned at all in the Scrum Guide, and neither is any type of "pointing". The intention of Scrum is to produce a potentially releasable increment of value in every Sprint. So instead of trying to find a magic formula for how much work your team can do, help the team learn to recognize how much needs to be done in order to provide that increment of value.
You say that you want to determine the velocity because stories are being carried over and the team feels that they are overcommitting. So analyse the data you have and look for opportunities to improve. Here are a few things I'd look at based on your comments.
- Are the stories more complex or involved than the team originally anticipated and thus taking the team longer to complete than originally thought? That points me towards an issue with refinement, not taking on too much work. They aren't identifying the work properly.
- Are you trying to have enough work for everyone on the Development Team to have 80 hours of work in the Sprint? If so, stop doing that because you will usually uncover some information while doing work and that takes time that was not anticipated.
- At the end of the Sprint are some people idle while others are trying to complete stories? Is there anything that can be done by the idle individuals to help the ones that are still working?
- Are there interruptions for the Development Team, such as production defects, that pull them away from the work that they intended to complete from the Sprint Backlog? If so, find a way to mitigate that problem.
- Look at the team's throughput and cycle times instead of the number of story points being completed. Those measure the actual time it takes to complete items, rather than how well the team can guess at the work to be done (a rough sketch of the calculation follows this list).
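As a rough sketch of what measuring throughput and cycle time looks like - the dates and item names below are invented, just to show the calculation:

```python
# Rough sketch: throughput and average cycle time from completed work items.
# Dates and item names are invented purely for illustration.
from datetime import date

# (item, date work started, date the item was "Done")
completed_this_sprint = [
    ("story-101", date(2023, 5, 2), date(2023, 5, 9)),
    ("story-104", date(2023, 5, 3), date(2023, 5, 12)),
    ("story-107", date(2023, 5, 8), date(2023, 5, 15)),
]

throughput = len(completed_this_sprint)  # items finished in the sprint
cycle_times = [(done - started).days for _, started, done in completed_this_sprint]
average_cycle_time = sum(cycle_times) / len(cycle_times)

print(f"Throughput: {throughput} items this sprint")
print(f"Average cycle time: {average_cycle_time:.1f} days")
```

Both numbers come straight from dates you already track, so there is no guessing involved.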
@Piotr Górajek, @Thomas Owens and @Ian Mitchell all have some extremely valid and thought provoking suggestions. There isn't going to be a single thing that you can do to address this. It will take effort to investigate from multiple perspectives in order to determine some things that you can try. Yes, I said "try" not "do". You will most likely need to experiment with multiple alternatives in order to find the right combination of techniques to improve.
Excellent advice so far. I would just add that you need to be using the Daily Scrum as a key inspect/adapt event for your Development Team to assess progress on the Sprint Goal and the sprint forecast.
If the team has any concerns about completing items in their Sprint Backlog, as long as the Sprint Goal is still viable, it is better to de-scope an item that has not started yet than to allow unfinished work to carry over. This may also help your Development Team determine what they are capable of delivering each Sprint.
Keep in mind, the Scrum Guide only tells teams to use their capacity and past performance as guides to help them determine what they can deliver. How the team does that is completely up to them.
Velocity is for the team to track and forecast its own progress compared to how it has done so far. Velocity is optional. Given that you have isolated team members not working in tandem with the rest of the team, and that the team size is not within the limits recommended in the Scrum Guide, my advice would be to focus on making the team more cross-functional and self-organized rather than on velocity at this time.
Please note that velocity has little value if the team is not cross-functional and a consistent capacity has not been established. It's better to focus on the mandatory elements of Scrum before delving into practices that are not mentioned in the guide.
I would be more concerned about the following anti-agile patterns than about velocity:
"For instance, we have software developers, application engineer, QA and mechanical engineer. Here other team members have less or nothing to interact with the mech engineer. His work is very much independent of rest of the team"
1) If the rest of the team thinks there is nothing to interact with, how can it be cross-functional? Where is the shared responsibility? What are the testers testing? How are they estimating in relative story points (a necessary input for calculating velocity)?
2) Though it's hard to completely get away from specialized skill sets, the team needs to practice calling themselves "Development Team members" rather than identifying by their specialization. Also, the rest of the team needs to interact with the mechanical engineer to get his input during Backlog refinement sessions - to understand what kind of work he will be contributing towards the Sprint Goal, how it integrates with the work from the rest of the team, and how it can be tested.
I will give an example of a similar situation in my team. We have iOS, Android, and Cloud services developers, and we have Mobile testers and Services testers in the same Scrum team (yes, an unusually large Scrum team). When we get a scope item that involves development across iOS, Android, and services, and that needs to be tested, we all work as ONE team with the same sprint goals. However, we have had sprints with iOS-only work (for example, Apple Wallet integration is iOS-only and not applicable to Android), or with Mobile-testing-only effort where development is done once but testing needs to be done for each state/region where the feature is going live. In such cases, development and testing for 1 state may be completed in 1 sprint where the entire team works on it together. However, in subsequent sprints, only the Mobile testers are occupied, finishing testing of the feature for the rest of the states where it will be rolled out in the US. At that time, we end up looking for work to keep the developers busy.
This cycle creates a situation where we always have lopsided scope, utilizing portions of the team for diverse items. While we are assessing utilization and overall value delivered, and making progress by using everyone's capacity and delivering continually, clients have expressed concerns that we do not have an established velocity they can use for business forecasting. In sprints where everyone works on a common goal, total points are lower than in sprints where team members work on diverse independent goals (higher points), and hence there is no consistency.
How do we handle this?
How many products are you working on, and does each of them have a clear owner who knows how to maximize product value?
Ian, it is one single mobile app with multiple features in it. We have clear owners for the features; however, we are struggling to forecast the team's velocity for future releases due to this anomaly of certain sprints with workstream-based scope items (iOS-only stories, Android-only stories, testing-only stories).
However, in subsequent sprints, only the Mobile testers are occupied, finishing testing of the feature for the rest of the states where it will be rolled out in the US.
It sounds like your Development Team aren't producing a "Done" releasable Increment every sprint, but are instead completing milestones in a waterfall, and then handing off to other parts of the team, so that the "Done" Increment can be achieved later.
A "Done" Incremement is required every sprint, otherwise transparency is lost, and inspecting velocity or progress become less meaningful.
it is one single mobile app with multiple features in it. We have clear owners for the features; however, we are struggling to forecast the team's velocity for future releases due to this anomaly of certain sprints with workstream-based scope items (iOS-only stories, Android-only stories, testing-only stories).
It doesn't sound like you have a clear owner for the product.
Without a Product Owner, you can expect no clear view of what the product is, how the work remaining for it ought to be accounted for on the Product Backlog, or how to make any forecasts regarding product release.
Well, Simon - we are releasing the feature in increments. The feature is rolled out per state (or a few states at a time; for example, CA and WA are released first, followed by NM and CO, then GA and VA, etc.) in subsequent releases. Hence, we do have a "Done" increment each sprint, which is tested, certified, and ready for release.
Ian - the product team can decide to have a feature that is specific to one user channel. For example, if the business wants to enable Apple Wallet as a payment option, that is a defined feature with a clear owner, but it applies only to iOS, since Apple Wallet is not available on Android. However, the Scrum team is composed of developers for both channels, iOS and Android. In this case, the Product Owner for Bill Payments can decide whether to enable a particular payment type (credit card, ACH, digital wallet, gift card). So yes, there is a clear product owner and roadmap; it's just that the roadmap might not always have features rolling out for both channels (iOS and Android).
The challenge is scope items that do not need the skills of the full Scrum team, which causes an imbalance in utilization. One change we are working on is building the team to be cross-functional to avoid such challenges in forecasting a velocity. Any other suggestions?
So yes, there is a clear product owner and roadmap; it's just that the roadmap might not always have features rolling out for both channels (iOS and Android).
My advice is to reconsider the Product Owner role, and how it ought to be implemented for one product with one Product Backlog. No-one else is accountable for the Product Backlog and how it is structured and ordered. When this is genuinely clear, and work is "Done", the PO can make forecasts and projections for release.
Well, Simon - we are releasing the feature in increments. The feature is rolled out per state (or a few states at a time; for example, CA and WA are released first, followed by NM and CO, then GA and VA, etc.) in subsequent releases. Hence, we do have a "Done" increment each sprint, which is tested, certified, and ready for release.
Ah, I didn't quite understand that at first. Now it's clearer to me. It's not a situation I've personally encountered, but I'm intrigued by it.
When you have specialists in a Development Team, keeping people busy in itself isn't going to give you a return on your investment. Optimize for value delivered, rather than work done.
I'll refer to your non-testers as programmers, to differentiate from all members of the Development Team, who are considered Developers.
On a high level, it sounds like you currently have a short period where it makes sense to invest in programming, followed by a longer investment in testing (it would be interesting to know if that tends to involve bug fixes, and/or customization and compliance changes per state).
Is it a reasonable assumption that any other programming work would depend on testing? And would this programming work be of lower perceived value than rolling out the latest Increment nationwide? If so, then this investment in programming would be wasteful.
What can be done to eliminate waste? Would a greater investment in testing (either more testers, or more efficient tools and processes) help to remove the bottleneck?
How willing are your programmers to help out with this effort? Maybe they would do the same work as the testers (even if they're not as efficient), or be able to program automated tests, or maybe they can refine or develop in a way that optimizes for a shorter testing time.
Is it an option to develop less functionality within a single sprint, and have that tested for a national release within the same sprint, so that the next sprint will allow further value delivery at a sustainable pace?
Simon!
When you have specialists in a Development Team, keeping people busy in itself isn't going to give you a return on your investment. Optimize for value delivered, rather than work done.
I completely agree with this, and we are finding ways to utilize the capacity to deliver value.
How willing are your programmers to help out with this effort? Maybe they would do the same work as the testers (even if they're not as efficient), or be able to program automated tests, or maybe they can refine or develop in a way that optimizes for a shorter testing time.
This suggestion is very helpful. The programmers may not be as efficient as the testers, but they can be involved in programming automated test suites (which involves coding, their strong suit) and increase the pace of overall testing. This would help utilize the programmers' capacity while continually delivering value. I will run this idea by our team and check their willingness to contribute in this fashion.
Also, rolling out regionally is by plan and not due to lack of readiness. The business wants to roll out features in phases across states based on a given state being operationally ready to start consuming the feature, the infrastructure's capacity to evolve incrementally (adding cloud instances organically) to take on additional load as the feature expands to more territories, and the like. Hence we are in this unique position, and it has become so persistent that we now need a solution. Initially we anticipated these would be one-off cases and that once the feature was done we would get back to our usual feature release cycle. However, this is becoming a pattern, and hence I sought out this forum for help.
Simon, I really appreciate you taking the time and providing valuable insights. I have something to work with now!
My advice is to reconsider the Product Owner role, and how it ought to be implemented for one product with one Product Backlog. No-one else is accountable for the Product Backlog and how it is structured and ordered. When this is genuinely clear, and work is "Done", the PO can make forecasts and projections for release.
Ian - yes, we definitely need to push for narrowing down to that one Product Backlog and for full accountability on the product side so that we can navigate these situations. I will take this up with our business team in the next retro. Thank you very much!
Also, rolling out regionally is by plan and not due to lack of readiness.
That's interesting to hear, in combination with your testing strategy. Having a releasable Increment is not the same as choosing to release your Increment to any/all customers.
Just one final thing to consider: does it still make sense in your context for testing to be tied to a regional/state release? Can it not be something that is included as part of the "Done" Increment, meaning that the Increment can be released anywhere at a later stage, whenever desired/required?