[Dev] Pull Requests Being Neglected
Hey there, fellow Scrum-sters,
Our team has been facing an issue for a while now. It may not be limited to Scrum processes, but I still think it fits here, and I'm positive you may have some helpful tips.
So, this is where we stand:
We have a prioritised, refined and estimated sprint backlog. We also have a mutual understanding of the definition of done: a potentially releasable item that has successfully passed through the "pull request" and "ready for deploy" states (we use Kanban with several states in JIRA).
Our dev team starts working from the top-priority PBI downwards. However, as soon as they have "worked through" their tasks, they open a pull request and start on the next PBI. This is obviously not how it should be done: the PBIs are not set to "done" at the end of the sprint, because everyone is neglecting the pull requests.
However, no one really feels responsible for taking care of pull requests. Occasionally, dev members ask others during the Daily Scrum to check PBIs that currently have open pull requests, yet nobody really follows up on them afterwards.
We thought it might be a good idea to introduce a "pull request" hour right after the Daily Scrum, but we are not sure if this is a good way to deal with it. If you have any suggestions on how to make processing pull requests more palatable to our dev team, we'd be grateful. :)
Thank you in advance!
If you are using Kanban, do you have work-in-progress limits? If so, what are they and do you check on them at your Daily Scrums? When you reach your WIP limit, how does the team react or what does the team do?
With regards to a "pull request hour", what does the Development Team think of this? Does it align with the Scrum Values or the principles behind the Agile Manifesto or the principles of Lean Software Development?
Thank you for your reply!
I will try to answer your questions to the best of my knowledge, although I am still new to Scrum (and the least experienced person on our Scrum team):
If you are using Kanban, do you have work-in-progress limits? If so, what are they and do you check on them at your Daily Scrums? When you reach your WIP limit, how does the team react or what does the team do?
While we do aim to apply WIP limits, your question made it very clear to me that we do not enforce them consistently right now.
This is partly due to our handling being too loose, as well as a restructuring of our dev team (the team size was reduced from 9 to 5 people on short notice). We try to avoid having more than 5 PBIs in progress; however, your question "how does the team react or what does the team do?" shows that we don't really know what to do once we have reached the limit. The developer who worked on the corresponding ticket usually asks others during the Daily Scrum to look into the pull requests, yet they often remain neglected. So there is no real consequence to exceeding the limit.
Can you recommend some methods or techniques for proceeding effectively once the WIP limit has been reached?
With regards to a "pull request hour", what does the Development Team think of this? Does it align with the Scrum Values or the principles behind the Agile Manifesto or the principles of Lean Software Development?
While I do see that the "pull request hour" might be considered an additional process and thus potential waste, it was the dev team's idea for facing and channelling the problem, with an event as a structure for "taking care" of it. However, it is not being carried out right now.
Everyone on our Scrum team knows this is currently a problem, and we have tried different approaches which we thought would get us a little closer to a result. However, we have also come to understand that we did not really grasp the source of the problem.
Thank you for taking the time to read, think and reply :)
So there is no real consequence to exceeding the limit.
It sounds like there's a very real consequence of not having "done" work at the end of the sprint. Is the team satisfied with presenting a bunch of unfinished work during sprint review? Do developers really want their names associated with that work?
If you have a WiP limit that the team agrees to but doesn't follow, it might just be a matter of finding a way to enforce that rule. One way to do this might be to lead by example and bring it up at your daily meetings, both to explicitly remind people that you're following the team rules, and to implicitly remind them that they're supposed to be following the same rules.
"Yesterday I completed my development task. We're now at 3 open pull requests and two tasks in development. I can't start my next development task until one of the pull requests is closed. Today I'm going to tackle a pull request to keep our WiP at 5 or less, and then take on another development task."
I have a few thoughts here.
First, instead of just limiting your total number of PBIs, limit the work in the Sprint by some better form of capacity. Unless you are really good at having your PBIs decomposed into roughly the same size, you'll want to consider how big the PBI is. Story points are one method, but t-shirt sizing can be another. Really any level of size estimates would work for this.
Also consider the steps in your workflow and limit the number of work in progress items in each step. If your steps include "To Do", "Code and Test Development", "Code Review / Pull Request", and "Done", your entire Sprint Backlog would start in "To Do". Limit the number of things that can be in the "Code and Test Development" and "Code Review / Pull Request" states. Use a visualization of the states of each item during your Daily Scrum. Don't let developers open a Pull Request if there are too many things in that state.
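As a sketch of the kind of per-state check I mean (the state names, limits, and board contents here are purely illustrative, not from any real JIRA setup):

```python
# Minimal sketch of a per-state WIP check against an in-memory board model.
# States and limits are hypothetical examples, not an actual team's config.
from collections import Counter

WIP_LIMITS = {
    "Code and Test Development": 3,
    "Code Review / Pull Request": 2,
}

def can_move_to(state: str, board: list) -> bool:
    """Return True if moving one more item into `state` stays within its WIP limit."""
    counts = Counter(s for _, s in board)
    limit = WIP_LIMITS.get(state)
    return limit is None or counts[state] < limit

board = [
    ("PBI-1", "Code Review / Pull Request"),
    ("PBI-2", "Code Review / Pull Request"),
    ("PBI-3", "Code and Test Development"),
]

# Two reviews are already open, so opening a third pull request is blocked,
# while starting another development task is still allowed.
print(can_move_to("Code Review / Pull Request", board))  # False
print(can_move_to("Code and Test Development", board))   # True
```

The point isn't the code itself but the rule it encodes: before an item moves into a state, check the count in that state against the agreed limit, and make the refusal visible to the whole team.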
At your Sprint Reviews and Sprint Retrospectives, talk about your flow. Consider tracking things like how long each item was in each state. If you have estimates of size and the time in each phase for each PBI, you can use that to sanity check your estimates. Although not always true, I would suspect that in most cases, larger and more complex PBIs will take longer to code, test, and review. If you start with some measurements, you can have some data to talk about and figure out how to improve.
My advice is to leave aside the issue of so-called “pull requests” for the moment. The team is evidently not self-organizing around a pull-based system yet, despite the word “pull” being used, and hence flow is sub-optimal.
Is there a clear and valuable Sprint Goal for team members to commit to and focus on? What are the consequences of not achieving it? With a Sprint Goal to aim for, the team may have reason to adopt good patterns and practices, such as the limitation of WIP and response to pull signals.
Thank you for your various replies!
@Jason: It sounds like there's a very real consequence of not having "done" work at the end of the sprint. Is the team satisfied with presenting a bunch of unfinished work during sprint review? Do developers really want their names associated with that work?
You are absolutely correct! My wording was wrong. What I wanted to express was that exceeding the WIP limit has no instant, day-to-day consequence during the sprint, but it clearly has a consequence at the end of the sprint.
Concerning your question "Do developers really want their names associated with that work?": I feel the team currently lacks a bit of cohesion due to the aforementioned personnel restructuring, so I doubt they feel fully responsible, although they technically are.
@Thomas: I have a few thoughts here.
First, instead of just limiting your total number of PBIs, limit the work in the Sprint by some better form of capacity. Unless you are really good at having your PBIs decomposed into roughly the same size, you'll want to consider how big the PBI is. Story points are one method, but t-shirt sizing can be another. Really any level of size estimates would work for this.
I am afraid we might have a misunderstanding here. We do estimate PBIs; the estimation is, however, unrelated to the WIP limit count.
@ Thomas: Don't let developers open a Pull Request if there are too many things in that state.
That is the part we are struggling with. "Don't let them" is easier said than done, which is why I was asking for potential techniques or methods to help the dev team adopt WIP limits instead of trying to force them on the team.
@Ian: Is there a clear and valuable Sprint Goal for team members to commit to and focus on? What are the consequences of not achieving it? With a Sprint Goal to aim for, the team may have reason to adopt good patterns and practices, such as the limitation of WIP and response to pull signals.
I think you nailed it! While there is an abstract sprint goal, I think we fail to define it thoroughly. If we took the time to define the sprint goal carefully, the dev team would probably identify more with the sprint, which might lead to them feeling more responsible for achieving the sprint goal (and for following certain techniques, such as WIP limits).
I am afraid we might have a misunderstanding here. We do estimate PBIs; the estimation is, however, unrelated to the WIP limit count.
How do you decide which PBIs to bring into the Sprint Backlog if you don't use the estimates? A first WIP limit is to cap the total size of the Sprint Backlog (which is itself one form of WIP) based on your estimates.
For example, if you are using Story Points to estimate the effort and complexity of PBIs and Yesterday's Weather to help determine your capacity, Yesterday's Weather is your WIP limit for the Sprint Backlog as a whole. Then you would have other limits for how many things are currently in progress, how many things are being reviewed, and so on. These can be based on a raw count or on the estimate associated with each PBI. Personally, I prefer a raw count.
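To illustrate the Yesterday's Weather idea concretely (the sprint history below is invented; the common convention is to average the points actually completed over the last few sprints):

```python
# Sketch of "Yesterday's Weather" as a whole-Sprint-Backlog WIP limit.
# The completed-points history is made up for illustration.

def yesterdays_weather(completed_points, window=3):
    """Average story points actually completed over the last `window` sprints."""
    recent = completed_points[-window:]
    return round(sum(recent) / len(recent))

completed = [21, 18, 24, 15]  # points finished in the last four sprints
capacity = yesterdays_weather(completed)
print(capacity)  # average of 18, 24, 15 -> 19
```

With that number in hand, the team simply refuses to pull more than roughly 19 points into the next Sprint Backlog, regardless of how optimistic planning feels.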
Also, don't forget to capture some data and metrics for discussion in Sprint Review and Sprint Retrospective to help you improve the processes and methods that the team used. Something easy to talk about is how many of the backlog items didn't get done and why. If you're effectively planning a Sprint based on estimates and recent past performance, why were you not able to finish the things that you expected to finish? How long does it take an individual backlog item to go from start to finish? Consider that there may be multiple starts and finishes; two easy ones are the total time from the start of the Sprint until the item met the team's Definition of Done, and the time it took from entering a state to leaving that state (e.g. how long it took to code and test, or how long it took to be reviewed).
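A rough sketch of how time-in-state could be derived from a log of status transitions (the timestamps, state names, and log format here are invented for illustration; JIRA's changelog would be the real source):

```python
# Sketch: computing time spent in one state from (timestamp, new_state) events.
# The transition log below is hypothetical sample data.
from datetime import datetime, timedelta

def time_in_state(transitions, state):
    """Sum the time an item spent in `state`, given chronological transition events."""
    total = timedelta()
    entered = None
    for ts, new_state in transitions:
        if entered is not None:
            total += ts - entered  # the item just left `state`
            entered = None
        if new_state == state:
            entered = ts  # the item just entered `state`
    return total

log = [
    (datetime(2023, 5, 1, 9), "Code and Test Development"),
    (datetime(2023, 5, 3, 9), "Code Review / Pull Request"),
    (datetime(2023, 5, 8, 9), "Done"),
]

print(time_in_state(log, "Code Review / Pull Request").days)  # 5
print(time_in_state(log, "Code and Test Development").days)   # 2
```

Even a crude number like "reviews take 2.5x as long as coding" gives the retrospective something concrete to discuss.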
That is the part we are struggling with. "Don't let them" is easier said than done, which is why I was asking for potential techniques or methods to help the dev team adopt WIP limits instead of trying to force them on the team.
Although the Scrum Master role is defined in Scrum as a servant-leader, sometimes a heavier hand is needed to identify a good set of starting practices and enforce them. The Scrum Guide provides a framework. Since you're also using Kanban, the Kanban Guide for Scrum Teams is probably also useful. Both Scrum and Kanban have things that you need to do; a WIP limit is one example. Scrum and Kanban aren't always the best or right tools for a team to use, but I've found that if you're going to go down the road of process improvement, starting with the things that have had widespread success will at least orient you. Then make small changes, but do so intentionally: understand why the process frameworks you started with had you do something, and make sure that the change is truly the right thing for the organization. The right thing isn't always the easy thing for the team, especially when a change is new; you need sufficient time to evaluate a process and make minor tweaks.
Not sure if it really works, but what about shifting the goalposts?
Change the DoD from the "ready for deploy" state to "deployed in an environment" once development is done, so that the developer is forced to close the pull request and merge the code. If you have an automated deployment pipeline that can be utilised, leverage it.
Your team can use this environment to give the sprint demo during the Sprint Review meetings.