Running Scrum with dependencies on third parties
What best practices can you suggest for running Scrum in an environment where systems need to be integrated with other systems for which changes can only be implemented by third-party companies/contractors, which leverage a Waterfall approach?
Imagine a situation where you start with User Stories and don't yet know which interfaces to which systems you will need (interfaces you will have to contract third parties to implement). When running a Sprint and defining the needed interfaces, should we emulate the interface? Should we define the relevant interfaces and put the Story back into the Backlog until those interfaces are implemented? How do we best manage such situations when running Agile projects?
This is a great question, and the approach could vary depending on the specific circumstances. If the interfaces won't be known until they are available, then a Product Backlog item with a high estimate (reflecting the uncertainty) makes sense; once the information is known, the feature can be re-evaluated, sliced if necessary, and added during Sprint Planning. If the Development Team can define the interfaces they require, or the third party has already specified the interfaces, then emulation might make sense, or you could use the "wait until it is known" approach described above.
Hi Andrew,
this is indeed a tricky situation. My team has been living in such an environment for a long time, so I hope our experience will help you. We develop applications that often interact with hardware (HW) equipment that is also developed in-house, so the issue of incomplete or missing APIs is quite common.
There are several points I would like to mention:
First, obviously, we are not in total control of the product, so defining the Potentially Shippable Product (PSP) can be tricky. We could say that our PSP is the part of the system that we own, but this doesn't feel right to me. Eventually, the product delivered to the customer comprises several parts, of which mine is only one. So we need to come to terms with the fact that the PSP is not entirely ours.
Second, the most problematic issue is that of missing or incomplete interfaces. My take is that you cannot add user stories that depend on an interface to the Sprint Backlog until that interface is well defined. It doesn't have to be implemented in the other system, but it must be well defined. In other words, a well-defined API is a precondition in the Definition of Ready (DoR) for such an item. Once it is defined, you can implement your side of it, and you can test it using a software mocking technique. And this brings me to the third point, the DoD.
How do we define the DoD in this case? I think the correct way is to define Done as the point at which we are ready for integration with the other systems; again, that is the only part under our control. For example, part of the DoD could be completing and passing all the mock tests (a sketch of what I mean is below). Naturally, at the product level, you will need to allocate some time for inter-system integration, which can be part of the current or a subsequent Sprint.
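To make the mocking idea concrete, here is a minimal sketch in Python. All of the names (EquipmentApi, StatusReporter, read_status) are hypothetical; the point is that once the contract is well defined, your side can be implemented and tested against a mock long before the other system exists.

```python
from abc import ABC, abstractmethod
import unittest


class EquipmentApi(ABC):
    """The agreed contract with the other system: well defined, not yet implemented there."""

    @abstractmethod
    def read_status(self, device_id: str) -> dict:
        ...


class StatusReporter:
    """Our side of the integration, written only against the contract."""

    def __init__(self, api: EquipmentApi):
        self.api = api

    def summary(self, device_id: str) -> str:
        status = self.api.read_status(device_id)
        return f"{device_id}: {status['state']} ({status['temperature']} C)"


class MockEquipmentApi(EquipmentApi):
    """Mock used until the real system is available; returns canned responses."""

    def read_status(self, device_id: str) -> dict:
        return {"state": "OK", "temperature": 42}


class StatusReporterTest(unittest.TestCase):
    """Passing these mock tests could be part of the DoD described above."""

    def test_summary_against_mock(self):
        reporter = StatusReporter(MockEquipmentApi())
        self.assertEqual(reporter.summary("dev-1"), "dev-1: OK (42 C)")


if __name__ == "__main__":
    unittest.main()
```

When the real system becomes available, the integration step is then largely a matter of swapping the mock for the real implementation and re-running the same checks end to end.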
Hope this helps,
Michael
> What best practices can you suggest for running Scrum
> in an environment where systems need to be integrated with
> other systems for which changes can only be implemented by
> third-party companies/contractors, which leverage a Waterfall approach?
Which party wishes to use Scrum in this situation, and what benefits do they hope to accrue by implementing it?
Where does the obligation to change lie?
+1 Michael.
Completely agree, and reflects my experience as well.
As a guideline, try not to "accept" Sprint work that depends on outside development in order to meet the DoD. "Stubbing" (mocking) is a common practice in which inputs and outputs are simulated so the team can determine the completeness and quality of its own deliverable.
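For illustration, here is a small sketch of that kind of stubbing using Python's unittest.mock. The client and method names (submit_order, place_order) are made up; in practice the stubbed responses should follow whatever contract has been agreed with the third party.

```python
from unittest.mock import Mock

# Stand-in for the client the third party will eventually provide.
third_party = Mock()
third_party.submit_order.return_value = {"status": "ACCEPTED", "id": 123}


def place_order(client, payload):
    """Our deliverable: calls the external system and interprets the result."""
    response = client.submit_order(payload)
    return response["status"] == "ACCEPTED"


# Simulated input/output lets us judge completeness and quality of our side.
assert place_order(third_party, {"sku": "A-1", "qty": 2}) is True
third_party.submit_order.assert_called_once_with({"sku": "A-1", "qty": 2})
```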
This sounds like a scaling issue: agile teams have to work effectively with waterfall teams. It's not uncommon for several agile teams to have dependencies on multiple centralised functions that are typically non-agile. Let's say each Scrum Team has a dependency on the same function. How do we prioritise these dependencies? The centralised function could easily be overwhelmed by a backlog of requests that it can't service and synchronise within the time-box constraints. Such dependencies, often orchestrated in a command-and-control fashion, effectively take away the agile teams' empowerment to self-organise.

Take a thin vertical slice through the system to discover all of these dependency points, including organisational specialisms such as a reporting or BI function. The product is not complete unless it has such functionality; stakeholders will want to see that the product meets acceptance criteria by observing, say, the BI functionality working in a dashboard that interfaces with the work of the Scrum Teams. So I think the DoD should state that the work actually interfaces with these dependencies and that full end-to-end testing has been performed. This should happen early, as a priority, to verify that all facets of the functionality are operational within a thin vertical slice. Communication and trust are going to be huge factors in such a setup.