Vertical slicing techniques for user stories in an ETL (DataStage to Ab Initio) migration project
Hello,
I have a question below for which I am looking for input/suggestions from you all.
Background:
I'm currently with an ETL squad deployed to migrate DB jobs from DataStage to Ab Initio; we also use a GoldenGate setup to extract the data and pass it on to destination tables. The scope is already defined, i.e. migrate all the jobs from DataStage to Ab Initio so that DataStage can be shut down for end users.
However, in our Sprint Planning and refinement sessions we organize and prioritize which jobs to migrate to Prod. The user stories are written along the lines of "Conversion of PXJ_WEBATM_HOST_RPT job"; the developer(s) know the corresponding graph in DataStage and work from it.
As Scrum Master, I have coached the team to split each user story into minimal tasks that can be done in a day so that we can estimate better. The tasks each story currently has are: 1. Development, 2. Unit Testing, 3. Code Review, 4. SIT/UAT Testing, 5. Prod Acceptance/Rejection, 6. Done.
Question:
Are there any specific slicing techniques that can be tailored to ETL projects? The team is pretty new to slicing techniques and has no experience with them.
Thanks in advance for your suggestions
Cheers!
You say that scope is already defined. What uncertainty is there, which can then be brought under control, Sprint by Sprint? Any "slicing" of work should help the team frame and meet Sprint Goal commitments for this purpose.
Ian, as I mentioned, the scope describes the end outcome rather than the increments: "We already have the scope defined, i.e. migrate all the jobs from DataStage to Ab Initio so that DataStage will be shut down for end users."
This is what the organization would like to see four months down the line. To achieve it, we are picking the work up in chunks (migrating module by module) and pushing to Prod every two weeks. This scope is very clear and we are aligned with it.
My question was rather on the user story side. When the team picks stories during the Planning session, the activities/tasks involved in the HOW part are created by the team. As of now, we follow a very basic way of doing it. Is there anything specific to ETL? Would having something ETL-specific accelerate our release cycle, or could it work the other way around?
Thanks.
Ian's response is, as always, a deeper suggestion to consider whether or not you need Scrum at all (when there is no uncertainty, there is no need for, and maybe no value in, "doing Scrum") :) On this forum you can receive expertise regarding Scrum itself.
Let me try to help you with your case, although bear in mind that this advice needs to be adjusted and validated through a number of experiments in well-run Scrum.
My experience here is as follows: you move concepts (data pipelines?) from one BI technology to another. The uncertainty (an important factor to consider when using Scrum, as Ian suggested) is usually "what is done in technology A, can we do it similarly in technology B?" or "this ETL mapping is specific to technology A, how could we design and implement it in B?". You should probably split your work vertically per full data pipeline (something that may require designing part of the data model, providing data transformation, cleansing, mapping, etc.). This is your full user story.
To split this story into smaller stories, when required, you could consider R&D work ("how do we do it in B?"), or implement the general case first and then add further stories that enrich the process with cleansing, special cases, data validation, etc. (see the sketch after the two splits below):
vertical story (full data pipeline) --split1-> spike (R&D) + work
vertical story (full data pipeline) --split2-> basic case + cleansing data + data validation
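To make the second split concrete, here is a minimal Python sketch of one such vertically sliced pipeline. It is only an illustration under assumed names: the sample rows, field names, and print-based load step are hypothetical, not taken from the actual project. Story 1 delivers the basic case end to end (extract, basic transform, load); stories 2 and 3 later insert the cleansing and validation steps into the same chain.

```python
# A minimal sketch of one vertically sliced pipeline (split2 above).
# All names and sample data are hypothetical placeholders.

def extract():
    """Slice 1 (basic case): pull raw rows from the source.
    A hard-coded sample stands in for the real source feed."""
    return [
        {"atm_id": "001", "amount": "125.50", "host": "WEBATM"},
        {"atm_id": "002", "amount": None,     "host": "WEBATM"},
    ]

def transform_basic(rows):
    """Slice 1: only the straightforward mapping, no special cases."""
    for row in rows:
        yield {"atm_id": row["atm_id"], "amount": row["amount"]}

def cleanse(rows):
    """Slice 2 (added by a later story): handle nulls and defaults."""
    for row in rows:
        row["amount"] = row["amount"] or "0.00"
        yield row

def validate(rows):
    """Slice 3 (another later story): reject rows that break the rules."""
    for row in rows:
        if float(row["amount"]) >= 0:
            yield row

def load(rows):
    """Slice 1: write to the destination table (printed here)."""
    count = 0
    for row in rows:
        print("LOAD:", row)
        count += 1
    return count

if __name__ == "__main__":
    # Story 1 ships extract -> transform_basic -> load end to end;
    # stories 2 and 3 later insert cleanse and validate into the chain.
    load(validate(cleanse(transform_basic(extract()))))
```

The point of the vertical slice is that the first story already moves data all the way from source to destination, so each later story only deepens an already working pipeline.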
From your explanation, it seems the project is more technical and not much of it is end-customer facing. Your team knows the end goal to be achieved (like the total number of jobs to be migrated), and all you want to figure out is the 'how to do it' part.
Here I agree with Ian and Tomasz: your team needs to figure out what problem you will address by working the Scrum way, because you may not have a definite Increment as such to create every Sprint. If you want to create a Product Backlog item for each job separately, then you could consider working with Kanban principles. That way the team can look at throughput, measuring how many jobs were migrated in a given time, which helps you estimate better.