A story of lead-time in digital
BUILDING BRIDGES – In another article for their series, lean digital company Theodo tells us about its efforts to measure and reduce the lead-time to deliver product features.
Words: Benjamin Grandfond, Chief Technology Officer, Theodo
Three months ago, Michael Ballé visited us at Theodo and, once again, he asked us where he could see the lead-time for our digital products. Embarrassed, we answered: “Well… hum… you know, it is actually very hard to see how long it takes, because...” Really, we had no good reason not to measure it.
At Theodo, we build and deploy web applications for our clients. Companies come to us when they need a technical team to build their application fast. In order to start a project, we have to convince our clients that we deliver value faster than others do. Isn’t this alone a good enough reason to track how fast we are in our delivery?
DEFINE WHAT AND WHEN TO MEASURE
First of all, let me give you some context on how we work. Day after day, our developers build what is written on a Trello card – what we call a user story. These represent the smallest shippable part of a feature, and can’t take more than a day to create. A group of user stories constitutes an epic, which represents a full feature. In turn, a project is made up of a number of epics.
For example, an epic could be: “As a seller I can send an invoice to the customer for his order”. And one of the stories to achieve this could be: “As a seller, in the customer’s list of orders, when I click the Send invoice button, I see the preview of the invoice before sending”.
The lead-time is the time between the moment you receive an order from a customer and the moment you deliver it. In our environment, clients order applications without knowing exactly what their components will be; then, week after week, they add epics to the backlog. Finally, they decide when to start building each feature, and the team creates the stories for that epic.
So our initial problem was to understand which lead-time to track. Should we focus on the project lead-time? The epics lead-time? Or the user stories lead-time? We couldn’t choose, so we decided to track all of them, defining each as follows:
- The lead-time of a project starts on the first contact with the client and ends when the first feature is released publicly;
- The lead-time of an epic starts when the epic is created by the product owner and ends when the last user story of the epic is released in production;
- The lead-time of a user story starts when the user story is created and ends when it is released in production.
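All three definitions reduce to the same measurement: the elapsed time between two recorded events on a card. As a minimal sketch of how we could compute it from card timestamps (the field names here are hypothetical, not Trello’s actual API):

```python
from datetime import datetime

def lead_time_days(created_at: str, released_at: str) -> float:
    """Lead-time in days between two ISO-8601 timestamps."""
    created = datetime.fromisoformat(created_at)
    released = datetime.fromisoformat(released_at)
    return (released - created).total_seconds() / 86400

# Hypothetical user story: created June 1st, released to production June 12th.
story = {"created_at": "2016-06-01T09:00:00", "released_at": "2016-06-12T17:00:00"}
print(lead_time_days(story["created_at"], story["released_at"]))  # ≈ 11.3 days
```

The same function applies to an epic (creation of the epic to release of its last story) or a project (first client contact to first public release); only the pair of events changes.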
I started to work with my colleagues Frédéric and Grégoire (both are lead developers on different projects) on the last two lead-times. Frédéric focused on the lead-time of the user stories of his team, while Gregoire set off to understand the lead-time of his team’s epics. Our first results showed that it took us between 27 and 86 days to deliver epics, and between 1 and 43 days to deliver user stories.
IS THERE A PROBLEM TO SOLVE?
At this point, we asked ourselves whether or not we had a problem on our hands. Without a clear standard, we couldn’t say if these numbers were bad or not. So both Frédéric and Grégoire started to define the lead-time they thought we should target and confirmed with the client that these were acceptable for them.
Frédéric learned that the ideal lead-time for user stories is less than 10 days:
- 7 days in the analysis phase: between the creation and the first line of code;
- 1 day in the development phase: between the first line of code and the last one on the user story;
- 1 day in the validation phase: between the last line of code and the moment the product owner validates the user story;
- 1 hour in the deployment phase: after the validation until the user story is deployed online.
We found that 7 user stories out of 19 were late:
- 5 took more than 7 days in the analysis phase;
- 3 took more than 1 day in the development phase;
- 3 took more than 1 day in the validation phase;
- 18 took more than 1 hour in the deployment phase.
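Flagging a late story amounts to comparing each phase’s duration against its target. A small sketch of that check, assuming we have already extracted per-phase durations in days (the dictionary layout is an assumption, not our actual tooling):

```python
# Per-phase targets for a user story, in days (1 hour = 1/24 day).
TARGETS = {"analysis": 7, "development": 1, "validation": 1, "deployment": 1 / 24}

def late_phases(durations: dict) -> list:
    """Return the phases whose measured duration exceeds the target."""
    return [phase for phase, days in durations.items() if days > TARGETS[phase]]

# Hypothetical story: slow analysis and a 5-hour deployment, everything else on target.
story = {"analysis": 9.0, "development": 0.5, "validation": 1.0, "deployment": 0.2}
print(late_phases(story))  # ['analysis', 'deployment']
```

The epic targets (14, 7, and 7 days) can be checked the same way with a different `TARGETS` table.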
On his side, Grégoire learned that epics must be done in less than 28 days:
- 14 days in the analysis phase: between the creation of the epic and the creation of the user stories;
- 7 days in the ready-to-be-developed phase: between the creation of the user stories and the first line of code;
- 7 days in the development phase: between the first line of code and the release of the last user story of the epic.
Based on this, we found out that 8 epics out of 10 were late:
- 8 took more than 14 days in the analysis phase;
- 3 took more than 7 days in the ready-to-be-developed phase;
- 1 took more than 7 days in the development phase.
THE COST OF STOCK
With a clear understanding of the first steps necessary to get closer to the standard lead-time, we could now tackle the analysis phase: the stock of stories the team will have to develop. We dived deeper into it on Frédéric’s project to see the “age” of the stories. Eighty-eight percent of them were more than 7 days old.
This phase can be split into three steps. First, when created, the user story is stored in the backlog. At some point, the product owner prioritizes the backlog: he moves what he wants the team to develop into another column called “how to be specified”. Once per week, the team specifies how they will code every ticket in this column. Finally, they move each story into the “how specified” column before developing it. Here is the distribution:
- In the backlog: 65 stories out of 65 were more than 7 days old.
- “How to be specified”: 14 stories out of 17 were more than 7 days old.
- “How specified”: 6 stories out of 14 were more than 7 days old.
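Counting over-age stories per column is straightforward once each card carries its column and creation date. A sketch of that tally, with invented dates for illustration:

```python
from datetime import date

THRESHOLD_DAYS = 7

def old_story_counts(stories, today):
    """Per column, count (total stories, stories older than the threshold)."""
    counts = {}
    for column, created_on in stories:
        total, old = counts.get(column, (0, 0))
        age = (today - created_on).days
        counts[column] = (total + 1, old + (1 if age > THRESHOLD_DAYS else 0))
    return counts

# Hypothetical board snapshot.
stories = [
    ("backlog", date(2016, 5, 1)),
    ("backlog", date(2016, 5, 20)),
    ("how to be specified", date(2016, 6, 1)),
]
print(old_story_counts(stories, date(2016, 6, 5)))
# {'backlog': (2, 2), 'how to be specified': (1, 0)}
```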
On Grégoire’s project this exercise was even more important, as 11 epics out of 11 had been created more than 14 days earlier.
We know stock has many consequences, even though we haven’t estimated its cost yet. Nevertheless, we were aware that the team went over every item week after week, asking themselves whether they should do it right away or later on. For his part, the product owner spent a lot of time managing his backlog, having to find the best way to prioritize stories and epics. Furthermore, our clients were losing business opportunities: users expecting features that would improve their productivity were disappointed not to see them released, while customers kept struggling with features whose improvements sat in the backlog, watching their sales go down little by little.
To reduce the lead-time, we realized we had to reduce the stock of epics and user stories. The first step was to identify the sources of stories and how many user stories they generated.
- 18.7% of the stories are created from end-user feedback and the issues users face on the application;
- 28% of the user stories are created by the team when they find something to improve;
- 53.3% is work in process: the stories we didn’t manage to finish in previous weeks, the investigations we started and didn’t complete, and the stories the product owner deprioritized.
This revealed all the waste in our process. The analysis phase is our inventory of stories and epics. All week long, the team moves stories and epics up and down the backlog to prioritize them, reading each story again and again to decide whether it is time to work on it – a kind of over-processing. And we over-produce, too, adding new stories to the backlog and identifying technical strategies for them while they are still defective.
NO PULL AND BROKEN FLOW
We focused on overproduction and defects. Years ago, we defined a minimum stock of stories that the product owner has to create to ensure the team always has something to do, but not a maximum. But we discovered, among other things, that one product owner continued to create stories even though the backlog was already full. Here, the pull system seemed to be broken, with the product owner pushing stories without considering the capacity of the team. Ironically, this product owner complained that he didn’t have enough time to work on important things like gathering user feedback, user experience tests, and so on. It seems that we found a solution to his time problem!
There is another interesting thing: we don’t distinguish between designing a feature and splitting its epic into user stories. Both happen at the same time, which results in a lot of missing or incomplete information. It also means other departments (designers, translators, marketing, the infrastructure team) discover at the very last moment that they have to do something critical for the delivery of the feature. In such instances, the flow is broken.
This is where we are today. The window is a little bit cleaner and, even though I believe the future will continue to be challenging, we are now aware of what we are missing. More importantly, we are now facing our problems, which means that our tomorrow can only be brighter and full of new learnings and innovations. I’ll keep you posted on our progress in a couple of months’ time…