I’ve heard the word ‘efficiency’ used a lot over the years in the context of agile product development. The conversations usually go something like this:
“This seems like an inefficient process … we have to do everything multiple times, we often have to do rework and fixes, and it seems really inefficient to me. If we could just take the time to do it right the first time, it would be a lot more efficient.”
What nobody ever explains is this … what is it that they want to be efficient at? There are a multitude of variables involved in software product development (I am no longer using the word ‘project’ … I am purposefully trying to remove it from my lexicon), and we can only optimize a few of those variables at a time. So here is the central question:
“When you say that you want efficiency, what is it exactly that you want to be efficient at? And what are you willing to give up to get those efficiencies?”
When you dig a little deeper into these conversations, they almost always center on the efficient use of individual people’s time.
“I don’t want to have to do this multiple times. I know that this isn’t complete yet and that there are going to be issues that are going to come back that will force me to spend more time on it. It isn’t an efficient use of my time. I’m too busy for this.”
Now of course, if you were to have this conversation with every person and every function on the team, they would probably all say the same thing. Agile teams move fast, on purpose, and if our goal, the thing we want to be efficient at, is the use of individual team members’ time, then the thing to do would be to give everyone more time to do their particular thing.
What do you think this would do to our schedule … if we gave everyone more time? Of course, it would grow immensely. Long periods of time would pass before we ever delivered anything. Sound like waterfall? That’s because it is! The traditional approach is to ask each group how long it would take to do their part, with the assumption that they do it once, do it right, and never have to touch it again. Lo and behold, we have the two-year software effort that never actually delivers much of anything!
This is a classic example of ‘local optimization’. It is a strange paradox that when you design a system to make each individual part ‘time efficient’, you end up with an overall system that is highly time inefficient. Years can go by without a delivery. It also turns out to be a very expensive way to work: the most efficient use of people’s time results in the most expensive way to deliver. So, in this model, we sacrifice the overall schedule and budget in exchange for making individual time more efficient. Curious, isn’t it?
What’s to be done? Well, there are many other variables involved in product development. Suppose we try to optimize something other than the individual’s time? Here are just a few variables that we could try to be more ‘efficient’ at:
- Opportunities for customer feedback and adjustment
- Number of chances to build it right (quality)
- Frequency of delivery
Choosing to become more efficient at any of these takes us down the agile path of iterative development, where we do things multiple times ON PURPOSE.
If we want to optimize opportunities for customer feedback, we would use tools from Lean Product Development to build MVPs (minimum viable products) and MMPs (minimum marketable products), putting something out that customers can try and respond to so that we can make adjustments (do it again!) based on actual customer feedback.
If we want to optimize the number of chances we get to build the system right, we would develop many interim versions and subject them to performance testing, usability testing, security testing, and so on, so that we can find the issues early and have time to address them (yes, through rework).
If we want to optimize for frequency of delivery, then we might go down the DevOps path of automating everything possible so that machines can do as much of the constant rework as possible. The machines are always building and rebuilding, testing and retesting.
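To make the spirit of that concrete, here is a minimal, hypothetical sketch in Python of a machine doing the rework for us: a tiny watcher that re-runs the test suite whenever any source file changes. The `src` directory and the `pytest` command are assumptions for illustration only; a real DevOps pipeline would use dedicated CI/CD tooling, but the principle is the same.

```python
import hashlib
import pathlib
import subprocess
import time

SRC_DIR = pathlib.Path("src")           # hypothetical source directory
TEST_CMD = ["python", "-m", "pytest"]   # hypothetical build/test command

def snapshot(directory: pathlib.Path) -> str:
    """Hash every Python file so we can tell when something has changed."""
    digest = hashlib.sha256()
    for path in sorted(directory.rglob("*.py")):
        digest.update(path.read_bytes())
    return digest.hexdigest()

def main() -> None:
    last_seen = ""
    while True:
        current = snapshot(SRC_DIR)
        if current != last_seen:
            # Something changed, so the machine does the 'rework':
            # it rebuilds and retests without anyone having to remember to.
            subprocess.run(TEST_CMD, check=False)
            last_seen = current
        time.sleep(5)   # poll every few seconds

if __name__ == "__main__":
    main()
```

The specific tooling is not the point; the point is that every rebuild and retest is done by a machine, so repeating the work costs the team almost nothing.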
Now, here is the strange thing about all of this. Most of us would probably agree that the more chances you have to get something right, the more likely you are to get it right. And when you optimize for the ‘number of chances to get it right’, you actually get FASTER delivery. Becoming really good at giving yourself multiple chances takes you down the road of quickly doing small bits of work that may be incomplete. But you put those small bits of work out there early and often and get feedback on them quickly so that you can make the necessary adjustments. This usually turns out to take quite a bit less time than trying to get it exactly right in a single pass, which, by the way, I’ve never actually seen done in 30 years of software development. Agile methods may sacrifice the individual’s efficient use of time in exchange for greatly improved overall speed of delivery. AND we get lots of chances to get it right, which is why quality usually goes UP when using agile.
Many firms have reported 30-40% improvements in time-to-market from agile with simultaneous improvements in quality. Not bad for a system that is ‘inefficient’.