So you've got the bright idea that you need to measure your software project's productivity. Or maybe it was your boss's idea.
But what to measure? Remember that your team will take whatever you choose to measure seriously, and will shift their behavior to perform well against that metric, for better or worse.
Let's take a look at some popular options:
1. Lines of code (KLOC) per developer? Bleh! That metric rewards verbose and duplicated code, and discourages reuse and leveraging third-party libraries. Not exactly what you were looking for, is it?
2. Function points? This tries to measure an abstraction of functionality in your system using a somewhat arcane analysis. You might need a consultant to help you sort through this one. Then you can debate the results.
3. Ah...how about the number of tasks completed? That's got to be a good metric, right? Except that you might wind up with lots of "filler" tasks created by the team in order to satisfy this metric. "Want more tasks completed, boss? I'll get right on that!"
4. What about total time worked? This has the same problem as #3 - time worked has very little correlation to productivity. You might wind up with lots of overtime, but no progress. Ever have a boss that was more concerned with "how hard" everyone seemed to be working instead of what was getting done? Right.
OK, this is getting frustrating. It seems like whatever you try to measure will backfire, and give you a result you don't really want.
But what if there were a magic metric that would not only give you a clear picture of your project health, but would as a side effect encourage higher productivity?
Well, there is a good candidate for this, and it's called Running, Tested Features (RTF). Ron Jeffries has written about RTF, and there is also a video interview with him, complete with whiteboard scribbles.
In a nutshell, RTF is a measure of how many things of real business value got done, where "done" means that they actually work and are ready for deployment.
On a typical yearlong waterfall-style project, you might have several months of planning and analysis, during which the RTF count would be zero. At some point, development would start, but might focus on frameworks, infrastructure, etc., which would still produce an RTF value of zero.
In fact, you might get to within 3 months of the deadline before the first real feature is "done". That might be the point at which the team discovers a serious technical challenge and explains that the project will be at least 6 months late.
A typical Agile project, by contrast, works in time-boxed iterations no longer than 4 weeks in duration. The output of each iteration is a set of one or more running, tested features, delivered in order of customer priority. During the iteration there is a little analysis, a little design, some development and testing, and a little documentation, if necessary.
Both projects might finish at the same time, but which one is more likely to find out about a technical risk sooner? Which one might be able to release if market conditions require the date to be moved up?
In terms of productivity, measuring RTF is a quick way to see the state of the team. A healthy Agile team should be able to consistently deliver a set of stories over time, with any unexpected challenges or risks averaging out against features that turn out to be easier than expected.
A sudden slowdown over a few iterations might be a warning sign of an internal or external quality problem (e.g. the team is spending more time fixing bugs than adding features, or the architecture is too brittle to readily accommodate change).
The key, though, is that this metric leaves enough time to react and address project problems before they impact the business.
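To make this concrete, here is a minimal sketch of how you might track RTF per iteration and flag the kind of sudden slowdown described above. All the names and thresholds here are illustrative assumptions, not part of any standard RTF tooling; the idea is simply to compare the recent delivery rate against the team's overall average.

```python
def rtf_trend(completed_per_iteration, window=3, slowdown_factor=0.5):
    """Given the number of running, tested features completed in each
    iteration, flag iterations where the recent average drops well below
    the overall average -- a possible warning sign of quality problems
    or a brittle architecture."""
    flags = []
    for i in range(len(completed_per_iteration)):
        history = completed_per_iteration[: i + 1]
        overall_avg = sum(history) / len(history)
        recent = history[-window:]           # last few iterations
        recent_avg = sum(recent) / len(recent)
        flags.append(recent_avg < overall_avg * slowdown_factor)
    return flags

# Example: a healthy team delivering ~4-5 features per iteration,
# then a sudden slowdown in the last three iterations.
delivered = [4, 5, 4, 4, 5, 1, 0, 1]
print(rtf_trend(delivered))  # only the final iteration is flagged
```

A steady stream of features keeps the flag off, because an occasional hard iteration averages out against easy ones; only a sustained drop trips the warning, which is exactly the signal that leaves time to react.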
So what are you measuring in your software projects?