
Monday, February 11, 2013

The one ratio to rule them all

I was reading from "Good to Great", excellent book, and it mentioned one of the things successful companies did was chase a simple, single ratio. Sure, there's always more to it, but that one ratio helps focus the team. For most software departments, that ratio is something like:
(Requested Features / Incident Count ) per developer per month.
Basically, you want to crank out lots of features and keep your incident count very low. "Emergency" deploys, production bugs, outages – those all essentially count as incidents.
All the good things either help increase requested features or decrease incidents, and hence drive this ratio up.
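As a quick back-of-the-envelope illustration, here is a minimal Python sketch of the ratio; the function name and the sample numbers are made up for the example, not taken from the book.

def feature_incident_ratio(features_shipped, incident_count, developer_count):
    # (Requested Features / Incident Count) per developer per month.
    if incident_count == 0:
        # A zero-incident month is simply "great"; avoid dividing by zero.
        return float("inf")
    return (features_shipped / incident_count) / developer_count

# Hypothetical month: 24 requested features shipped, 6 incidents, 8 developers.
print(feature_incident_ratio(24, 6, 8))  # 0.5

The absolute number matters less than whether your team's practices are driving it up over time.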
Increase: Requested Features
·         Application frameworks
·         Reusable blocks
·         Code generation
·         High-quality, motivated developers
·         Tools
·         etc…
Decrease: Incident Count
·         Automated build and deployment
·         High unit testability
·         Low QA bug count
·         DevOps, application monitoring
·         Developer conscience
·         etc…


Most of the time, a team gets into trouble when it starts chasing things other than this ratio:
·         Hours on a timesheet (instead of actually producing something)
·         Face time in meetings or the office (instead of actually producing something)
·         Thinking the way their manager thinks (which may not actually increase feature output)
·         High Lines-of-Code count (more code != more features)
·         "Feeling" busy (merely feeling busy doesn't mean cranking out useful code)
·         Cool technology for the sake of cool technology  
·         Job security via undocumented process
·         Coding to show how smart you are (as opposed to coding to deliver the requested feature)
·         Rating/grading the quality of developers (as opposed to making the business happy with working features)
·         Avoiding mistakes (as opposed to adding value)
Sure, some of these things aren't bad by themselves (avoiding mistakes, using new technology), but they are secondary to the real goal. As long as a developer – or worse, an entire team – chases them instead of the real goal, they will always be less than great.

Sunday, February 26, 2012

Bluffing Line Count and why it's still useful

I'll take it as a given that every developer knows that line count is a problematic metric. You know you've got problems when someone with power thinks that "Homer did twice as much work because he wrote twice as many lines". Why is line count so unreliable? Here are some reasons to get started:
1.       Significant lines vs. raw lines, i.e. whitespace, comments, brackets on their own line (see the sketch after this list)
2.       New code (creates lots of it) vs. production bug update (5 hours to fix 1 line)
3.       Generated files (proxies, designer) or massive code generation
4.       Copying 1000 lines from open-source online (algorithm works, so low risk, but you've added a lot of code)
5.       Complexity of code, i.e. a 20-line algorithm may be more work than 1000 lines of casual UI and data hookup.
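To make reason 1 concrete, here is a crude Python sketch of raw vs. significant line counting. The comment markers and the lone-bracket rule are simplistic assumptions for illustration; real counting tools are far more thorough.

def count_lines(source_text):
    raw = 0
    significant = 0
    for line in source_text.splitlines():
        raw += 1
        stripped = line.strip()
        if not stripped:
            continue  # whitespace-only line
        if stripped.startswith(("//", "#", "/*", "*")):
            continue  # comment-only line (naive check)
        if stripped in ("{", "}", "};"):
            continue  # bracket on its own line
        significant += 1
    return raw, significant

# "SomeFile.cs" is a placeholder path for the example.
raw, significant = count_lines(open("SomeFile.cs").read())
print(f"raw={raw}, significant={significant}")

Even this toy version shows how two people can report very different "line counts" for the same code, which is exactly the problem.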
However, if you're trying to coordinate more than 10 developers, and you have no other metric, line count still has some value because it quickly tells you something is going on (the "something is better than nothing" philosophy). You've got to look at the trends, not the absolute values.
·         It's useful to know your team's baseline: if the average developer produces 500 lines of code (LOC) per week (of course this varies from team to team), then seeing someone produce either 50 or 5000 should catch your attention. Sure, there may be a good reason, but you at least want to be aware of what that reason is. Is the guy generating 5000 massively copying and pasting code, re-inventing the wheel for quick-to-write utility code, or using a passive code generator instead of your team's ORM framework? Is the guy only doing 50 not checking anything in, waiting to surprise the team with 4 weeks of work the day before code freeze in one "glorious" check-in?
·         Line count is ubiquitous and everyone can understand it.
·         Line count is very cheap to calculate; many tools can provide this.
·         Line count is the basis for two more relevant metrics: code churn, which tells you how many lines per file are changing per changeset (and hence per developer), and code duplication (I personally love Simian for this).
·         You can also write reports splitting line count by file name to see the ratio of business, entity, data-access, unit test, UI, etc… For example, is someone checking in 1000 lines of business logic, but with zero unit tests? It's something worth investigating (see the sketch below).
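Here is a rough sketch of how you might pull both of those reports out of source control, assuming a git repository. The "git log --numstat" command is real, but the date range and the path patterns used to bucket files are assumptions for illustration; adapt them to your own repository layout.

import subprocess
from collections import defaultdict

def churn_by_file(since="4 weeks ago"):
    # Sum added + deleted lines per file across recent changesets.
    output = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = defaultdict(int)
    for line in output.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or not parts[0].isdigit():
            continue  # skip blank lines and binary files ("-" columns)
        added, deleted, path = int(parts[0]), int(parts[1]), parts[2]
        churn[path] += added + deleted
    return churn

def split_by_category(churn):
    # Bucket churned lines by a naive path convention (assumed layout).
    buckets = defaultdict(int)
    for path, lines in churn.items():
        lower = path.lower()
        if "test" in lower:
            buckets["unit tests"] += lines
        elif "ui" in lower or "views" in lower:
            buckets["UI"] += lines
        elif "data" in lower or "dal" in lower:
            buckets["data access"] += lines
        else:
            buckets["business/other"] += lines
    return buckets

churn = churn_by_file()
for path, lines in sorted(churn.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{lines:6d}  {path}")
print(dict(split_by_category(churn)))

Sorting by churn surfaces the hot files; the category split is the quick check for "1000 lines of business logic, zero unit tests".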
You cannot reduce an art like code craftsmanship to auto-generated metrics. But the metrics do offer clues to what is going on.  It's good to be aware, but never judge a developer on metrics alone.