The sooner in the development life cycle you catch a bug, the cheaper it is to fix. A simple design flaw consumes hours of coding time and more hours of testing time; it potentially gets passed into training materials, written into documentation, built on top of by other components, and deployed into production. Sometimes the original developer is gone by then, leaving the team precious little knowledge of how to change the erroneous code. I've seen many projects where a production error, something as simple as an ill-placed null reference exception, pulls the entire team into fixing it. Usually there are people shaking their heads: "If we had just spent 60 seconds writing that null check when we first developed the code." It's sad, but true.
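To make that concrete, here is a minimal sketch of the kind of 60-second guard I mean. The names are hypothetical, and Java is used for illustration (its NullPointerException is the analogue of a null reference exception):

    // Hypothetical example: the cheap, up-front null check.
    public final class Greeter {
        // Without the guard, a null name blows up at runtime -- often
        // months later, in production, in front of the whole team.
        static String greet(String name) {
            if (name == null || name.isBlank()) {
                return "Hello, guest";   // the "60-second" fix, written up front
            }
            return "Hello, " + name.trim();
        }

        public static void main(String[] args) {
            System.out.println(greet("Ada"));   // Hello, Ada
            System.out.println(greet(null));    // Hello, guest (instead of a crash)
        }
    }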
As time passes, bugs become more expensive to fix because:
The bug propagates - it gets copied around, or other code gets built on top of it.
The code becomes less flexible - the project hits a code-freeze date, or the code gets deployed (where it's much harder to change than when you're first developing it).
People lose knowledge of the erroneous code - they forget about that "legacy" code, or even leave the team.
This is why many popular industry best practices are weighted toward catching bugs early: up-front code reviews, time for proper design, unit tests, code generation, and setting up the process the right way. But a lot of development shops, perhaps because they're so eager to get code out now, punt ("we'll fix it later") and end up fixing the bugs when they're most expensive. That may be necessary when a project first starts ("There won't be a tomorrow if we don't get this code out now"), but eventually the team has to shift gears and kill bugs when they're cheapest.
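A unit test is one of the cheapest of those practices: written alongside the code, it flags the missing null check at development time, when the fix costs seconds instead of a production incident. A minimal JUnit 5 sketch, reusing the hypothetical Greeter class above:

    // Hypothetical test class; fails immediately during development if the
    // null check is missing, instead of surfacing as a production exception.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class GreeterTest {
        @Test
        void nullNameFallsBackToGuest() {
            assertEquals("Hello, guest", Greeter.greet(null));
        }

        @Test
        void normalNameIsGreeted() {
            assertEquals("Hello, Ada", Greeter.greet("Ada"));
        }
    }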