Tuesday, June 15, 2010

BOOK: Economics for Dummies

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/book_economics_for_dummies.htm]

 

I have never met a developer who said they had enough time to "properly" finish their project. Sure, everyone starts the project with dreams of how it will be momentous - somehow the stepping stone to curing cancer and ending world hunger - but then reality sinks in and the team scrambles to make the best of their limited time.

And that's where economics, the science of how people deal with scarcity, comes in. I had to take micro and macro Econ courses back in college for my engineering degree, but back then it was just an 8:00am commitment. When I was reading the Joel on Software blog, he piqued my interest in economics again with his talk of complements and substitutes, vendor lock-in, the chicken & egg problem, etc... So, I got a copy of "Economics for Dummies". I saw it as Part II of The Complete MBA for Dummies.

Living up to the "Dummies" genre, it was an easy read. The concepts of utility, marginal revenue, return on investment (ROI), consumer surplus, diminishing returns, and supply and demand are good things for any developer to know. Much of this may seem like common sense in today's world, but a book helps you articulate ideas that feel like common sense but that you can't quite put into words.

Besides assisting with prioritizing features and taking calculated risks, it helps a technical person continually appreciate the business side of things.

 

Sunday, June 6, 2010

Dealing with an IT bureaucracy

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/dealing_with_an_it_bureaucracy.htm]

A background on bureaucracies

By definition, large companies employ a lot of people. Large companies also tend to become bureaucracies in order to manage all those employees. Therefore, lots of people end up working in a bureaucracy.

At least in my experience, bureaucracies usually:

  • Require a lot of red tape, which in turn hurts cross-team coordination
  • Split people into specific roles and teams ("separation of duties"), so they must constantly coordinate with others
  • Encourage escalation of problems instead of individuals finding immediate solutions
  • Require many people to approve a specific decision
  • Regulate most activities
  • Are very risk-averse, and hence punish risk more than they reward innovation
  • Are very big

"Bureaucracy" tends to carry a negative connotation - it conjures up living out a Dilbert comic strip. So, if bureaucracies have such a bad rap, then why would a company ever become one?

  • As the company grows, it's an understandable way to govern masses of people.
  • If a company employs micro-manager personalities, the bureaucracy is a natural consequence.
  • Big companies can't afford risk: people will sue them, hackers will attack the security of their systems, millions of users depend on their product, etc... So bureaucracies use red tape as a safety net.

To help appreciate the positive tips for dealing with a bureaucracy, let's first explore some futile approaches.

  • Whining sessions - Everyone will complain that it's a mess, but nothing will get done.
  • Emotional appeals - Bart in IT feels your pain that your 1GB workstation is slow, but he's not allowed to give you an upgraded memory chip.
  • Asking someone to do something outside their role - Ok, so maybe you get lucky and Martin, the local DBA, helps install a virtual machine, but in general you can't expect this.

Practical Tips

Besides common sense (be polite, do your homework, communicate clearly, etc...), here are 11 tips for dealing with a bureaucracy:

  1. Know what each role should do. In a bureaucracy, each person has their role, and that is all you can expect them to do. It is not an entrepreneurial start-up where everyone does everything they can to make the team succeed. While you may occasionally be surprised, you can't expect someone to go above and beyond. For example, perhaps only the legal/procurement team can purchase tools, or only the security access team can give your user account rights to the Customer database, or only Support can view production data, or only the Graphics department is allowed to create the official icons used in the application (even though you could do it yourself with your 5 years of hobbyist Photoshop skills), etc... You can't expect a fish to fly. Each person has a role, and a bureaucracy beats people into fulfilling just their specific role.
  2. Know how information is passed between departments. A major side-effect of a bureaucracy is that it takes many departments to do even simple things. This means that even the most mundane request could bounce through 5 departments like a ping-pong ball. How does the request get passed along? Is there some official ticket/issue software that tracks everything, are official emails sent, does it only happen in face-to-face meetings? For example, if nothing happens until "a ticket is opened", then learn how that ticket system works and be prepared to open tickets. Even if you go directly to your buddy in security access to help resolve something, he'll still need a ticket to track his time against. A dozen hallway conversations with the senior VP herself may have less impact than that one ticket you actually submit. You need to leverage these communication channels, else you're just screaming in the wind.
  3. Allow time for requests to percolate through the system. Because even a simple request may need five signatures from five departments, requests can move slower than molasses. Therefore, when you're designing a solution, try to determine what ticket requests you'll have as early as you can, and then submit those as soon as you can. While those tickets are dripping through the system, you can flesh out the internal details of your design. Sometimes, submitting the ticket "reserves your place in line", and you can update the ticket with more details as you find them (think of it as giving that department a "heads up"). The last thing you want to do is calculate every possible edge case, and then (two days before the deadline), submit a flood of ticket requests.
  4. Let the person causing the pain feel the natural results of it. Where possible, when someone is blocking the project due to some artificial rule, let them feel the natural consequence of that project being blocked. I.e., don't enable bad behavior. For example, if someone in sales keeps entering data the wrong way, consider not enabling them by writing an automated script to continue cleaning everything up. If you do, they'll essentially think that their current approach is working, and why would they ever change?
  5. Distinguish between roles and titles. Say the rule is "managers don't have access to source control", but you (a new manager) really need access. If your company allows one person to play multiple roles, then perhaps you can pass muster with "I'm not asking you to change the security chart and give managers access to source control; rather, I'm trying to (temporarily) contribute in the developer role, which needs access to source control."
  6. Make red tape problems known to your manager. Don't be malicious or whiney, but simply report the facts to your manager. "I can't complete the report module until procurement gets the Amazingnator rendering engine." Don't just eat it yourself and try building your own Amazingnator rendering engine over the weekend. Sure, it may make you today's hero, but the continual burden of not getting the proper resources because another department is broken will leave you bitter and exhausted.
  7. Know the people who approve the tickets - Even the biggest bureaucracy ultimately boils down to individual people. If a ticket is languishing in the Data Services department, it's beneficial to ask Marge from Data Services if she's heard anything about the ticket, and if she has any advice.
  8. Where feasible, keep skills in your own team - Because cross-team coordination is often slow in a bureaucracy, having the skills in your own team such that you don't need to go to another team can be a good ace up your sleeve. I remember a web consulting gig when .Net first came out where the manager split development into separate roles - C# guys and SQL guys. He thought this would be easier to staff ("you only need devs with one skill or the other"), and result in higher quality ("each dev is an expert in their niche"). The problem for applications at that time was that C# and SQL were so intertwined that one without the other was like trying to run with only one leg. Not coincidentally, some sub-teams secretly had their own SQL guys, and those teams flew.
  9. Stack the deck - You know that when you ask the procurement department to purchase a tool, they're going to want forms filled out - how much does it cost, what's the business justification, does your manager approve, are there alternatives, etc... Ask upfront what forms they need, and have those ready. Else, you may be "sent to the back of the line", and need to wait days (weeks!) to get your chance again.
  10. Focus on what people can do without spending money. This is related to knowing what each role does. Maybe that department can't give you what you request because it costs money (more staff, purchasing a tool, additional hardware, etc ...), but they can do helpful things that cost nothing, such as: provide "temporary" security access so you can run a test yourself, let you borrow a resource that they're not currently using (such as a VM server you can log into), switch the order of two tasks in a way that has no impact on them but simplifies your life, or offer information such as who you should unofficially talk to in order to "make it happen". Ironically, although time is money, it is often easier in a bureaucracy to get time than to get approval to spend money.
  11. Mentally prepare yourself that it is inefficient. People can psychologically deal with crap if they're in "I need to deal with crap mode". So, just set your expectations and prepare yourself that bureaucracies are slow and inefficient. If things do go well, then congratulations, but if not, at least you'll be psychologically prepared for it.

See Also:

Book: Making Things Happen, Book: Managing Humans

Wednesday, June 2, 2010

Most projects fail for non-technical reasons

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/most_projects_fail_for_nontechnical_reasons.htm]

I’ve seen projects fail, and it sucks. It sucks team morale, it sucks resources, and it sucks energy from other projects. Granted, there are degrees of failure, but generally a project is considered a failure when it is significantly over schedule, over budget, under quality, ships with too many bugs, or simply never ships at all. I wouldn’t consider a project to be a failure if afterwards you find a more optimal way, or if management throws the completed project away because of a new business direction – in that case the project itself still succeeded.

It’s well known, and well experienced by developers, that many software projects fail. In the ’80s and ’90s, insufficient technical skill often contributed to project failure – the mere act of programming was complicated, the frameworks were still young, and even finding the right syntax was challenging. Today, there are powerful frameworks, open source projects to pull from, tools to assist with almost any technical problem, Google, and years of precedent for most types of projects. Some may say that mere “coding” has become so easy that it’s as if platform companies like Microsoft are trying to make all developers dumb, or at least lower the bar so that anyone can develop.

Of course, projects can still fail today due to insufficient technical skills, but most of the time these days, they seem to fail for non-technical reasons: constantly changing requirements, poor communication among teams (because today’s complicated projects require lots of cross-team coordination), scope creep, bad estimation not allowing the team enough time to do it right, insufficient development infrastructure not allowing the dev team to actually build and deploy code, bureaucratic red-tape that prevents the team from procuring the right tools, poor team chemistry that results in internal conflicts, poor project management, lack of user input, etc…

Ironically, even if the project fails for these non-technical reasons, it still shows up on the technical folks’ desks. Ultimately, some manager or business sponsor hammers the developers with “Why couldn’t you build this?” Granted, a star technical team has a much better chance of handling rapidly changing requirements, doing more work in less time, or building its own tools and infrastructure “under the radar”.

The point is: to be a star dev, you must push through successful projects. A dev who only does “moderate” technology on a profitable project will be viewed as far more successful than a dev who does “cool” technology on a failed project. These days, because most projects fail for non-technical (i.e. “soft”) reasons, developers who want to be stars should invest in their soft skills.

Sunday, May 30, 2010

Three cautions with mocking frameworks

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/three_cautions_with_mocking_frameworks.htm]

I'm a big fan of unit testing. I think in many cases, it's faster to develop with unit tests than without.

Perhaps the biggest problem for writing unit tests is how to handle dependencies - especially in legacy code. For example, say you have a method that calls the database or file system. How do you write a unit test for such a method?

One approach is dependency injection - where you inject the dependency into the method (via some seam like a parameter, or by instantiating it from a config file). This is powerful, but it could require rewriting the code you want to test.
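
For example, here's a minimal sketch of that approach. The interface and class names (ICustomerRepository, CustomerReport, FakeCustomerRepository) are hypothetical, just to illustrate the seam:

    using System;

    //the seam: production code depends on this interface, not on the database directly
    public interface ICustomerRepository
    {
      string[] LoadCustomerNames();   //the real implementation hits the database
    }

    public class CustomerReport
    {
      private readonly ICustomerRepository _repository;

      //the dependency is injected via the constructor
      public CustomerReport(ICustomerRepository repository)
      {
        _repository = repository;
      }

      public string[] GetSortedCustomerNames()
      {
        string[] aNames = _repository.LoadCustomerNames();
        Array.Sort(aNames);   //the in-memory logic we actually want to unit test
        return aNames;
      }
    }

    //a hand-rolled fake lets a unit test run the real sorting logic with no database at all
    public class FakeCustomerRepository : ICustomerRepository
    {
      public string[] LoadCustomerNames()
      {
        return new[] { "Zoe", "Adam" };   //canned, in-memory data
      }
    }

A unit test can now construct new CustomerReport(new FakeCustomerRepository()) and exercise the sorting logic without ever touching a real database.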

Another approach is using a mocking or "isolation" framework, like TypeMock or RhinoMock. TypeMock lets you isolate an embedded method call and replace it with something else (the "mock"). For example, you could replace a database call with a mock method that simply returns an object for your test. This is powerful; it changes the rules of the game. It's great for helping a team adopt unit testing because it guarantees that they always have a way to test even the difficult code. However, as Spiderman taught us, "With great power comes great responsibility". TypeMock is like fire. A developer can do amazing things with it, but they can also burn themselves. If abused, TypeMock could:

  1. Enable developers to continue to write "spaghetti" code. You can write the most tangled, dependent code ever (with no seams), the kind of thing that would get zero test coverage, and TypeMock will rescue it. One of the key points of unit testing is that by writing testable code, you are writing fundamentally better code.
  2. Allow developers to get high test coverage by simply mocking every line. The problem is that if everything is mocked, then there's nothing real left that is actually tested.
  3. Make it harder to refactor because the method is no longer encapsulated. For example, say a spaghetti method has a call to a private database method, so the developer uses TypeMock to mock out that private call. Later, a developer refactors that code by simply changing the name of the private method (or splitting a big private method into two smaller ones). That will break the related unit tests. This is the opposite of what you want - encapsulated code means you can change the private implementation without breaking anything, and unit tests are supposed to give confidence to refactoring.

TypeMock can work magic, but it must be used properly.

Monday, May 24, 2010

Developer balance of power

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/developer_balance_of_power.htm]

You need people to get the project done, but people are inevitably error-prone. Just as government has a separation of powers, software projects can also benefit from such separation. As a general rule, for production code, the same person should not both:
  • Code and Review - The reviewer checks the code quality (it's too easy to give a free pass to, or be biased about, your own code)
  • Develop and Test - The tester checks the developer. (The dev already thinks their code works fine)
  • Build and Deploy - Having someone else deploy what the developer built encourages an easier and more objective deployment, and helps invalidate the "it works on my machine" excuse.

Sunday, May 23, 2010

Why it is faster to develop with unit tests

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/why_it_is_faster_to_developer_with_unit_tests.htm]

I keep hinting at this with various blog posts over the years, so I wanted to just come out and openly defend it.

It is faster for the average developer to develop non-dependent C# code with unit tests than without.

By non-dependent, I mean code that doesn't have external dependencies, like calls to the database, UI controls, FTP servers, and the like. The UI/functional/integration tests that cover those dependencies are tricky and expensive, and I fully empathize with why projects may punt on them. But code like algorithms, validation, and data manipulation can often be refactored into in-memory C# methods.

Let's walk through a practical example. Say you have an aspx page that collects user input, loads a bunch of data, and eventually manipulates that data with some C# method (like getting the top N items from an array):

    public static T[] SelectTopN<T>(T[] aObj, int intTop)
    {
      //guard clause: no input array
      if (aObj == null)
        return null;
      //if the array is already small enough (or the request is invalid), return it as-is
      if (aObj.Length <= intTop || aObj.Length == 0 || intTop <= 0)
        return aObj;

      //do real work: copy the first N items into a new array
      T[] aNew = new T[intTop];
      for (int i = 0; i < intTop; i++)
      {
        aNew[i] = aObj[i];
      }

      return aNew;
    }

This is the kind of low-hanging fruit, obvious method that should absolutely be tested. Yet many devs don't unit test it. Yes, it looks simple, but there's actually a lot of real code that can easily be refactored into this type of testable method (and it's usually this type of method that has some sort of "silly" error). There's a lot that could go wrong: null inputs, boundary cases for the length of the array, bad indexes on the array, mapping values to the new array. Sure, real code would be more complicated, which reinforces the need for unit testing even more.

Here's the thing - the first time the average developer writes a new method like that, they will miss something. Somehow, the dev needs to test it.

So how does the average programmer test it? By setting up the database, starting the app, and navigating 5 levels deep. Oops, missed a null check; try again. That's 3 minutes wasted. Oops, had an off-by-one in the loop; try again. 6 minutes wasted. Finally, set everything back up, test the feature, score! 15 minutes later, the dev has verified that a positive flow works. The dev is busy and under pressure, got that task done, so they move on. 4 weeks later (after the dev has forgotten everything), QA comes and says "somewhere there's a bug", and the dev spends an hour tracking it down, and it was because the dev didn't handle when the array has a length less than the "Select Top N", and the method throws an out-of-range exception. Then the dev makes the fix, hopes they didn't break anything else, and waits a day (?) for that change to be deployed to QA so a test engineer can verify it. Have mercy if that algorithm was a 200-line spaghetti-code mess ("there wasn't time to code it right"), and it's like a Rubik's Cube where every change fixes one side only to break another. Have even more mercy if the error is caught in production - not QA.

Unit testing is faster because it stubs out the context. Instead of taking 60 seconds (or 5 minutes, or an hour of hunting a month later!) to set up data and step through the app, you just jump straight to it. Because unit tests are so cheap, the dev can quickly try all the boundary conditions (beyond the few positive flows that the application normally runs when the dev is testing their code). This means that QA and Prod often don't find a new boundary condition that the dev just "didn't have time" to check for. Because unit tests are run continually throughout the day, the CI build instantly detects when someone else's change breaks the code.

Especially with MSTest or NUnit, unit test tools are effectively free. Setting up the test harness project takes 60 seconds. Even if you start with only 5% code coverage for just the public static utility methods - it's still 5% better than nothing.
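
For instance, a minimal MSTest class for the SelectTopN method above might look something like this (it assumes the method lives in a public static class, called ArrayUtilities here just for illustration):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class SelectTopNTests
    {
      [TestMethod]
      public void SelectTopN_NullInput_ReturnsNull()
      {
        //boundary case: null input
        Assert.IsNull(ArrayUtilities.SelectTopN<int>(null, 3));
      }

      [TestMethod]
      public void SelectTopN_TopLargerThanArray_ReturnsOriginalArray()
      {
        //boundary case: asking for more items than the array has
        int[] aResult = ArrayUtilities.SelectTopN(new[] { 1, 2 }, 5);
        Assert.AreEqual(2, aResult.Length);
      }

      [TestMethod]
      public void SelectTopN_NormalCase_ReturnsFirstNItems()
      {
        //positive flow: the first N items are copied over
        int[] aResult = ArrayUtilities.SelectTopN(new[] { 10, 20, 30, 40 }, 2);
        Assert.AreEqual(2, aResult.Length);
        Assert.AreEqual(10, aResult[0]);
        Assert.AreEqual(20, aResult[1]);
      }
    }

Each test runs in milliseconds, with no database and no navigating 5 levels deep into the app.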

Of course, the "trick" is to write your code such that more and more of it can be unit tested. That's why dependency injection, mocking, or even refactoring to public static helper utilities is so helpful.

Over the last 5 years, I've heard a lot of objections used to avoid testing even simple C# methods, but I don't think any of them save time:

  • Objection: You're writing more code, so it's actually slower. Rebuttal: It's not typing that takes the time, but thinking.
  • Objection: I can test it faster by just running the app. Rebuttal: What does "it" really mean? You're not testing the whole app, just a handful of common scenarios (and you're only running the app occasionally on your machine, as opposed to unit tests that run every day on a verified build server).
  • Objection: "Unit testing" is one more burden for developers to learn, which slows us down. Rebuttal: Unit tests are ultimately just a class library in the language of your choice. They're not a new tool or a new language. The only "learning curve" is that conceptually it requires you to write code that can be instantiated and run in a test method - i.e. you need to dogfood your own code, which a dev should be prepared to do anyway.
  • Objection: My code is already perfect, there are no bugs, so there is no need to write such unit tests. Rebuttal: Ok, so you know your code is perfect (for the sake of argument) - but how will you prove that to others? And how will you "protect" your perfect code from changes caused by other developers? If a dev is smart enough to write perfect code the first time, then the extra time needed to write the unit test will be trivial.
  • Objection: If I write unit tests, when I change my code, then I need to go update all my tests. Rebuttal: Having that safety net of tests for when you do change your code is one of the benefits of unit tests. Play out the scene - you change your code, 7 tests fail - that highlights what would likely break in production. Better to find out about breaking changes from your unit tests rather than from angry customers.
  • Objection: Devs will just write bad tests that don't actually add value, so it's a waste of time. Rebuttal: Any tool can be abused. Use common-sense criteria to help devs write good tests - like code coverage and checking boundary cases.
  • Objection: I just don't have time. Rebuttal: For non-dependent C# code, you don't have time not to. Here's the thing - the code has to be tested anyway. What is your faster alternative to prove that the code indeed works, and that it wasn't broken by someone else after you moved on?


Wednesday, May 19, 2010

Using SMO to automate SQL tasks

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/using_smo_to_automate_sql_tasks.htm]

I used to shell out to the console with osql, or split on all the "GO" keywords and use ADO.Net to execute non-queries.

And years ago, I guess that was fine.

But SQL Server Management Objects (SMO) are phenomenal. They just work. With a few lines in C#, you can create or kill a database, install schema scripts, enumerate the database, and much more. You also get exceptions thrown if there is an error (much easier than parsing the osql output from the command line).
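
As a rough sketch of what this looks like (assuming the SMO assemblies, e.g. Microsoft.SqlServer.Smo, are referenced; the server name, database name, and script path below are just placeholders):

    using System;
    using Microsoft.SqlServer.Management.Smo;

    class SmoDemo
    {
      static void Main()
      {
        //connect to a server (name is a placeholder)
        Server oServer = new Server(@"localhost\SQLEXPRESS");

        //create a scratch database
        Database oDb = new Database(oServer, "MyScratchDb");
        oDb.Create();

        //install a schema script - SMO deals with the "GO" batch separators,
        //and throws an exception if the script fails (no parsing of osql output)
        string strSchema = System.IO.File.ReadAllText(@"C:\Scripts\CreateTables.sql");
        oDb.ExecuteNonQuery(strSchema);

        //enumerate the database
        foreach (Table oTable in oDb.Tables)
        {
          Console.WriteLine(oTable.Name);
        }

        //kill the database when done
        oServer.KillDatabase("MyScratchDb");
      }
    }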

Good stuff.