Sunday, June 8, 2008

The problem with "It's not what you know, it's who you know"

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/the_problem_with_its_not_what_you_know_its_who_you_know.htm]

I remember when job-hunting back in college, lots of business majors would tell me "It's not what you know, it's who you know." Some kids even used it as an excuse to avoid studying in order to go to parties instead ("Why waste time studying pointless knowledge when what really matters is having a strong social network?"). While there is some merit to the idea - i.e. you do want to build your network - this paradigm doesn't apply well to skilled labor that can be objectively measured, like software engineering.

 

If a job doesn't require much skill, such that there are tons of qualified candidates, then of course personally knowing the hiring manager is a competitive edge. From their perspective, if all else is equal, hiring a known acquaintance mitigates risk. But if a job does require a lot of skill, such that recruiters are actively competing to find that top talent, then they will beat a path to your door. In software engineering, if you have the knowledge, then people will want to know you. It's a two-way street: developers want to be employed, and companies want the best employees.

 

I think of it like talent in the NBA - some players just play better than others (I believe all people are equal; they just have different skills). That's why scouts are running all over the nation, constantly trying to woo the top free agents. If you're the top NBA draft pick, even if you don't know anyone yet, scouts are going to want to know you.

 

Sure, I understand that cronyism and nepotism exist, but in software engineering, such corruption would put that recruiter at a serious competitive disadvantage. Worst case, I'd expect that a corrupt manager's greed would trump their cronyism, and they'd hire the best talent. Anything else would essentially be throwing away money.

Tuesday, May 27, 2008

The new Lake County .Net Users Group

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/the_new_lake_county_net_users_group.htm]

I'm a big fan of user groups - it's great being able to meet other professional developers. That's why I'm excited about a new user group being started in the Chicagoland area: the Lake County .Net Users Group (LCNUG). It meets at the College of Lake County. Scott Seely, an author and former Microsoft employee, will be kicking it off with a presentation on Windows Workflow Foundation on June 26th. If you live in the northern Chicago suburbs, consider checking out the new LCNUG.

Sunday, May 25, 2008

Performance tips for a faster machine

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/performance_tips_for_a_faster_machine.htm]

We all want faster machines. Slow machines, especially ones that freeze up, constantly interrupt your thought process and can pull you out of the zone. It's not just the extra 20 minutes spread throughout the day; it's also all the time lost re-focusing yourself after waiting on a long process. I'm no machine performance expert, but here are some tips I've learned.

 

1. Run Defrag.

 

You can run this via the command line, which makes it easy to hook into a weekly script. This MSDN article explains: "Disk fragmentation slows the overall performance of your system. When files are fragmented, the computer must search the hard disk when the file is opened to piece it back together. The response time can be significantly longer."

defrag c:\ /v /f

 

2. Clean up your hard drive.

 

A crowded hard drive makes your machine run slower - there's just less wiggle room for the operating system. I've heard some suggest that you should keep at least 25% free. You'll need two big things for this: (A) a backup drive for offloading infrequently used files, and (B) a tool to find obsolete files. One good, free tool is CCleaner, which detects most of the common spots for dead-weight files. Another tool, SequoiaView, shows all file sizes in a treemap graph so that you can easily see which files are taking up space.

 

3. Clean out your registry

 

If you're continually installing and uninstalling programs, your registry may get bloated, causing big slowdowns. Modifying your registry is dangerous and could irreparably corrupt your entire machine, so back up your registry and machine data first. But, given the potential performance gain, it's still worth doing some easy changes. While there are several commercial registry cleaner products out there, CCleaner is free and works well - it plays it safe and only removes the obvious registry errors, clearing much of the garbage out of your registry.

 

4. Adjust your UI settings

 

Windows XP (I haven't touched Vista yet) lets you choose the balance between a "pretty UI" and a "fast UI". The idea is that pretty graphics (shading, rounded corners, transitions, etc.) take extra resources to render. If you're a developer who wants speed and doesn't care about gradient-shaded window panels, you can turn that stuff off: in "My Computer > Advanced > Performance Options", adjust for "best performance." This will make everything look like old, grey boxes - but it will be faster.

 

5. Kill or Block certain "hog" processes and system startup apps

 

Background services are a big culprit for hogging resources, because they can be running all the time. Skim through your Windows services to make sure that everything currently running (or set to start automatically) is OK. If a service doesn't sound familiar, ask your IT department if you can kill it. In addition to services, applications that automatically start up when the machine turns on can make for a slow system startup. CCleaner has an option for this as well, where you can explicitly block unwanted apps from automatically starting up.

 

6. Avoid running too many programs at once

 

This is pretty obvious. Under Task Manager, the Performance and Processes tabs show your CPU usage, commit charge, and which processes are taking the most resources.

 

7. Uninstall the programs you don't need

 

The more stuff on your machine, the slower it will run. For example, if you no longer develop with VS 2003, remove it. This is also a good reason to avoid installing all those games on your poor, overworked PC. (But if your laptop requires a certain game on it to function, that may be understandable)

 

8. Use batch scripts to turn off processes when you don't want them

Sometimes you need that heavy service running in the background, but sometimes you don't. For example, SQL Server can take a lot of resources. Consider having a batch script that starts it up and shuts it down - not just opening and closing SQL Server Management Studio, but stopping the actual service. You can use the "net" command in a batch script to start and stop services:

net start "SQL Server (SQLSERVER2005)"
net start "Distributed Transaction Coordinator"
 

net stop "SQL Server (SQLSERVER2005)"
net stop "Distributed Transaction Coordinator"

9. Startup script

I try to avoid re-booting my machine because I lose all my sessions - open windows, loaded files, running applications, etc. One thing that slightly eases the pain is having a batch script (clickable from my desktop) that re-opens all my standard stuff, like Notepad, browsers, Cmd, and Windows Explorer. I don't necessarily want this as part of my startup, but it saves me a minute to just click the batch file and have several applications re-open themselves.
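
If you prefer to keep even this in .NET, the same idea works as a tiny C# console app built around Process.Start - the apps and paths below are only placeholders for whatever you normally open:

    using System.Diagnostics;

    class ReopenWorkspace
    {
      static void Main()
      {
        // Placeholder apps/paths - substitute your own standard setup.
        Process.Start("notepad.exe");
        Process.Start("cmd.exe");
        Process.Start("explorer.exe", @"C:\Projects");
        Process.Start("http://www.google.com");   // opens the default browser
      }
    }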

 

10. Run Disk Cleanup

Sometimes your machine may run slowly because the disk is cluttered with temporary files. Consider running Disk Cleanup. This MSDN article describes more (it also mentions freeing up disk space and defragmenting).

 

Other ideas

There's always more you can do. I found these other articles to be informative reads:

 

Thursday, May 22, 2008

What makes a process good?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/what_makes_a_process_good.htm]

I've seen two different views of what "process" means; I'll refer to them as the "pro" and the "con":

  • Pro - Steps to help save time and reduce tedious and error prone tasks. Examples include: continuous integration, automated unit testing, and code generation.

  • Con - Red tape, bureaucracy, and damaging politics - something that exists so that ivory-tower folk can feel important. Examples include: wasting hours reformatting a private Word doc so it meets "standards", or manually going through all your (perfectly functional, production) code to switch the naming convention from Pascal case to camel case.

I've seen projects where someone, with good intentions, insists that "we need to improve our process", while others, also with good intentions, just cringe. The problem is that even though they're using the same words, they mean different things. Obviously any good project should avoid the bad kind and emphasize the good.

 

With this, I offer several criteria that a good process should meet:

  • As a complete package, it should make life simpler. It should solve a specific problem that the team agrees needs solving (running your tests, managing your source control, etc.).

  • It must always functionally work, from end to end. A process that produces bad output will just cause you bigger problems. It is better to have a slower process that works than a faster process that randomly fails.

  • It should have public results so that everyone can see what happened

  • It should be publicly documented, such as on a team wiki, so that others can understand it (as opposed to always bugging you with questions).

  • It should be easy to maintain (automated [perhaps with MSBuild], machine-independent, abstract out variables like pathNames to a config, etc...)

  • It should be easy to set up. A process that is a pain to install (perhaps requiring third-party components that you don't have licenses for) will eventually just be ignored.

There are probably more criteria, but a process that fulfills all of these is off to a very good start.

Wednesday, May 21, 2008

A reactive learner is also a reactive problem solver

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/a_reactive_learner_is_also_a_reactive_problem_solver.htm]

It's much easier to be reactive than proactive. A reactive person looks at what already happened and tries to make sense out of it - they're always playing catch-up. A proactive person needs to understand the rules beforehand so well that they can anticipate the possible scenarios that may happen.

 

This is why so many developers are reactive learners. For example, they'll first be given a problem or coding task, then they'll google to figure out the syntax and concepts, and usually code by trial and error until it appears to work (i.e. "coding by coincidence"). It's a very reactive approach, and it suffices for an average programmer.

 

The problem is that a reactive learner will always be a reactive problem solver, because before you can solve the problem, you need to first understand it. This means you need to learn the concepts involved. Furthermore, if you haven't learned the concepts yet, you can't know how the system will react, which means it will likely react in ways you didn't intend. For example, a developer who never learned about concurrency or scalability will be in for a big surprise when their procedures start running in production with multiple users. Such an error can seem very cryptic, especially when "it works on my machine" but randomly fails in production. Instead of designing their code proactively to deal with the problem (where such a solution would be much cheaper), they'll have to reactively figure out what happened, and then hope that it's still solvable.

 

Therefore, a good long-term approach is to not just reactively google coding questions as you come across them, but to also proactively read the actual books and language specifications themselves. That way, you know beforehand what to expect from the technology, instead of waiting for it to do something you didn't intend - when learning the solution may be too late.

Sunday, May 18, 2008

Using Linq to sort and filter entity lists

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/using_linq_to_sort_and_filter_entity_lists.htm]

One of the big new features of .Net is LINQ - Language Integrated Query. There are lots of good tutorials out there - you can see 101 examples, download a free GUI editor from the guys who wrote C# 3.0 in a Nutshell, or even just google it. As I was toying with Linq (via the very good chapters from C# 3.0 in a Nutshell), I especially enjoyed using Linq to query objects. Here are the test snippets I was running.

 

First, I created a trivial entity:

 

    public class Employee
    {
      public Employee(int age, string strFirstname)
      {
        this.Age = age;
        this.FirstName = strFirstname;
      }
      public int Age { get; set; }
      public string FirstName { get; set; }

      public override string ToString()
      {
        return this.FirstName + " (" + this.Age + ")";
      }
    }

 

Then I set up an MSTest project (perfect for stubbing out things like this), and added this initializer to always give me an Employee array:

 

    [TestInitialize]
    public void SetData()
    {
      _employees = new Employee[]
      {
        new Employee(39, "Homer"),
        new Employee(39, "Marge"),
        new Employee(7, "Lisa"),
        new Employee(9, "Bart"),
        new Employee(3, "Maggie")
      };
    }

    Employee[] _employees;

 

Now, I can write some test snippets.

 

Filter an Entity list

 

Given an array of Employees, I can filter them by business rules, such as getting all employees under 10 years old. We used to handle this (in C#) by writing a custom loop that checked each item. Now, Linq gives us a domain-specific language to handle this, usually with one line of code.

 

Note the "n => n.Age < 10)", this is where the syntax may seem new. This is where you can put your appropriate filter expression. Linq then provides a ToArray method to ensure that an array of employees are returned - as opposed to some differently type dynamically-crated object.

 

    [TestMethod]
    public void FilterEntity()
    {
      Employee[] ae = _employees.Where(n => n.Age < 10).ToArray();
      Assert.AreEqual(3, ae.Length);
    }

 

This alone is great. There's no wasted fluff. It's a single line, and every keyword and expression maps to a specific concept: "Given an array of employees, filter them where the age < 10, and then return an array of employees."

 

It gets better - the filter expression can contain your own custom method. In this case, I created a "HasBigName" method to check for custom logic (the length of the string).

 

    [TestMethod]
    public void FilterEntitySpecial()
    {
      Employee[] ae = _employees.Where(n => HasBigName(n.FirstName)).ToArray();
      Assert.AreEqual(3, ae.Length);
    }

    public bool HasBigName(string s)
    {
      if (s == null || s.Length <= 4)
        return false;
      else
        return true;
    }

 

Sort an Entity list

Linq makes it easy to sort entities by their fields. In this case, I can sort the employees by their first name.

 

    [TestMethod]
    public void SortEntity()
    {
      Employee[] ae = _employees.OrderBy(n => n.FirstName).ToArray();
      Assert.AreEqual(5, ae.Length);
      Assert.AreEqual("Bart", ae[0].FirstName);
    }
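
These operators also chain, so filtering and sorting combine naturally into a single statement. Here's a quick sketch against the same Employee array (the test name and the descending sort are just my own example):

    [TestMethod]
    public void FilterAndSortEntity()
    {
      // Kids under 10, oldest first: Bart (9), Lisa (7), Maggie (3).
      Employee[] ae = _employees.Where(n => n.Age < 10)
                                .OrderByDescending(n => n.Age)
                                .ToArray();
      Assert.AreEqual(3, ae.Length);
      Assert.AreEqual("Bart", ae[0].FirstName);
    }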

 

Conclusion

Linq is a great example of something that was possible to do before, but wasn't always practical. Developers don't like writing extra lines of plumbing code just to do mundane things like sorting and filtering, so it's a huge win to have a means to quickly handle that.

Wednesday, May 14, 2008

Beyond functionality for enterprise apps

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/beyond_functionality_for_enterprise_apps.htm]

Many developers are so pressured by demanding schedules that their goal is just to "get the job done", by which they mean "it functionally works." While this is a great first step, professional programming requires more, such as:

  • Maintainability

  • Performance

  • Scalability

  • Security

  • Testability

As you go from a hobbyist toy to an enterprise app, these criteria become self-evident. They're buzzwords that everyone has heard, but surprisingly few seem to put into practice. Interesting questions to ask yourself (or someone you need to interview):

  • Maintainability - Have you ever had to write code that will be maintained by other people? How did you code differently to make it more maintainable?

  • Performance - Have you ever had to write code that was performance critical? How did you ensure that that code was fast enough?

  • Scalability - Have you ever had to write code that was used by more than 1 million users? What might you do differently to ensure the code handled that?

  • Security - Have you ever had to write code that protected secure data and you knew someone would try hacking it? How did you make it secure?

  • Testability - How would you ensure that your code could be tested?

Of course some of these, like performance tuning, may take more time. But that goes with the territory of large-scale apps.

 

In general, I come across two kinds of programmers - those who just worry about functionally "getting it done", and those who look beyond functionality. It seems the first group devolves into boring copy-and-paste work, whereas the second is constantly on an adventure doing new, fun things. The beauty is that there's nothing that stops someone from jumping into the second group. You can read about all the techniques online, use open-source tools, and often it's even faster. For example, anyone could use NUnit or MSTest to write unit tests to help their testing, or read about refactoring or design patterns for more maintainable code.