Sunday, September 21, 2008

Getting Categories for MSTest (just like NUnit)

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/getting_categories_for_mstest_just_like_nunit.htm]

Short answer: check out: http://www.codeplex.com/testlistgenerator.

 

Long answer: I previously discussed Automating Code Coverage from the Command Line. Another way to buff up your unit tests is to add categories to them. For example, suppose you had several long-running tests (perhaps unit testing database procedures), and you wanted the ability to run them separately, using categories, from the command line.
 

The free, six-year-old, open-source NUnit has category attributes that let you easily filter which tests to run (an example appears after the list below). For a variety of reasons, MSTest - the core testing component of Microsoft's flagship development product - still does not have these after two releases (2005 and 2008). I've had the chance to ask people at MSDN events about this, and I've heard a lot of "reasons":

 

Proposals for why MSTest does not need categories, and the problem with each:
Just put all those tests in a separate DLL
  • This does not handle putting a single test into two categories. It's also often not feasible - the tests we want to categorize usually cannot be so easily isolated.
Just add the description tag to them and then sort by description in the VS GUI
  • You still cannot filter by description from the command line - it requires running in a GUI.
  • This fails because it does an exact string comparison instead of a substring search. If you have Description("cat1, cat2") and Description("cat1"), you cannot just filter by "cat1" and get both tests; it would exclude the first because "cat1" does not match "cat1, cat2".
  • Conceptually, you shouldn't need to put "codes" in a "description" field. Descriptions imply verbose alpha-numeric text, not a lookup code that you can filter by.
Just filter by any of MSTest's advanced filter GUI features - like class or namespace.
  • This requires opening the GUI, but we need to run from the command line.
  • Sometimes the tests we need to filter by span across the standard MSTest groups.
Just use MSTest test lists
  • This is only available in the more advanced editions of VS.
  • This is hard to maintain. It requires you continually update a global test file (which always becomes a maintenance problem), as opposed to applying the attributes to each test individually.
Well, you shouldn't really need categories, just run all your unit tests every time.
  • Of course your automated builds should eventually run all unit tests, but it's a big time-saver for a developer to occasionally just run a small subset of those.
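
For comparison, here's the NUnit feature that all of the above workarounds fail to replicate - a category attribute on the test itself, filterable from the command line. This is a sketch using NUnit 2.x syntax; the class and test names are illustrative:

      using NUnit.Framework;

      [TestFixture]
      public class DatabaseTests
      {
        //A single test can carry multiple [Category] attributes
        [Test, Category("Database"), Category("Slow")]
        public void InsertOrder_WritesRow()
        {
          //... long-running database test here
        }
      }

Then, to run only the database tests from the command line:

      nunit-console.exe MyTests.dll /include:Database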

 

So, if you want to easily get the equivalent of NUnit categories for your MSTest suite, check out http://www.codeplex.com/testlistgenerator.

Thursday, September 18, 2008

Automating Code Coverage from the Command Line

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/automating_code_coverage_from_the_command_line.htm]

We all know that unit tests are a good thing. One benefit of unit tests is that they enable code coverage: as each test runs, Visual Studio can track which lines of code get executed. The difficulty with code coverage is that you need to instrument your code so that line execution can be tracked, and that is non-trivial. Open-source tools (like NCover) have done this in the past. Then, starting with VS2005, Microsoft incorporated code coverage directly into Visual Studio.

 

VS's code coverage looks great in a marketing demo. But the big problem (to my knowledge) is that there's no easy way to run it from the command line. Obviously you want to incorporate coverage into your continuous build - perhaps even add a policy that requires at least x% coverage for the build to pass. This is a form of automated governance - i.e. how do you "encourage" developers to actually write unit tests? One way is to not even let the build accept code unless it has sufficient coverage. So the build fails if a unit test fails, and it also fails if the code has insufficient coverage.

 

So, how do you run code coverage from the command line? This article by joc helped a lot. Assuming that you're already familiar with MSTest and code coverage from the VS GUI, the gist is to:

  • In your VS solution, create a "*.testrunconfig" file, and specify which assemblies you want to instrument for code coverage.

  • Run MSTest from the command line (a sample call follows this list). This will create a "data.coverage" file in something like: TestResults\Me_ME2 2008-09-17 08_03_04\In\Me2\data.coverage

  • This data.coverage file is in a binary format. So, create a console app that will take this file, and automatically export it to a readable format.

    • Reference "Microsoft.VisualStudio.Coverage.Analysis.dll" from someplace like "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies" (this may only be available for certain flavors of VS)

    • Use the "CoverageInfo " and "CoverageInfoManager" classes to automatically export the results of a "*.trx" file and "data.coverage"

Now, within that console app, you can do something like so:

//using Microsoft.VisualStudio.CodeCoverage;
//using System.IO;

//ExePath and SymPath tell the coverage API where to find the instrumented
//dll/pdb - point them at the folder containing the binaries (see the
//12/19/2008 update below: data.coverage and the dll/pdb must be in the
//same directory)
string strBinFolder = Path.GetDirectoryName(strDataCoverageFile);
CoverageInfoManager.ExePath = strBinFolder;
CoverageInfoManager.SymPath = strBinFolder;
CoverageInfo cInfo = CoverageInfoManager.CreateInfoFromFile(strDataCoverageFile);
CoverageDS ds = cInfo.BuildDataSet(null);

This gives you a strongly-typed dataset, which you can then query for results, checking them against your policy. To fully see what this dataset looks like, you can also export it to xml. You can step through the namespace, class, and method data like so:


      //NAMESPACE
      foreach (CoverageDSPriv.NamespaceTableRow n in ds.NamespaceTable)
      {
        //CLASS
        foreach (CoverageDSPriv.ClassRow c in n.GetClassRows())
        {
          //METHOD
          foreach (CoverageDSPriv.MethodRow m in c.GetMethodRows())
          {
          }
        }
      }
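
As mentioned above, you can also export the dataset to xml to inspect it. Since CoverageDS is a typed DataSet, the inherited WriteXml method should do it (a one-liner, assuming a writable path):

      ds.WriteXml(@"C:\coverage.xml");  //WriteXml comes from the base System.Data.DataSet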

 

You can then have your console app check for policy at each step of the way (classes need x% coverage, methods need y% coverage, etc...). Finally, you can have the MSBuild script that calls MSTest also call this coverage console app. That allows you to add code coverage to your automated builds.
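
For instance, the MSBuild wiring could be as simple as this sketch (the target and file names are hypothetical; Exec fails the build on a non-zero exit code, which is exactly what we want for policy enforcement):

      <Target Name="TestWithCoverage">
        <!-- Run the unit tests; coverage instrumentation comes from the .testrunconfig -->
        <Exec Command="mstest /testcontainer:MyTests.dll /runconfig:LocalTestRun.testrunconfig" />
        <!-- Enforce the coverage policy; a non-zero return code fails the build -->
        <Exec Command="CodeCoverageHelper.exe TestResults CoveragePolicy.xml" />
      </Target>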

 

[UPDATE 11/5/2008]

By popular request, here is the source code for the console app: 

 

[UPDATE 12/19/2008] - I modified the code to handle the error: "Error when querying coverage data: Invalid symbol data for file {0}"

 

Basically, we needed to put the data.coverage and the dll/pdb in the same directory.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.CodeCoverage;

using System.IO;
using System.Xml;

//Helpful article: http://blogs.msdn.com/ms_joc/articles/495996.aspx
//Need to reference "Microsoft.VisualStudio.Coverage.Analysis.dll" from "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies"

namespace CodeCoverageHelper
{
  class Program
  {
    static int Main(string[] args)
    {
      if (args.Length < 2)
      {
        Console.WriteLine("ERROR: Need two parameters:");
        Console.WriteLine("  'data.coverage' file path, or root folder (does recursive search for *.coverage)");
        Console.WriteLine("  Policy xml file path");
        Console.WriteLine("  optional: display only errors ('0' [default] or '1')");
        Console.WriteLine("Examples:");
        Console.WriteLine(@"  CodeCoverageHelper.exe C:\data.coverage C:\Policy.xml");
        Console.WriteLine(@"  CodeCoverageHelper.exe C:\data.coverage C:\Policy.xml 1");

        return -1;
      }

      //If the first argument is a file, use it directly; else assume it's a
      //folder and search its subdirectories for a *.coverage file.
      string strDataCoverageFile = args[0];
      if (!File.Exists(strDataCoverageFile))
      {
        //Need to march down to something like:
        //  TestResults\TimS_TIMSTALL2 2008-09-15 13_52_28\In\TIMSTALL2\data.coverage
        Console.WriteLine("Passed in folder reference, searching for '*.coverage'");
        string[] astrFiles = Directory.GetFiles(strDataCoverageFile, "*.coverage", SearchOption.AllDirectories);
        if (astrFiles.Length == 0)
        {
          Console.WriteLine("ERROR: Could not find *.coverage file");
          return -1;
        }
        strDataCoverageFile = astrFiles[0];
      }

      string strXmlPath = args[1];

      Console.WriteLine("CoverageFile=" + strDataCoverageFile);
      Console.WriteLine("Policy Xml=" + strXmlPath);

      bool blnDisplayOnlyErrors = false;
      if (args.Length > 2)
      {
        blnDisplayOnlyErrors = (args[2] == "1");
      }

      int intReturnCode = 0;
      try
      {
        //Ensure that data.coverage and the dll/pdb are in the same directory.
        //Assume data.coverage is in a folder like:
        //  C:\Temp\ApplicationBlocks10\TestResults\TimS_TIMSTALL2 2008-12-19 14_57_01\In\TIMSTALL2
        //Assume the dll/pdb are in a folder like:
        //  C:\Temp\ApplicationBlocks10\TestResults\TimS_TIMSTALL2 2008-12-19 14_57_01\Out
        string strBinFolder = Path.GetFullPath(Path.Combine(Path.GetDirectoryName(strDataCoverageFile), @"..\..\Out"));
        if (!Directory.Exists(strBinFolder))
          throw new ApplicationException(string.Format("Could not find the bin output folder at '{0}'", strBinFolder));
        //Now copy data.coverage to ensure it exists in the output folder
        //(overwrite in case a previous run failed before cleaning up its copy).
        string strDataCoverageFile2 = Path.Combine(strBinFolder, Path.GetFileName(strDataCoverageFile));
        File.Copy(strDataCoverageFile, strDataCoverageFile2, true);

        Console.WriteLine("Bin path=" + strBinFolder);
        intReturnCode = Run(strDataCoverageFile2, strXmlPath, blnDisplayOnlyErrors);
      }
      catch (Exception ex)
      {
        Console.WriteLine("ERROR: " + ex.ToString());
        intReturnCode = -2;
      }

      Console.WriteLine("Done");
      Console.WriteLine(string.Format("ReturnCode: {0}", intReturnCode));
      return intReturnCode;
    }

    private static int Run(string strDataCoverageFile, string strXmlPath, bool blnDisplayOnlyErrors)
    {
      //Assume the data.coverage file and the dlls/pdbs are all in the same directory
      string strBinFolder = Path.GetDirectoryName(strDataCoverageFile);

      CoverageInfoManager.ExePath = strBinFolder;
      CoverageInfoManager.SymPath = strBinFolder;
      CoverageInfo myCovInfo = CoverageInfoManager.CreateInfoFromFile(strDataCoverageFile);
      CoverageDS myCovDS = myCovInfo.BuildDataSet(null);

      //Clean up the file we copied.
      File.Delete(strDataCoverageFile);

      CoveragePolicy cPolicy = CoveragePolicy.CreateFromXmlFile(strXmlPath);

      //Loop through and display results
      Console.WriteLine("Code coverage results. All measurements in Blocks, not LinesOfCode.");

      int TotalClassCount = myCovDS.Class.Count;
      int TotalMethodCount = myCovDS.Method.Count;

      Console.WriteLine();

      Console.WriteLine("Coverage Policy:");
      Console.WriteLine(string.Format("  Class min required percent: {0}%", cPolicy.MinRequiredClassCoveragePercent));
      Console.WriteLine(string.Format("  Method min required percent: {0}%", cPolicy.MinRequiredMethodCoveragePercent));

      Console.WriteLine("Covered / Not Covered / Percent Coverage");
      Console.WriteLine();

      string strTab1 = new string(' ', 2);
      string strTab2 = strTab1 + strTab1;

      int intClassFailureCount = 0;
      int intMethodFailureCount = 0;

      int Percent = 0;
      bool isValid = true;
      string strError = null;
      const string cErrorMsg = "[FAILED: TOO LOW] ";

      //NAMESPACE
      foreach (CoverageDSPriv.NamespaceTableRow n in myCovDS.NamespaceTable)
      {
        Console.WriteLine(string.Format("Namespace: {0}: {1} / {2} / {3}%",
          n.NamespaceName, n.BlocksCovered, n.BlocksNotCovered, GetPercentCoverage(n.BlocksCovered, n.BlocksNotCovered)));

        //CLASS
        foreach (CoverageDSPriv.ClassRow c in n.GetClassRows())
        {
          Percent = GetPercentCoverage(c.BlocksCovered, c.BlocksNotCovered);
          isValid = IsValidPolicy(Percent, cPolicy.MinRequiredClassCoveragePercent);
          strError = null;
          if (!isValid)
          {
            strError = cErrorMsg;
            intClassFailureCount++;
          }

          if (ShouldDisplay(blnDisplayOnlyErrors, isValid))
          {
            Console.WriteLine(string.Format(strTab1 + "{4}Class: {0}: {1} / {2} / {3}%",
              c.ClassName, c.BlocksCovered, c.BlocksNotCovered, Percent, strError));
          }

          //METHOD
          foreach (CoverageDSPriv.MethodRow m in c.GetMethodRows())
          {
            Percent = GetPercentCoverage(m.BlocksCovered, m.BlocksNotCovered);
            isValid = IsValidPolicy(Percent, cPolicy.MinRequiredMethodCoveragePercent);
            strError = null;
            if (!isValid)
            {
              strError = cErrorMsg;
              intMethodFailureCount++;
            }

            string strMethodName = m.MethodFullName;
            if (blnDisplayOnlyErrors)
            {
              //Need to print the full method name so we have full context
              strMethodName = c.ClassName + "." + strMethodName;
            }

            if (ShouldDisplay(blnDisplayOnlyErrors, isValid))
            {
              Console.WriteLine(string.Format(strTab2 + "{4}Method: {0}: {1} / {2} / {3}%",
                strMethodName, m.BlocksCovered, m.BlocksNotCovered, Percent, strError));
            }
          }
        }
      }

      Console.WriteLine();

      //Summary results
      Console.WriteLine(string.Format("Total Namespaces: {0}", myCovDS.NamespaceTable.Count));
      Console.WriteLine(string.Format("Total Classes: {0}", TotalClassCount));
      Console.WriteLine(string.Format("Total Methods: {0}", TotalMethodCount));
      Console.WriteLine();

      int intReturnCode = 0;
      if (intClassFailureCount > 0)
      {
        Console.WriteLine(string.Format("Failed classes: {0} / {1}", intClassFailureCount, TotalClassCount));
        intReturnCode = 1;
      }
      if (intMethodFailureCount > 0)
      {
        Console.WriteLine(string.Format("Failed methods: {0} / {1}", intMethodFailureCount, TotalMethodCount));
        intReturnCode = 1;
      }

      return intReturnCode;
    }

    private static bool ShouldDisplay(bool blnDisplayOnlyErrors, bool isValid)
    {
      //Failures are always displayed; passing rows only when not in errors-only mode
      if (!isValid)
        return true;
      return !blnDisplayOnlyErrors;
    }

    private static bool IsValidPolicy(int ActualPercent, int ExpectedPercent)
    {
      return (ActualPercent >= ExpectedPercent);
    }

    private static int GetPercentCoverage(uint uintCovered, uint uintNotCovered)
    {
      uint uintTotal = uintCovered + uintNotCovered;
      //Guard against a divide-by-zero: Convert.ToInt32(double.NaN) throws
      if (uintTotal == 0)
        return 0;
      return Convert.ToInt32(100.0 * (double)uintCovered / (double)uintTotal);
    }
  }
}
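
Note: the listing above references a small CoveragePolicy class that isn't shown (it was part of the downloadable source). Here's a minimal sketch reconstructed purely from its usage above - the xml format below is my assumption, not necessarily the original's:

using System.Xml;

namespace CodeCoverageHelper
{
  //Hypothetical reconstruction. Assumed policy file format:
  //  <CoveragePolicy classPercent="70" methodPercent="60" />
  public class CoveragePolicy
  {
    public int MinRequiredClassCoveragePercent { get; private set; }
    public int MinRequiredMethodCoveragePercent { get; private set; }

    public static CoveragePolicy CreateFromXmlFile(string strXmlPath)
    {
      XmlDocument xdoc = new XmlDocument();
      xdoc.Load(strXmlPath);
      CoveragePolicy cPolicy = new CoveragePolicy();
      cPolicy.MinRequiredClassCoveragePercent = int.Parse(xdoc.DocumentElement.GetAttribute("classPercent"));
      cPolicy.MinRequiredMethodCoveragePercent = int.Parse(xdoc.DocumentElement.GetAttribute("methodPercent"));
      return cPolicy;
    }
  }
}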

 

Tuesday, September 16, 2008

Next LCNUG on Sept 25 about ASP.NET Dynamic Data

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/next_lcnug_on_sept_25_about_aspnet_dynamic_data.htm]

The next LCNUG is coming up (located at the College of Lake County, IL). Josh Heyse will present on ASP.NET Dynamic Data on September 25, 2008, at 6:30 PM.

 

This is a great place to meet other developers and learn from the community.

Monday, September 15, 2008

The rewards from struggling to learn something

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/the_rewards_from_struggling_to_learn_something.htm]

In the movie Kingdom of Heaven, a knight, Godfrey de Ibelin, is instructing his son, Balian, about a sacred oath. At the very end, Godfrey slaps his son in the face and says "and that is so you will remember it [the oath]." The thinking is that people remember hardships. We remember when we've been wronged, certainly when someone slaps us in the face, and when we struggle to figure something out. The same thing applies to overcoming technical problems. When the computer "slaps you" in the face, you remember it. Maybe it's spending 5 hours for one line of code, redoing a huge component, or just throwing up your hands in despair, only to be rescued by an explanation from another developer. There's something about the struggle that really makes a lesson sink in.

 

Sure, there's no benefit to making something unnecessarily complex. Sometimes a technology is too new and unstable, and it's not worth the struggle yet. However, most work will always require you to push through. Here are several benefits of pushing through the struggle to learn something:

  • It teaches you to be the one who breaks new ground, especially for new or unsolved problems. As every new wave of technology comes out, all the cutting-edge developers (the MVPs, authors, speakers, top bloggers, architects, etc...) are working overtime to digest it, expand their brains, and work through the kinks. It is a constant struggle.

  • It makes you truly understand the topic. If you just passively ask, you'll forget. If you actively figure it out yourself, re-deriving the algorithm or writing the code, then you'll get it.

  • It makes you appreciate how hard it is for those "experts" to have the answers. Sure, some people are just smart, and tough concepts come naturally to them. But most of these people are burning the midnight oil. The only reason your local star can explain off the top of her head why you'll have a multi-threading error in your instance method is that she probably spent hours (when no one was watching) suffering through multi-threading bugs herself. The expert may be able to answer your question in 10 seconds - but only because they spent hours studying it beforehand. The coworker who goes around trivializing others' work ("Oh, that's just scaling out the database to multiple servers") is usually the one who's never had to do it themselves. They've seen others do it, so they know it's possible - but they think it's just another standard task.

  • It spares you from wasting other people's time. Someone who always just asks other people the moment an issue comes up is wasting their co-workers' time. If you haven't spent at least one minute googling it, you don't deserve to know it. Sure, you can confirm your findings with others, but at least take a stab yourself. It reminds me of a computer science class where a student posted to the forums asking "will 'int i = null' compile?" Obviously, they could have just figured that out for themselves by typing it in the IDE and trying - and it would have been a lot faster too.

  • It encourages you to extend grace to others. Developer estimates are notoriously inaccurate. Often this is because developers come across an unexpected problem. If you're that developer, and you get burned yourself, then you understand what it's like when it happens to others, and you can be empathetic in a way that builds the team chemistry.

  • It gives you experience with which to make your own code simpler for others. If you lost half your day tracking down a case-sensitivity error in string handling, you'll be sure to write your own code to be case-insensitive where applicable. Suffering at the hands of a poorly written component can guide you on what to do better.

  • It's inevitable, so get used to it. The goal of a developer is to solve technical problems, and problems often entail a struggle.

Think of it like you're training for a marathon. Every struggle is a training session that builds up your perseverance, your endurance, and your respect for your coworkers.

 

I realize that in today's "give it to me now" world, the whole point is to avoid struggling. And of course there's no benefit to struggling just for the sake of struggling. However, nothing in life - when you're actually the one doing it - is ever just simple. People who do practical house or yard work see this all the time. "It's just tearing down the wallpaper" becomes an eight-hour ordeal as you find yourself needing to steam the paper, tear it off shred by shred, scrape off the adhesive, and plaster in the gashes.

 

There's an old saying: "Anything worth having is worth fighting for." Without the guts to push through problem areas, you'll find yourself surrendering to every problem - and because the whole point of development is to constantly solve problems, that will forever block you from getting very far.

Monday, September 8, 2008

Code Generation templates act as your Documentation

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/code_generation_templates_act_as_your_documentation.htm]

On every project, there's an endless list of things you're "supposed" to do, like documentation. I've personally never met a developer who enjoys this, so it usually gets shuffled to the side. One way to simplify documentation is to code generate what you can, because then the code generation template becomes your documentation.

Here's an example: I once was on a project where we had a large amount of base SQL data - things like populating dropdowns, security info, and config settings. Initially this was all done with hard-coded SQL files using tons of insert and update statements. Of course, to explain how to write all those inserts and updates, which tables they should alter, and what values they should set, there were tons of tedious MS Word docs. Developers would constantly write failing SQL scripts, then recheck the documentation, fix their scripts, and keep trying until it worked. People always had questions like "Should the 'CodeId' column be set to the primary key from the 'RoleLookup' table?" Technically, it "worked" - it was just slow, brittle, and costing the business money.

Then we migrated all those scripts over to xml files that got actively code generated into SQL scripts. All of a sudden, in order to actually write those templates, we needed to understand what the rules were. What tables, columns, and rows should be affected? What does an attribute like "active=true" translate into for a SQL script? This forced us to flesh out all the rules. As time went on, when developers had questions, they could see all the rules "documented" in the code of the generation template (the *.cst file in CodeSmith). People essentially would say: "What happens if I set the entityScope attribute to zero? Oh, the code generator template specifically checks for that and will create this special insert in TableX when that happens. Ah, I see..."
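To make that concrete, here's a minimal sketch of the idea - this is C# rather than the actual CodeSmith *.cst template we used, and the xml format is invented for illustration:

      using System;
      using System.Xml.Linq;

      class SqlGenSketch
      {
        static void Main()
        {
          //Hypothetical base-data xml; the real project had files like this
          XDocument xdoc = XDocument.Parse(
            "<lookups><lookup table='RoleLookup' code='Admin' active='true'/></lookups>");
          foreach (XElement x in xdoc.Descendants("lookup"))
          {
            //The rule "active=true becomes IsActive=1" lives here, in one
            //reviewable place, instead of in a Word doc.
            Console.WriteLine("INSERT INTO {0} (Code, IsActive) VALUES ('{1}', {2})",
              (string)x.Attribute("table"),
              (string)x.Attribute("code"),
              ((string)x.Attribute("active") == "true") ? 1 : 0);
          }
        }
      }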

Of course there are many other benefits of code generation, but I think this is an often overlooked one: generating the code from an xml file forces you to understand what you're actually doing. If you need to "explain" how to write code to a code generator, then anyone else can look at that explanation too. This is a similar principle to writing self-documenting code.

This also hit home after I wrote a UI page code generator that was 3000 lines long. I realized that in order for a developer to create a UI page for that project, they essentially needed to (manually) reproduce whatever that template was doing, which meant they had to understand 3000 lines of trivia. Anyone with questions on producing basic UI pages could always "look up" how the code generator did it.

Sunday, September 7, 2008

Chicago - Codeapalooza was great

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/chicago__codeapalooza_was_great.htm]

I had the good fortune to attend the Chicago Codeapalooza this last Saturday. It was an all-around success - about 30 speakers spread across five comprehensive tracks, hundreds of developers, vendor booths, good community interaction, and more. I'm proud that Paylocity could be a sponsor. And the event was free.

I've always thought that the big thing about user groups isn't necessarily the tracks (which were good), but meeting other developers in the community. The talent level is always humbling. Several companies were actively recruiting - it's much better to meet a recruiter in person at a professional event than as one of a thousand random resumes online.

Our demanding field requires continual education, and free, local, interactive events like this are one of the best ways to break the ice, followed up with your own personal study.

Tuesday, September 2, 2008

Collecting machine information using WMI

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/collecting_machine_information_using_wmi.htm]

Like most developers, I want a fast machine. And like most developers, despite my best efforts, I inevitably find myself coding on a machine that isn't quite fast "enough". One of the ways I'm trying to track this down is by making my own simple performance benchmark tool (if you know of a tool that works great for you, please feel free to suggest it). And as long as such a tool is running benchmarks, I'd like it to collect relevant machine information like memory, disk space, etc.

 

Thanks to Windows Management Instrumentation (WMI), this actually ended up being easier than I thought. There are a lot of different ways to call the API; here's one example (I saw this template somewhere else online; I cannot remember where):

 

      //Add namespace, in assembly "System.Management"

      //using System.Management;

      //Put this code in your method (the snippet assumes the enclosing
      //method returns object):

      ManagementObjectSearcher mos =
        new ManagementObjectSearcher("WMI_query");

      foreach (ManagementObject mob in mos.Get())
      {
        foreach (PropertyData pyd in mob.Properties)
        {
          if (pyd.Name == "WMI_Property_Name")
            return pyd.Value;  //May need to convert type
        }
      }

 

where "WMI_query" is your select query, and "WMI_Property_Name" is the name of the property you want to return. The gist is that you can write WMI queries (a simple string), much like SQL queries, to get machine information. Some sample queries could be:

  • "SELECT Free_Physical_Memory FROM Win32_OperatingSystem"

  • "SELECT Total_Physical_Memory FROM Win32_ComputerSystem"

  • "SELECT Free_Space,Size,Name from Win32_LogicalDisk where DriveType=3"

  • "SELECT Name FROM Win32_Processor" //I've seen this occasionally give problems, someone suggested I try the advice from here instead.

Here's a quick tutorial.

Here's the official MSDN reference for WMI queries. There are hundreds of things to query on, so a lot of it is knowing which query to write.

 

Note that these queries can be run from other scripting languages, like VBScript, so it's not just a C# thing. I'd expect this to be very popular with most IT departments.

 

It's also useful if you want to make a quick tool for your build servers that detects free disk space, has a scheduled task kick it off every night, and logs the results to a database. If you have a distributed build system (i.e. several servers are required for all your different builds), then it's a nice-to-have.