Wednesday, October 8, 2008

Having Console apps write out Xml so you can parse their output

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/having_console_apps_write_out_xml_so_you_can_parse_their_out.htm]

Say you have an automatic process calling some executable, and you want to get information back from that executable to use in your own code. Ideally you'd be able to call a class library that you could just reference, but what if no such class library is available (it's a different development platform, like Java vs. C#, or it's a black-box third-party component)? Is there any way to salvage the situation?

 

Some developers would stream out the console output to a string, and then write a bunch of ugly parse code (even with regular expressions), and get their results that way. For example, let's say you use Subversion for your source control, and you want to search the history log. For the sake of argument, assume there's no friendly C# API or web service call (it's a local desktop app you're calling), and you need to parse command output. (If anyone knows of such an API, please feel free to suggest it in the comments.) You can run a command like "svn log C:\Projects\MyProject", and it will give you back console output like so:

------------------------------------------------------------------------
r3004 | username1 | 2008-09-26 10:47:19 -0500 (Fri, 26 Sep 2008) | 2 lines

Solved world hunger
------------------------------------------------------------------------
r3000 | username1 | 2008-09-06 14:10:56 -0500 (Sat, 06 Sep 2008) | 2 lines

Invented nuclear fusion
 

Ok, you can parse all the information there, but it's very error-prone. A much better approach is for the console app to provide an option to write its output as a single XML string. For example, you can specify the "--xml" parameter in SVN to do just that:


<?xml version="1.0"?>
<log>
  <logentry
       revision="3004">
    <author>username1</author>
    <date>2008-09-26 10:47:19 -0500</date>
    <msg>
      Solved world hunger
    </msg>
  </logentry>
  <logentry
       revision="3000">
    <author>username1</author>
    <date>2008-09-06 14:10:56 -0500</date>
    <msg>
      Invented nuclear fusion
    </msg>
  </logentry>
</log>

Obviously, that's much easier for a calling app to parse. If you need to bridge a cross-platform divide where you can't just reference the backend class libraries, outputting XML can be an easy fix.
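For example, a calling app could capture that XML and parse it with the standard .NET XML APIs. Here's a minimal sketch - it assumes svn.exe is on the PATH and that C:\Projects\MyProject is a valid working copy:

```csharp
using System;
using System.Diagnostics;
using System.Xml;

class SvnLogReader
{
  static void Main()
  {
    //Launch svn and capture its console output as a string.
    ProcessStartInfo psi = new ProcessStartInfo("svn", @"log C:\Projects\MyProject --xml");
    psi.RedirectStandardOutput = true;
    psi.UseShellExecute = false;
    Process p = Process.Start(psi);
    string strXml = p.StandardOutput.ReadToEnd();
    p.WaitForExit();

    //Parse the XML instead of regex-scraping the plain-text log.
    XmlDocument xDoc = new XmlDocument();
    xDoc.LoadXml(strXml);
    foreach (XmlNode xEntry in xDoc.SelectNodes("//logentry"))
    {
      Console.WriteLine("r{0} by {1}: {2}",
        xEntry.Attributes["revision"].Value,
        xEntry.SelectSingleNode("author").InnerText,
        xEntry.SelectSingleNode("msg").InnerText.Trim());
    }
  }
}
```

Each logentry node maps cleanly to a revision - no brittle line-by-line parsing needed.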

Monday, September 29, 2008

Death by a thousand cuts

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/death_by_a_thousand_cuts.htm]

A single paper cut won't kill you, but a thousand of them probably will. That's what people mean by "death by a thousand cuts". This is how many software projects die - it's usually a hundred little things that pile up, and whose combined pain becomes unbearable. It's the runaway bug list, the brittle test script that gets abandoned because it's too hard to maintain, the endless tedious validation on a details page, the component with the fuzzy interface, the buggy deployment, all the little features that just don't work right, etc...

 

This is why continuous integration, unit tests, automation, proactive team communication, code generation, good architecture, etc... are so important. These are the techniques and tools to prevent all those little cuts. I think that many of the current hot methodologies and tools are designed to add value not by doing the impossible, but by making the routine common tasks "just work", so that they no longer cause developers pain or stress.

 

Sure, you can write a single list-detail page, any developer can "get it done". But what about writing 100 of them, and then continually changing them with client demands, and doing it in half the scheduled time? Sure, you can technically write a web real-time strategy game in JavaScript, but it's infinitely easier to write it with Silverlight. Sure, you can manually regression test the entire app, but it's so much quicker to have sufficient unit tests to first check all the error-prone backend logic.

 

The whole argument shifts from "is it possible" to "is it practical?" Then, hopefully your project won't suffer death from a thousand cuts.

Sunday, September 28, 2008

Real life: How do you test a sump pump?

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/real_life_how_do_you_test_a_sump_pump.htm]

Back when we moved into our home (a while ago), this was our sump pump.

As home-owners know, sump pumps are very important because without one, your basement gets flooded. That can cause tens of thousands of dollars of damage, as well as the potential loss of any personal items in your basement - for example, all your electronic equipment or books could get ruined by water damage.

 

In other words, it's a big deal, i.e. an important "feature" of your house. So the natural question as a new home-owner is "how do I ensure that this mission-critical feature actually works?" Obviously I didn't want to wait for a real thunderstorm and power outage to find out if everything would be ok. While I had heard the buzzwords ("make sure your sump pump works and you have a battery backup in case of a power outage"), and I had been on previous "teams" (i.e. my parents' house, growing up as a kid) that had successfully solved this problem, when push came to shove, I was certainly no expert. For all I knew, this could be the best sump pump in the world, or a worthless piece of junk. However, I knew the previous owners of the house, they were great, so I assumed that the sump pump was great too, and everything would be okay.

 

Eventually, I figured out how to test it by contacting some "domain experts" (the previous house owners), who explained the different components to me. I then "mocked out" a power outage by simply unplugging the power plug (as opposed to waiting for a real power outage, or even turning off my house power). I then lifted the float to simulate water rising, and listened for a running motor. I checked a couple other things, and became confident that the feature did indeed "work as designed". I was told that it was actually a very good setup because it had a separate battery charger, and two separate pipes out, so they were completely parallel systems (kudos to the previous owners).

 

The number of analogies between me needing to test my sump pump and a user needing to test a critical feature of a software project is staggering. It's a reminder of how real-life experiences help one understand software engineering.

 

Thursday, September 25, 2008

Writing non-thread-safe code

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/writing_nonthreadsafe_code.htm]

Multithreading is hard, and I'm certainly no expert at it. I like this post about Measuring how difficult a Threading bug is. Most application developers hover around phase 0 (totally reproducible) to phase 2 (solvable with locks). Application developers constantly hear about how some piece of code isn't "thread safe". What exactly would that look like? How could you deterministically demonstrate non-thread-safe code - say, write a unit test that fails against your non-thread-safe object?

 

The trick is to run a unit test that opens up multiple threads, and then runs them in a loop 10,000 times to force the issue. The following code does something like that. Say we have a simple "Account" object with Credit and Debit instance methods, both sharing the same state (the _intBalance field). If you run the unit test "Threads_2_5", it opens up 2 threads, calling Credit in one and Debit in the other. Because Credit and Debit should cancel each other out, and they're called the same number of times, the final result should remain zero. But it doesn't - the test will fail (if it doesn't fail, increase the number of iterations to force more contention).

 

So, we have a reproducible multi-threading failure in our unit test. The Account object is not thread safe. However, we can apply the C# lock keyword to put a lock on the methods in question, which at least for this simple case, fixes the problem. I've shown how to apply the lock keyword in the commented-out lines of the Account object. Two observations:

  1. Without the C# lock keyword, you get essentially random errors as contention (thread count or loops) increases.

  2. Adding the lock keyword prevents the errors, but the code runs much slower. This means it's also possible to abuse the lock keyword - perhaps adding it in places you don't need it, such that you get no benefit, but now your code runs slower. Tricky tricky.

Here's the code:

 

    using System;
    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public class Account
    {
      //private Object thisLock = new Object();

      private int _intBalance = 0;

      public void Credit()
      {
        //lock (thisLock)
          _intBalance++;
      }

      public void Debit()
      {
        //lock (thisLock)
          _intBalance--;
      }

      public int CurrentValue
      {
        get
        {
          return _intBalance;
        }
      }
    }

    [TestClass]
    public class AccountThreadTests
    {
      private Account _account = null;
      private int _intMaxLoops = -1;

      private void RunThreads2(int intLoopMagnitude)
      {
        _intMaxLoops = Convert.ToInt32(Math.Pow(10, intLoopMagnitude));
        _account = new Account();

        Assert.AreEqual(0, _account.CurrentValue);

        Thread t1 = new Thread(new ThreadStart(Run_Credit));
        Thread t2 = new Thread(new ThreadStart(Run_Debit));

        t1.Start();
        t2.Start();

        t1.Join();
        t2.Join();

        //We did an equal number of credits and debits --> should still be zero.
        Assert.AreEqual(0, _account.CurrentValue);
      }

      private void Run_Credit()
      {
        for (int i = 0; i < _intMaxLoops; i++)
        {
          _account.Credit();
        }
      }

      private void Run_Debit()
      {
        for (int i = 0; i < _intMaxLoops; i++)
        {
          _account.Debit();
        }
      }

      [TestMethod]
      public void Threads_2_5()
      {
        RunThreads2(5);
      }
    }

 

Someone who knows multi-threading much better than I do (and who hasn't yet started their own blog despite all my pleading) explained it well here:

With multithreading, it’s important to understand what’s happening at the assembly level.
For example, a statement like “count++” is actually “count = count + 1”. In assembly, this becomes something like:

1: mov eax, [esp + 10] ; Load ‘count’ into a register
2: add eax, 1          ; Increment the register
3: mov [esp + 10], eax ; Store the register in ‘count’

With multithreading, both threads could run through this code at different rates. For example, start with ‘count = 7’. Thread ‘A’ could have executed (1), loading ‘7’ into ‘eax’. Thread ‘B’ then executes (1), (2), (3), also loading ‘7’ into ‘eax’, incrementing, and storing ‘count = 8’. It then executes all three steps 3 more times, setting ‘count = 11’. Thread ‘A’ finally starts running again, but it is out of sync! It increments its stale register and stores ‘count = 8’, wiping out thread ‘B’’s updates.


Using locks would prevent multiple threads from executing this code at the same time. So thread ‘A’ would make ‘count = 8’, thread ‘B’ would make ‘count = 9’, etc.
 

The problem with locks is what happens if you ever need multiple locks. If thread ‘A’ grabs lock (1) and thread ‘B’ grabs lock (2), then neither of them will ever be able to grab both locks - a deadlock. In SQL, this throws an exception on one of the threads, forcing it to release its lock and try over. SQL can do this because the database uses transactions to modify the data. If anything goes wrong, the transaction rolls the database back to the previous values. Normal C# code can’t do this because there are no transactions. Modifying the data is immediate, and there is no undo. ;-)
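To make that two-lock scenario concrete, here's a minimal C# sketch of my own (a contrived example, not from the quoted explanation) that deadlocks by acquiring the same two locks in opposite orders:

```csharp
using System;
using System.Threading;

class DeadlockDemo
{
  static object _lock1 = new object();
  static object _lock2 = new object();

  static void Main()
  {
    //Each thread grabs one lock, then waits forever for the other.
    Thread a = new Thread(delegate()
    {
      lock (_lock1) { Thread.Sleep(100); lock (_lock2) { } }
    });
    Thread b = new Thread(delegate()
    {
      lock (_lock2) { Thread.Sleep(100); lock (_lock1) { } }
    });
    a.Start();
    b.Start();
    a.Join(); //hangs: 'a' holds lock1 and waits on lock2,
              //while 'b' holds lock2 and waits on lock1
    Console.WriteLine("Never reached");
  }
}
```

The standard prevention is to always acquire locks in the same global order - then no circular wait can form.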

Obviously there's a world more to multi-threading, but everyone has got to start somewhere.
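One footnote of my own: for a simple counter like the Account example, .NET also offers the Interlocked class, which performs the read-increment-write as a single atomic operation without an explicit lock. This is just an alternative sketch - the fix demonstrated above is the lock keyword:

```csharp
using System.Threading;

public class InterlockedAccount
{
  private int _intBalance = 0;

  public void Credit()
  {
    //Atomic read-modify-write: the three assembly steps happen as one unit.
    Interlocked.Increment(ref _intBalance);
  }

  public void Debit()
  {
    Interlocked.Decrement(ref _intBalance);
  }

  public int CurrentValue
  {
    get { return _intBalance; }
  }
}
```

Swap this class into the unit test above and it should pass even at high iteration counts, typically faster than the lock version.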

Sunday, September 21, 2008

Getting Categories for MSTest (just like NUnit)

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/getting_categories_for_mstest_just_like_nunit.htm]

Short answer: check out: http://www.codeplex.com/testlistgenerator.

 

Long answer: I previously discussed Automating Code Coverage from the Command Line. Another way to buff up your unit tests is to add categories to them. For example, suppose you had several long-running tests (perhaps unit testing database procedures), and you wanted the ability to run them separately, by category, from the command line.
 

The free, six-year-old, open-source NUnit has category attributes, which you can easily filter tests by. For a variety of reasons, MSTest - the core testing component of Microsoft's flagship development product - still does not have these after two releases (2005 and 2008). I've had the chance to ask people at MSDN events about this, and I've heard a lot of "reasons":

 

Here are the proposals I've heard for why MSTest doesn't need categories, along with the problems with each:
Just put all those tests in a separate DLL
  • This does not handle putting a single test into two categories. Also, it's often not feasible - the tests we want to categorize cannot be so easily isolated.
Just add the description tag to them and then sort by description in the VS GUI
  • You still cannot filter by description from the command line - it requires running in a GUI.
  • This fails because it does an exact string comparison instead of a substring search. If you have Description("cat1, cat2") and Description("cat1"), you cannot just filter by "cat1" and get both tests - it would exclude the first because "cat1" does not match "cat1, cat2".
  • Conceptually, you shouldn't need to put "codes" in a "description" field. Descriptions imply verbose alpha-numeric text, not a lookup code that you can filter by.
Just filter by any of MSTest's advanced filter GUI features - like class or namespace.
  • This requires opening the GUI, but we need to run from the command line.
  • Sometimes the tests we need to filter by span across the standard MSTest groups.
Just use MSTest test lists
  • This is only available for the more advanced versions of VS.
  • This is hard to maintain. It requires you continually update a global test file (which always becomes a maintenance problem), as opposed to applying the attributes to each test individually.
Well, you shouldn't really need categories, just run all your unit tests every time.
  • Of course your automated builds should eventually run all unit tests, but it's a big time-saver for a developer to occasionally just run a small subset of those.

 

So, if you want to easily get the equivalent of NUnit categories for your MSTest suite, check out http://www.codeplex.com/testlistgenerator.
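For reference, here's what categories look like in NUnit (standard NUnit 2.x attributes; the /include flag belongs to the nunit-console runner of that era):

```csharp
using NUnit.Framework;

[TestFixture]
public class DatabaseTests
{
  //One test can carry multiple categories - exactly the scenario
  //the "separate DLL" proposal above cannot handle.
  [Test]
  [Category("LongRunning")]
  [Category("Database")]
  public void StoredProc_ReturnsExpectedRows()
  {
    Assert.AreEqual(2, 1 + 1);
  }
}
```

Then from the command line: nunit-console MyTests.dll /include:LongRunning runs just the long-running tests.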

Thursday, September 18, 2008

Automating Code Coverage from the Command Line

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/automating_code_coverage_from_the_command_line.htm]

We all know that unit tests are a good thing. One benefit of unit tests is that they enable code coverage: as each test runs the code, Visual Studio can keep track of which lines got run. The difficulty with code coverage is that you need to instrument your code such that you can track which lines are run. This is non-trivial. There have been open-source tools in the past to do this (like NCover). Then, starting with VS2005, Microsoft incorporated code coverage directly into Visual Studio.

 

VS's code coverage looks great for a marketing demo. But the big problem (to my knowledge) is that there's no easy way to run it from the command line. Obviously you want to incorporate coverage into your continuous build - perhaps even add a policy that requires at least x% coverage in order for the build to pass. This is a form of automated governance - i.e. one way to "encourage" developers to actually write unit tests is to not even let the build accept code unless it has sufficient coverage. So the build fails if a unit test fails, and it also fails if the code has insufficient coverage.

 

So, how do you run Code Coverage from the command line? This article by joc helped a lot. Assuming that you're already familiar with MSTest and Code Coverage from the VS GUI, the gist is to:

  • In your VS solution, create a "*.testrunconfig" file, and specify which assemblies you want to instrument for code coverage.

  • Run MSTest from the command line. This will create a "data.coverage" file in something like: TestResults\Me_ME2 2008-09-17 08_03_04\In\Me2\data.coverage

  • This data.coverage file is in a binary format. So, create a console app that will take this file, and automatically export it to a readable format.

    • Reference "Microsoft.VisualStudio.Coverage.Analysis.dll" from someplace like "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies" (this may only be available for certain flavors of VS)

    • Use the "CoverageInfo" and "CoverageInfoManager" classes to automatically export the results of a "*.trx" file and "data.coverage"

Now, within that console app, you can do something like so:

//using Microsoft.VisualStudio.CodeCoverage;

CoverageInfoManager.ExePath = strDataCoverageFile;
CoverageInfoManager.SymPath = strDataCoverageFile;
CoverageInfo cInfo = CoverageInfoManager.CreateInfoFromFile(strDataCoverageFile);
CoverageDS ds = cInfo.BuildDataSet(null);

This gives you a strongly-typed dataset, which you can then query for results, checking them against your policy. To fully see what this dataset looks like, you can also export it to xml. You can step through the namespace, class, and method data like so:


      //NAMESPACE
      foreach (CoverageDSPriv.NamespaceTableRow n in ds.NamespaceTable)
      {
        //CLASS
        foreach (CoverageDSPriv.ClassRow c in n.GetClassRows())
        {
          //METHOD
          foreach (CoverageDSPriv.MethodRow m in c.GetMethodRows())
          {
          }
        }
      }

 

You can then have your console app check for policy at each step of the way (classes need x% coverage, methods need y% coverage, etc...). Finally, you can have the MSBuild script that calls MSTest also call this coverage console app. That allows you to add code coverage to your automated builds.
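Such an MSBuild hook might look something like this sketch (the target name, test container, and file paths here are made up for illustration - adjust them to your own build):

```xml
<Target Name="TestWithCoverage">
  <!-- Run the tests; MSTest drops data.coverage under TestResults -->
  <Exec Command="MSTest.exe /testcontainer:MyTests.dll /runconfig:local.testrunconfig" />
  <!-- The coverage console app returns non-zero if coverage is below policy,
       which fails the build -->
  <Exec Command="CodeCoverageHelper.exe TestResults Policy.xml" />
</Target>
```

Because Exec fails the target on a non-zero exit code, the return codes from the console app below flow straight into build pass/fail.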

 

[UPDATE 11/5/2008]

By popular request, here is the source code for the console app: 

 

[UPDATE 12/19/2008] - I modified the code to handle the error: "Error when querying coverage data: Invalid symbol data for file {0}"

 

Basically, we needed to put the data.coverage and the dll/pdb in the same directory.

 

using System;
 using System.Collections.Generic;
 using System.Linq;
 using System.Text;
 using Microsoft.VisualStudio.CodeCoverage;
 
 using System.IO;
 using System.Xml;
 
 //Helpful article: http://blogs.msdn.com/ms_joc/articles/495996.aspx
 //Need to reference "Microsoft.VisualStudio.Coverage.Analysis.dll" from "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies"
 
 
 namespace CodeCoverageHelper
 {
   class Program
   {
 
 
 
     static int Main(string[] args)
     {
       if (args.Length < 2)
       {
         Console.WriteLine("ERROR: Need two parameters:");
         Console.WriteLine(" 'data.coverage' file path, or root folder (does recursive search for *.coverage)");
         Console.WriteLine(" Policy xml file path");
         Console.WriteLine(" optional: display only errors ('0' [default] or '1')");
         Console.WriteLine("Examples:");
         Console.WriteLine(@" CodeCoverageHelper.exe C:\data.coverage C:\Policy.xml");
         Console.WriteLine(@" CodeCoverageHelper.exe C:\data.coverage C:\Policy.xml 1");
 
         return -1;
       }
 
       //string CoveragePath = @"C:\Tools\CodeCoverageHelper\Temp\data.coverage";
       //If CoveragePath is a file, then directly use that, else assume it's a folder and search the subdirectories
       string strDataCoverageFile = args[0];
       //string CoveragePath = args[0];
       if (!File.Exists(strDataCoverageFile))
       {
         //Need to march down to something like:
         // TestResults\TimS_TIMSTALL2 2008-09-15 13_52_28\In\TIMSTALL2\data.coverage
         Console.WriteLine("Passed in folder reference, searching for '*.coverage'");
         string[] astrFiles = Directory.GetFiles(strDataCoverageFile, "*.coverage", SearchOption.AllDirectories);
         if (astrFiles.Length == 0)
         {
           Console.WriteLine("ERROR: Could not find *.coverage file");
           return -1;
         }
         strDataCoverageFile = astrFiles[0];
       }
 
       string strXmlPath = args[1];
 
       Console.WriteLine("CoverageFile=" + strDataCoverageFile);
       Console.WriteLine("Policy Xml=" + strXmlPath);
 
 
       bool blnDisplayOnlyErrors = false;
       if (args.Length > 2)
       {
         blnDisplayOnlyErrors = (args[2] == "1");
       }
 
       int intReturnCode = 0;
       try
       {
         //Ensure that data.coverage and dll/pdb are in the same directory
         //Assume data.coverage in a folder like so:
         // C:\Temp\ApplicationBlocks10\TestResults\TimS_TIMSTALL2 2008-12-19 14_57_01\In\TIMSTALL2
         //Assume dll/pdb in a folder like so:
         // C:\Temp\ApplicationBlocks10\TestResults\TimS_TIMSTALL2 2008-12-19 14_57_01\Out
         string strBinFolder = Path.GetFullPath(Path.Combine(Path.GetDirectoryName(strDataCoverageFile), @"..\..\Out"));
         if (!Directory.Exists(strBinFolder))
           throw new ApplicationException( string.Format("Could not find the bin output folder at '{0}'", strBinFolder));
         //Now copy data coverage to ensure it exists in output folder.
         string strDataCoverageFile2 = Path.Combine(strBinFolder, Path.GetFileName(strDataCoverageFile));
         File.Copy(strDataCoverageFile, strDataCoverageFile2);
 
         Console.WriteLine("Bin path=" + strBinFolder);
         intReturnCode = Run(strDataCoverageFile2, strXmlPath, blnDisplayOnlyErrors);
       }
       catch (Exception ex)
       {
         Console.WriteLine("ERROR: " + ex.ToString());
         intReturnCode = -2;
       }
 
       Console.WriteLine("Done");
       Console.WriteLine(string.Format("ReturnCode: {0}", intReturnCode));
       return intReturnCode;
 
     }
 
     private static int Run(string strDataCoverageFile, string strXmlPath, bool blnDisplayOnlyErrors)
     {
       //Assume that datacoverage file and dlls/pdb are all in the same directory
       string strBinFolder = System.IO.Path.GetDirectoryName(strDataCoverageFile);
 
       CoverageInfoManager.ExePath = strBinFolder;
       CoverageInfoManager.SymPath = strBinFolder;
       CoverageInfo myCovInfo = CoverageInfoManager.CreateInfoFromFile(strDataCoverageFile);
       CoverageDS myCovDS = myCovInfo.BuildDataSet(null);
 
       //Clean up the file we copied.
       File.Delete(strDataCoverageFile);
 
       CoveragePolicy cPolicy = CoveragePolicy.CreateFromXmlFile(strXmlPath);
 
       //loop through and display results
       Console.WriteLine("Code coverage results. All measurements in Blocks, not LinesOfCode.");
 
       int TotalClassCount = myCovDS.Class.Count;
       int TotalMethodCount = myCovDS.Method.Count;
 
       Console.WriteLine();
 
       Console.WriteLine("Coverage Policy:");
       Console.WriteLine(string.Format(" Class min required percent: {0}%", cPolicy.MinRequiredClassCoveragePercent));
       Console.WriteLine(string.Format(" Method min required percent: {0}%", cPolicy.MinRequiredMethodCoveragePercent));
 
       Console.WriteLine("Covered / Not Covered / Percent Coverage");
       Console.WriteLine();
 
       string strTab1 = new string(' ', 2);
       string strTab2 = strTab1 + strTab1;
 
 
       int intClassFailureCount = 0;
       int intMethodFailureCount = 0;
 
       int Percent = 0;
       bool isValid = true;
       string strError = null;
       const string cErrorMsg = "[FAILED: TOO LOW] ";
 
       //NAMESPACE
       foreach (CoverageDSPriv.NamespaceTableRow n in myCovDS.NamespaceTable)
       {
         Console.WriteLine(string.Format("Namespace: {0}: {1} / {2} / {3}%",
           n.NamespaceName, n.BlocksCovered, n.BlocksNotCovered, GetPercentCoverage(n.BlocksCovered, n.BlocksNotCovered)));
 
         //CLASS
         foreach (CoverageDSPriv.ClassRow c in n.GetClassRows())
         {
           Percent = GetPercentCoverage(c.BlocksCovered, c.BlocksNotCovered);
           isValid = IsValidPolicy(Percent, cPolicy.MinRequiredClassCoveragePercent);
           strError = null;
           if (!isValid)
           {
             strError = cErrorMsg;
             intClassFailureCount++;
           }
 
           if (ShouldDisplay(blnDisplayOnlyErrors, isValid))
           {
             Console.WriteLine(string.Format(strTab1 + "{4}Class: {0}: {1} / {2} / {3}%",
               c.ClassName, c.BlocksCovered, c.BlocksNotCovered, Percent, strError));
           }
 
           //METHOD
           foreach (CoverageDSPriv.MethodRow m in c.GetMethodRows())
           {
             Percent = GetPercentCoverage(m.BlocksCovered, m.BlocksNotCovered);
             isValid = IsValidPolicy(Percent, cPolicy.MinRequiredMethodCoveragePercent);
             strError = null;
             if (!isValid)
             {
               strError = cErrorMsg;
               intMethodFailureCount++;
             }
 
             string strMethodName = m.MethodFullName;
             if (blnDisplayOnlyErrors)
             {
               //Need to print the full method name so we have full context
               strMethodName = c.ClassName + "." + strMethodName;
             }
 
             if (ShouldDisplay(blnDisplayOnlyErrors, isValid))
             {
               Console.WriteLine(string.Format(strTab2 + "{4}Method: {0}: {1} / {2} / {3}%",
                 strMethodName, m.BlocksCovered, m.BlocksNotCovered, Percent, strError));
             }
           }
         }
       }
 
       Console.WriteLine();
 
       //Summary results
       Console.WriteLine(string.Format("Total Namespaces: {0}", myCovDS.NamespaceTable.Count));
       Console.WriteLine(string.Format("Total Classes: {0}", TotalClassCount));
       Console.WriteLine(string.Format("Total Methods: {0}", TotalMethodCount));
       Console.WriteLine();
 
       int intReturnCode = 0;
       if (intClassFailureCount > 0)
       {
         Console.WriteLine(string.Format("Failed classes: {0} / {1}", intClassFailureCount, TotalClassCount));
         intReturnCode = 1;
       }
       if (intMethodFailureCount > 0)
       {
         Console.WriteLine(string.Format("Failed methods: {0} / {1}", intMethodFailureCount, TotalMethodCount));
         intReturnCode = 1;
       }
 
       return intReturnCode;
     }
 
     private static bool ShouldDisplay(bool blnDisplayOnlyErrors, bool isValid)
     {
       if (isValid)
       {
         //Is valid --> need to decide
         if (blnDisplayOnlyErrors)
           return false;
         else
           return true;
       }
       else
       {
         //Not valid --> always display
         return true;
       }
     }
 
     private static bool IsValidPolicy(int ActualPercent, int ExpectedPercent)
     {
       return (ActualPercent >= ExpectedPercent);
     }
 
     private static int GetPercentCoverage(uint dblCovered, uint dblNot)
     {
       uint dblTotal = dblCovered + dblNot;
       return Convert.ToInt32( 100.0 * (double)dblCovered / (double)dblTotal);
     }
   }
 }

 

Tuesday, September 16, 2008

Next LCNUG on Sept 25 about ASP.NET Dynamic Data

[This was originally posted at http://timstall.dotnetdevelopersjournal.com/next_lcnug_on_sept_25_about_aspnet_dynamic_data.htm]

The next LCNUG is coming up (located at the College of Lake County, IL). Josh Heyse will present on ASP.NET Dynamic Data on September 25, 2008, at 6:30 PM.

 

This is a great place to meet other developers and learn from the community.