Friday, July 11, 2008

Automate Visual Studio from external tools

While cleaning up a code monster, a colleague and I were looking for ways to dynamically rebuild all of our web services as part of a build script or utility, as we have dozens of them and they change fairly frequently.  In the end, we decided that we didn't necessarily need support for modifying them within the IDE; we could just generate them using the WSDL tool.
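
For the curious, regenerating a proxy with the WSDL tool is a one-liner along these lines (the endpoint matches the automation example below; the namespace and output file names are placeholders):

wsdl.exe /namespace:MyCompany.Services /out:DemoWS.cs http://localhost/services/DemoWS?WSDL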

However, while I was researching the problem I stumbled upon an easy method to drive Visual Studio without having to write an add-in or macro; useful for one-off utilities and hare-brained schemes.

Here's some ugly code, just to give you a sense for it.

You'll need references to:

  • EnvDTE - 8.0.0.0
  • VSLangProj - 7.0.3300.0
  • VSLangProj80 - 8.0.0.0
namespace AutomateVisualStudio
{
    using System;
    using EnvDTE;
    using VSLangProj80;

    public class Utility
    {
        public static void Main()
        {
            // Launch a new, hidden Visual Studio 2005 instance via its COM automation model
            string projectPath = @"C:\Demo\Empty.csproj";
            Type type = Type.GetTypeFromProgID("VisualStudio.DTE.8.0");
            DTE dte = (DTE) Activator.CreateInstance(type);
            dte.MainWindow.Visible = false;

            // Projects can't be manipulated on their own; host the project in a temporary solution
            dte.Solution.Create(@"C:\Temp\", "tmp.sln");
            Project project = dte.Solution.AddFromFile(projectPath, true);

            // VSProject2 exposes the VS 2005-specific project automation, including web references
            VSProject2 projectV8 = (VSProject2) project.Object;
            if (projectV8.WebReferencesFolder == null)
            {
                projectV8.CreateWebReferencesFolder();
            }

            // Generate (or regenerate) the web reference from the WSDL endpoint
            ProjectItem item = projectV8.AddWebReference("http://localhost/services/DemoWS?WSDL");
            item.Name = "DemoWS";

            project.Save(projectPath);
            dte.Quit();
        }
    }
}

Note that Visual Studio doesn't allow you to manipulate projects directly; you must load your project into a solution.  If you don't want to mess with your existing solution file, you can create a temporary solution and add your existing project to it.  And if you don't want to clutter up your disk with temporary solution files, just don't call the Save method on the Solution object.
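
If you'd rather be explicit about discarding the temporary solution, the tail of the example could close it before quitting; a sketch, assuming Solution.Close's SaveFirst argument behaves as documented:

// Discard the temporary solution instead of writing tmp.sln to disk
dte.Solution.Close(false);   // SaveFirst = false
dte.Quit();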

If you had to build a Visual Studio utility, what would you build?

Thursday, July 10, 2008

Catching server errors with WatiN: redux

Stumbled upon this post about how to catch server errors in your WatiN tests.  The approach outlined provides a decent mechanism for detecting server errors by subclassing the WatiN IE object.  While I do appreciate the ability to subclass, it bothers me a bit that I have to write the error-detection logic in my subclass.  After poking around a bit, I think there's a more generic approach that can be achieved by tapping into the NavigateError event of the native browser:

using System;
using SHDocVw;       // COM interop wrapper for the native Internet Explorer events
using WatiN.Core;

public class MyIE : IE
{
    private InternetExplorerClass ieInstance;
    private NavigateError error;

    public MyIE()
    {
        // Hook the native browser events so failed navigations can be recorded
        ieInstance = (InternetExplorerClass) InternetExplorer;
        ieInstance.BeforeNavigate += OnBeforeNavigate;
        ieInstance.NavigateError += OnNavigateError;
    }

    public override void WaitForComplete()
    {
        base.WaitForComplete();

        // Surface any navigation error recorded during the last page load
        if (error != null)
        {
            throw new ServerErrorException(Text);
        }
    }

    // A new navigation is starting; clear any error left over from the previous page
    void OnBeforeNavigate(string URL, int Flags, string TargetFrameName, ref object PostData, string Headers, ref bool Cancel)
    {
        error = null;
    }

    // Fires when a navigation fails (e.g. an HTTP 4xx/5xx); hold the details until WaitForComplete
    void OnNavigateError(object pDisp, ref object URL, ref object Frame, ref object StatusCode, ref bool Cancel)
    {
        error = new NavigateError(URL, StatusCode);
    }

    private class NavigateError
    {
        public NavigateError(object url, object statusCode)
        {
            _url = url;
            _statusCode = statusCode;
        }

        private object _url;
        private object _statusCode;
    }
}
public class ServerErrorException : Exception
{
    public ServerErrorException(string message)
        : base(String.Format("A server error occurred: {0}", message))
    { }
}
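
To give a sense of how the subclass gets used, here's a hedged sketch of a test (the URL is a placeholder; WatiN waits for the page to complete after navigation, which is what lets the exception surface):

[Test]
[ExpectedException(typeof(ServerErrorException))]
public void ThrowsWhenThePageReturnsAServerError()
{
    using (MyIE ie = new MyIE())
    {
        // Hypothetical page that responds with an HTTP 500
        ie.GoTo("http://localhost/broken.aspx");
    }
}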

A few caveats:

  • The MyIE constructor needs to be updated to mirror the other IE constructor overloads.
  • You need to ensure that the URL from NavigateError matches the URL from BeforeNavigate.
  • The test library needs to reference the Interop.SHDocVw wrapper for Internet Explorer.
  • Only tested with IE7.

While I wouldn't consider COM interop to be a "clean" solution, it is a bit more portable between solutions.  And if it's this easy, why isn't it part of WatiN anyway?

Tuesday, July 08, 2008

Legacy Projects: Test the User Interface with Selenium or WatiN

Following up on the series of posts on Legacy Projects, my legacy project with no tests now has a build server with empty coverage data.  At this point, it's really quite tempting to start refactoring my code, adding tests as I go, but that approach puts the cart slightly ahead of the horse.

Although tests for the backend code would help, they can't necessarily guarantee that everything will work correctly.  To be fair, the only real guarantee for the backend code would be to write tests for the existing code and then begin to refactor both tests and code.  That turns out to be a very time-consuming endeavour, as you'll end up writing the tests twice.  In addition, I'm working with the assumption that my code is filled with static methods and tight coupling, which doesn't lend itself well to testing.  I'm going to need a crowbar to fix that, and that'll come later.

It helps to approach the problem by looking at the current manual process as a form of unit testing.  It's worked well up to this point, but because it's done by hand it's a very time-consuming process, prone to error and subject to the judgment of the person performing the tests.  The biggest downfall of the current process is that when the going gets tough, we're more likely to miss details.  In his book Test Driven Development: By Example, Kent Beck refers to manual testing as "test as a verb", where we test by evaluating aspects of the system.  What we need to do is turn this into "test as a noun", where the test is a "procedure to evaluate" in an automated fashion.  By automating the process, we eliminate most of the human-related problems and save a bundle of time.

For legacy projects, the best starting point for automation is the user interface, which isn't the norm for TDD projects.  In a typical TDD project, user interface testing tends to appear casually late in the project (if it appears at all), often because the site is incomplete and the user interface is a very volatile place; UI tests are often seen as too brittle.  For a legacy project, however, the opposite is true: the site is already up and running and the user interface is relatively stable; it's more likely that any change we make to the backend systems will break the user interface.

There is some debate about where this testing should take place.  Some organizations, especially those where the Quality Assurance team is separated from the development teams, rely on automated testing suites such as Empirix (recently acquired by Oracle) to perform functional and performance tests.  These are powerful (and expensive) tools, but in my opinion they sit too late in the development cycle -- you want to catch minor bugs before they are released to QA, otherwise you'll incur an additional bug-fix development cycle.  Ideally, you should integrate UI testing into your build cycle using tools that your development team is familiar with.  And if you can incorporate your QA team into the development cycle to help write the tests, you're more likely to have a successful automated UI testing practice.

Of the user interface testing frameworks that integrate nicely with our build scripts, two favourites come to mind:  Selenium and WatiN.

Using Selenium

Selenium is a Java-based powerhouse whose key strengths are platform and browser diversity, and it's extremely scalable.  Like most Java-based solutions, it's a hodge-podge of individual components that you cobble together to suit your needs; it may seem really complex, but it's a really smart design.  At its core, Selenium Core is a set of JavaScript files that manipulate the DOM.  The most common element is Selenium Remote Control, a server component that acts as a message-broker/proxy-server/browser-hook and can magically insert the Selenium JavaScript into any site -- it's an insanely-wicked-evil-genius solution to overcoming cross-domain scripting issues.  Because Selenium RC is written in Java, it can live on any machine, which allows you to target Linux, Mac and PC browsers.  Scalability comes from Selenium Grid, a server component that proxies requests to multiple Selenium RC machines -- you simply point your tests at the URL of the grid server.  Selenium's only Achilles' heel is that SSL support requires some additional effort.

A Selenium test that targets the Selenium RC looks something like this:

[Test]
public void CanPerformSeleniumSearch()
{
    // Connect to a Selenium RC server on localhost:4444, driving Internet Explorer
    ISelenium browser = new DefaultSelenium("localhost", 4444, "*iexplore", "http://www.google.com");
    browser.Start();
    browser.Open("/");
    browser.Type("q", "Selenium RC");
    browser.Click("btnG");

    string body = browser.GetBodyText();

    Assert.IsTrue(body.Contains("Selenium"));

    browser.Stop();
}

The above code instantiates a new session against the Selenium RC service running on port 4444.  You'll have to launch the service from a command prompt, or configure it to run as a service.  There are lots of options; the best way to get up to speed is to simply follow their tutorial...
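
Launching the service from a command prompt is a one-liner, something like this (assuming java is on your PATH and the jar name matches your download):

java -jar selenium-server.jar -port 4444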

Selenium has a Firefox extension, Selenium IDE, that can be used to record browser actions into Selenese.

Using WatiN

WatiN is a .NET port of Watir, its Ruby counterpart.  Although it's currently limited to Internet Explorer on Windows (version 2.0 will target Firefox), it has an easy entry path and a simple API.

The following WatiN sample is a rehash of the Selenium example.  Confession: both samples are directly from the provided documentation...

[Test]
public void CanPerformWatiNSearch()
{
    // Drive an Internet Explorer instance directly; disposing closes the browser
    using (IE ie = new IE("http://www.google.com"))
    {
        ie.TextField(Find.ByName("q")).TypeText("WatiN");
        ie.Button(Find.ByName("btnG")).Click();

        Assert.IsTrue(ie.ContainsText("WatiN"));
    }
}

As WatiN is a browser hook, its API exposes the option to tap directly into the browser through interop.  You may find it considerably more responsive than Selenium because the requests are marshaled via Windows calls instead of HTTP commands.  There is one caveat to performance, though: WatiN expects a single-threaded apartment (STA) model in order to operate, so you may have to adjust your runtime configuration.
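
For example, with NUnit 2.4.x you can push the test runner onto an STA thread from the test assembly's .config file; a minimal sketch, assuming the NUnit/TestRunner section convention (check your runner's documentation):

<configuration>
  <configSections>
    <sectionGroup name="NUnit">
      <section name="TestRunner" type="System.Configuration.NameValueSectionHandler" />
    </sectionGroup>
  </configSections>
  <NUnit>
    <TestRunner>
      <!-- run WatiN tests in a single-threaded apartment -->
      <add key="ApartmentState" value="STA" />
    </TestRunner>
  </NUnit>
</configuration>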

WatiN also has a standalone application, the WatiN Test Recorder, that can capture browser activity as C# code.

UI Testing Strategy Tips

Rather than writing an exhaustive set of regression tests, here's my approach:

  • Start Small: Begin by writing coarse UI tests that demonstrate simple functionality.  For example, a test that hits the homepage and validates that there aren't any 500 errors.  Complex tests that validate specific HTML markup take longer to produce and tend to be brittle and less maintainable in the long run.
  • Map out and test functional areas:  Identify the key functional elements of the site that QA would normally regression test for a build: login, update a profile, add items to a shopping cart, checkout, search, etc.  Some of these will be definite road-blockers that you'll have to work around -- you'll quickly realize you can't guarantee profile IDs and passwords between environments, or maybe your product catalog changes too frequently.  Some will require creative thinking; others may inspire custom testing tools that can perform test-specific queries or functions.  You may even find a missing need in the backend systems that you could build and leverage as part of your tests.
  • Write tests for functional changes:  You don't need to sit down and write an exhaustive site-wide regression fixture -- focus on the areas that you touch.  If you write tests before you make any changes, you can use those tests to help automate the debugging process.  The development effort is relatively small -- you'd have to test it a dozen times by hand anyway.
  • Write tests for bugs!!!:  What better motivation could you have?  This is what regression testing is all about!
  • Design for different environments:  The code examples above have URLs hard-coded.  Consider using a helper that reads configuration settings to retrieve or construct URLs, so that you can run your UI tests against your local instance, dev, build server, QA, integration, etc. (see the sketch after this list).  UI tests make great build-validation utilities!
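
As a rough idea of what such a helper might look like (the TestUrl class and the TestBaseUrl setting are hypothetical, not part of Selenium or WatiN):

using System.Configuration;

// Hypothetical helper: resolves test URLs from a configurable base address so the
// same fixtures can run against local, dev, build-server or QA environments.
public static class TestUrl
{
    public static string For(string relativePath)
    {
        // e.g. <appSettings><add key="TestBaseUrl" value="http://localhost" /></appSettings>
        string baseUrl = ConfigurationManager.AppSettings["TestBaseUrl"] ?? "http://localhost";
        return baseUrl.TrimEnd('/') + "/" + relativePath.TrimStart('/');
    }
}

A test would then start with new IE(TestUrl.For("/login.aspx")) rather than a hard-coded host.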

Sunday, July 06, 2008

Switching to LiveWriter

Up to this point, I've crafted the HTML markup for my posts this year using Notepad++.  While working with a local editor is far superior to using Blogger's editor window, I've found stylizing elements and adding hyperlinks somewhat time consuming, not to mention that it's difficult to read/review/write content with all the HTML markup in the way.  Despite having better control over the markup, the biggest problem with this approach is that you really can't see what your post will look like until you publish, and even then, I usually follow a nervous publish/review/tweak/publish dance number to sort out all the display issues.

Recently, I downloaded LiveWriter and w.bloggar to test-drive alternatives.  (Actually, I was interested in w.bloggar's ability to edit Blogger templates -- but it turns out that it doesn't work with Blogger's new layout templates.  Drat.)  So far, I'm pleasantly surprised with LiveWriter.

Although I'm pretty excited that the tool is written in .NET with support for managed add-ins, I'm most impressed with the feature that simulates a live preview of your post.  LiveWriter pulls this off by creating a temporary post against your blog and analyzing it to extract your CSS and HTML layout.  You can toggle between editing (F11), preview (F12) and HTML (Shift+F11) really easily.

[Screenshot: LiveWriter post preview]

The biggest snag I've encountered thus far is that the HTML markup produced by LiveWriter is cleaned up with lots of extra line feeds for readability.  While this makes reading the HTML a simple pleasure, it wreaks havoc with my current Blogger settings.

Blogger's default setting converts carriage returns into <br /> tags, so all the extra line breaks inserted by LiveWriter are transformed into ugly whitespace in your posts.  This behaviour is configurable within Blogger: Settings -> Formatting -> Convert line breaks.

[Screenshot: Blogger's Settings -> Formatting -> Convert line breaks setting]

Unfortunately for me, changing this setting is a breaking change for most of my posts (dating back to 2004).  To fix them, I have to add the appropriate <p></p> tags around my content -- fortunately, LiveWriter automatically corrects the markup of any paragraph I touch.  So while the good news is that my posts will have proper markup in the editor, the bad news is that I have to manually edit each one.

Saturday, July 05, 2008

Legacy Projects: Coverage Data without Tests

In my previous post, Get Statistics from your Build Server, I spoke about getting meaningful data into your log output as soon as possible so that you can begin to generate reports about the state of your application.

I'm using NCover to provide code coverage analysis, but I can also get important metrics like non-comment lines of code and the number of classes, members, etc.  Unfortunately, I have no unit tests, so my coverage report contains no data.  Since NCover will only profile assemblies that are loaded into the profiler's memory space, merely referencing my target assembly from my test assembly isn't enough.  To compensate, I added this simple test to load the assembly into memory:

[Test]
public void CanLoadAssemblyToProvideCoverageData()
{
    // Force the target assembly into the profiled process so NCover reports on it
    System.Reflection.Assembly.Load("AssemblyName");
}

This is obviously a dirty hack, and I'll remove it the second I write some tests.  Although I have 0% coverage, I now have a detailed report that shows over 40,000 lines of untested code.  The stage is now set to remove duplication and introduce code coverage.
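
For context, the coverage step in a build script looks roughly like this (a sketch based on the NCover 1.5.x console syntax; paths and assembly names are placeholders):

NCover.Console.exe nunit-console.exe MyApp.Tests.dll //w C:\build //a AssemblyName //x Coverage-Results.xml

The //a list tells the profiler which assemblies to instrument -- which is exactly why the assembly has to be loaded during the test run to produce any data.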

Tuesday, July 01, 2008

Legacy Projects: Get Statistics from your Build Server

As I mentioned in my post, Working with Legacy .NET Projects, my latest project is a legacy application with no tests.  We're migrating from .NET 1.1 to .NET 2.0, and this is the first entry in the series on dealing with legacy projects.  Click here to see the starting point.

On the majority of legacy projects that I've worked on, there is often a common belief within the development team that the entire code base is outdated, filled with bugs, and should be thrown away and rewritten from scratch.  Such a proposal is a very tough sell to management, who will no doubt see zero value in spending a staggering amount only to receive exactly what they currently have, plus a handful of fresh bugs.  Rewrites might make sense when accompanied by new features or platform shifts, but by and large they are a very long and costly endeavour.  Refactoring the code in small steps to climb out of design debt is a much more suitable approach, but it cannot be done without a plan that management can get behind.  Typically, management will support projects that can quantify results, such as improving server performance or page load times.  However, in the context of a sprawling application without separation of concerns, estimating effort for these types of projects can be extremely difficult, and the difficulty is further compounded when there is no automated testing in place.  It's a difficult stalemate between simple requirements and a full rewrite.

Assuming that your legacy project at least has source control, the next logical step to improve your landscape is to introduce a continuous integration server, or build server.  And as there are countless other posts out there describing how to set up a continuous integration server, I'm not going to repeat those good fellows.

While the benefits of a build server are immediately visible to developers, who are all too familiar with dumb-dumb errors like compilation issues due to missing files in source control, the build server can also be an important reporting tool that can be used to sell management on the state of the application.  As a technology consultant who has sat between the development team and management, I think it's fair to say that most management teams would love to claim that they understand what their development teams do, but they'd rather be spared the finer details.  So if you can provide management with a summary of all your application's problems graphed against a timeline, you can demonstrate the effectiveness of their investment over time.  That's a pretty easy sell.

The great news is that very little is required on your part to produce the graphs: CruiseControl.NET 1.3 has a built-in statistics feature that uses XPath statements to extract values from your build log.  Statistics are written to an XML file and a CSV file for easy exporting, and third-party graphing tools can be plugged into the CruiseControl dashboard to produce slick-looking graphs.  The challenge lies in mapping the key pain points in your application to a set of quantifiable metrics, and then establishing a plan that will help you improve those metrics.

Here's a common set of pain points and metrics that I want to improve/measure for my legacy project:

Pain                                   Metrics                                                      Toolset
Tight coupling (poor testability)      Code coverage; number of tests                               NCover, NUnit
Complexity / duplication (code size)   Cyclomatic complexity; lines of code, classes and members    NCover, NDepend, SourceMonitor or VIL
Standards compliance                   FxCop warnings and violations; compilation warnings          FxCop, MSBuild

Ideally, before I start any refactoring or code clean-up, I want my reports to reflect the current state of the application (flawed, tightly coupled and untestable).  To do this, I need to start capturing this data as soon as possible by adding the appropriate tools to my build script.  While it's possible to add new metrics to your build configuration at any time, there is no way to go back and generate log data for previous builds.  (You could manually check out previous builds and run the tools directly, but that would take an insane amount of time.)  The CruiseControl.NET extension CCStatistics also has a tool that can reprocess your log files, which is handy if you add new metrics for data sources that are already part of your build output.

Since adding all these tools into your build script requires some tinkering, I'll be adding them to my build script gradually.  To minimize changes to my CruiseControl configuration, I use a wildcard filter to match all files that follow a set naming convention -- I'm using "*-Results.xml".

<!-- from ccnet.config -->
<publishers>
  <merge>
    <files>
      <file>c:\buildpath\build-output\*-Results.xml</file>
    </files>
  </merge>
</publishers>

Configuring the statistics publisher is really quite easy, and the great news is that the default configuration captures most of the metrics above.  Out of the box, it captures the following:

  • CCNET: Build Label
  • CCNET: Error Type
  • CCNET: Error Message
  • CCNET: Build Status
  • CCNET: Build Start Time
  • CCNET: Build Duration
  • CCNET: Project Name
  • NUNIT: Test Count
  • NUNIT: Test Failures
  • NUNIT: Tests Ignored
  • FXCOP: FxCop Warnings
  • FXCOP: FxCop Errors

Here's a snippet from my ccnet.config file that captures NCover lines of code, files, classes and members.  Note that I'm also using Grant Drake's NCoverExplorer extras to generate an XML summary instead of the full coverage XML output, for performance reasons.

<publishers>
  <merge>
    <files>
      <file>c:\buildpath\build-output\*-Results.xml</file>
    </files>
  </merge>

  <statistics>
    <statisticList>
      <firstMatch name='NCLOC' xpath='//coverageReport/project/@nonCommentLines' include='true' />
      <firstMatch name='files' xpath='//coverageReport/project/@files' include='true' />
      <firstMatch name='classes' xpath='//coverageReport/project/@classes' include='true' />
      <firstMatch name='members' xpath='//coverageReport/project/@members' include='true' />
    </statisticList>
  </statistics>

  <!-- email, etc -->
</publishers>

I've omitted the metrics for NDepend/SourceMonitor/VIL, as I haven't fully integrated these tools into my build reports. I may revisit this later.

If you've found this useful or have other cool tools or metrics you want to share, please leave a note.

Happy Canada Day!