Monday, October 24, 2011

Guided by Tests–Wrap Up

This post is the ninth and last in a series about a group TDD experiment to build an application in 7 days using only tests. Read the beginning here.

This last post wraps up the series by looking back at some of the metrics we can collect from our code and tests. They provide some interesting data about the experiment as well as feedback on the design.

We’ll use three different data sources for our application metrics:

  • Visual Studio Code Analysis
  • MSTest Code Coverage
  • NDepend Code Analysis

Visual Studio Code Analysis

The code analysis features of Visual Studio 2010 provide a convenient view of some common static analysis metrics. Note that this feature only ships with the Premium and Ultimate versions. Here’s a quick screen capture that shows a high level view of the metrics for our project.

Tip: When reading the above graph, keep an eye on the individual class values, not the roll-up namespace values.

[Image: Visual Studio code metrics summary for the project]

Here’s a breakdown of what some of these metrics mean and what we can learn from our experiment.

Maintainability Index

I like to think of the Maintainability Index as a high level metric that summarizes how much trouble a class is going to give you over time. Higher numbers are better, and problems usually start below the 20-30 range. The formula for the maintainability index is actually quite complex, but looking at the above data you can see how the other metrics drive the index down.
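If you're curious about the math, the formula the Visual Studio team documents works out to roughly the sketch below. The Halstead volume, complexity and line counts are values the tooling computes for you; treat my version as illustrative rather than authoritative.

// A sketch of the documented Visual Studio formula, shown for illustration only.
static double MaintainabilityIndex(double halsteadVolume, int cyclomaticComplexity, int linesOfCode)
{
    double classic = 171
                     - 5.2 * Math.Log(halsteadVolume)
                     - 0.23 * cyclomaticComplexity
                     - 16.2 * Math.Log(linesOfCode);

    // Visual Studio rescales the classic 0-171 range onto 0-100 and clamps at zero,
    // which is why the reported values never go negative.
    return Math.Max(0, classic * 100 / 171);
}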

Our GraphBuilder is the lowest in our project, coming in at an index of 69. This is largely influenced by the density of operations and conditional complexity relative to its lines of code – our GraphBuilder is responsible for constructing the graph from the conditional logic of the model. The maintainability index is interesting, but I don’t hold much stock in it alone.

Lines of Code

Lines of Code here means logical lines of code, that is, code lines excluding whitespace and stylistic formatting. Some tools, like NDepend, record additional size metrics, such as the number of IL instructions. Visual Studio’s Lines of Code metric is simple and straightforward.

There are a few interesting observations for our code base.

First, the number of lines of code per class is quite low. Even the NDependStreamParser, which tops the chart at 22 lines, is extremely small considering that it reads data from several different XML elements. The presence of many small classes suggests that classes are designed to do one thing well.

Secondly, there are more lines of code in our test project than in the production code. Some may point to this as evidence that unit testing introduces more effort than writing the actual code – I attribute the additional code to the following:

  • We created hand-rolled mocks and utility classes to generate input. These are not present in the production code.
  • Testing code is more involved than writing it, because tests must exercise the many different paths the code can take; more test code is to be expected.
  • We didn’t write the code and then the tests; we wrote them at the same time.
  • Our tests ensured that we only wrote code needed to make the tests pass. This allowed us to aggressively remove all duplication in the production code. Did I mention the largest and most complicated class is 22 lines long?

Cyclomatic Complexity

Cyclomatic Complexity, also known as Conditional Complexity, represents the number of execution paths a method can take – so classes with a switch or multiple if statements will have higher complexity than those without. The metric is normally applied at the method level, not the class level, but if you look at the graph above, more than half of the classes average below 5 and the rest are below 12. Best practices suggest that Cyclomatic Complexity should be less than 15-20 per method, so we’re good.
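To make the counting concrete, here is an illustrative method that is not from our project: one path for the method entry, one for the if, and one per case label gives a cyclomatic complexity of 4.

// Illustrative only - not from the project.
public string DescribeCoupling(int classCoupling)
{
    if (classCoupling == 0)           // +1
    {
        return "isolated";
    }

    switch (classCoupling)
    {
        case 1: return "low";         // +1
        case 2: return "moderate";    // +1
        default: return "high";
    }
}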

Although the graph above doesn’t display our 45 methods, the cyclomatic complexity for most of them is 1-2. The only exceptions are our NDependStreamParser and GraphBuilder, which have methods with complexity values of 6 and 5 respectively.

I see cyclomatic complexity primarily as an indicator of how many tests a class needs.

Depth of Inheritance

The “depth of inheritance” metric refers to the number of base classes in a class’s inheritance chain. Best practices aim to keep this number as low as possible, since each level in the hierarchy is a dependency whose changes can influence or break the classes beneath it.
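For illustration (these two types are hypothetical, not from our code base), the metric simply counts how far a class sits from the root of its hierarchy:

// Hypothetical types for illustration. A class deriving only from object reports
// a depth of 1; each additional base class adds one more level of risk.
public abstract class ViewModelBase { }            // depth of inheritance: 1
public class ProjectViewModel : ViewModelBase { }  // depth of inheritance: 2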

Our graph shows very low inheritance depth, which supports our design philosophy of using composition and dependency inversion instead of inheritance. There is one red flag in the graph though: our AssemblyGraphLayout has an inheritance depth of 9, a consequence of extending a class from the GraphSharp library, which highlights possible brittleness around that dependency.

Class Coupling

The class coupling metric is interesting because it shows how many types our object consumes or creates. Although Visual Studio doesn’t give us much visibility into what that coupling is (NDepend can help us here), classes with higher coupling are much more sensitive to change. Our GraphBuilder has a Class Coupling of 11, which includes several types from the System namespace (IEnumerable<T>, Dictionary, KeyValuePair) but also knowledge of our Graph and model data.

Classes that combine high coupling with many lines of code and high cyclomatic complexity are highly sensitive to change, which explains why the GraphBuilder has the lowest Maintainability Index of the bunch.

Code Coverage

Code coverage is a metric that shows which execution paths within our code base are covered by unit tests. While code coverage can’t vouch for the quality of the tests or production code, it can indicate the strength of the testing strategy.

Under the rules of our experiment, there should be 100% code coverage because we’ve mandated that no code is written without a test. We have 93.75% coverage, which has the following breakdown:

[Image: DependencyViewer-Coverage, the code coverage breakdown]

Interestingly enough, the three areas with no code coverage are the key obstacles we identified early in the experiment. Here are the snippets of code that have no coverage:

Application Start-up Routine

protected override void OnStartup(StartupEventArgs e)
{
    var shell = new Shell();
    shell.DataContext = new MainViewModelFactory().Create();

    shell.Show();
}

Launching the File Dialog

internal virtual void ShowDialog(OpenFileDialog dialog)
{
    dialog.ShowDialog();
}

Code behind for Shell

public Shell()
{
    InitializeComponent();
}

We’ve designed our solution to limit the amount of “untestable” code, so these lines of code are expected. From this we can establish that our testing strategy has three weaknesses, two of which are covered by launching the application. If we wanted to write automation for testing the user-interface, these would be the areas we’d want early feedback from.
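As an aside, if you’d rather these known gaps not show up as misses in the report, .NET 4 provides an ExcludeFromCodeCoverage attribute that the coverage tooling honors. Here’s a sketch of how it could be applied to the start-up routine; whether hiding a known gap is a good idea is a separate debate.

using System.Diagnostics.CodeAnalysis;
using System.Windows;

public partial class App : Application
{
    // Excluded because the start-up routine requires the real user-interface;
    // the wiring it performs is already covered by the factory tests.
    [ExcludeFromCodeCoverage]
    protected override void OnStartup(StartupEventArgs e)
    {
        var shell = new Shell();
        shell.DataContext = new MainViewModelFactory().Create();

        shell.Show();
    }
}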

NDepend Analysis

NDepend is a static code analysis tool that provides more detailed information than the standard Visual Studio tools. The product has several different pricing levels, including open-source and evaluation licenses, and it offers many great visualizations and features that can help you learn more about your code. While there are many visuals I could present here, I’ll highlight two interesting diagrams.

Dependency Graph:

This graph shows dependencies between namespaces. I’ve configured it to show two things:

  • Box size represents Afferent Coupling: larger boxes are used by more classes
  • Line Size represents the number of methods between the dependent components.

[Image: DependencyMatrix-Graph, the namespace dependency graph]

Dependency Matrix:

A slightly different view of the same information is the Dependency Matrix. This is the graph represented in a cross-tabular format.

[Image: DependencyMatrix-Namespaces, the namespace dependency matrix]

Both of these views help us better visualize the Class Coupling metric that Visual Studio pointed out earlier, and the information they provide is quite revealing. Both diagrams show that the Model namespace is used by the ViewModels and Controls namespaces. This represents a refactoring or restructuring problem, as these layers really should not be aware of the Model: Views shouldn’t have details about the ViewModel; the Service layer should only know about the Model and ViewModels; Controls should know about ViewModel data if needed; and ViewModels should represent view abstractions and thus should not have Model knowledge.

These violations occurred because we established our AssemblyGraph and related items as part of the Model. This was a concept we wrestled with at the beginning of the exercise, and the NDepend graph now helps visualize the problem more concretely. As a result of this choice, we’re left with the following violations:

  • The control required to lay out our graph is a GraphLayout that uses our Model objects.
  • The MainViewModel references the Graph as a property which is bound to the View.

The diagram shows a very thin line, but this Model-to-ViewModel state problem has become a common theme in recent projects. Maybe we need a state container to hold onto the Model objects, or maybe the Graph should be composed of ViewModel objects instead of Model data. It’s worth consideration, and maybe I’ll revisit this problem in an upcoming post.
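As a rough sketch of that second idea, and nothing more than a sketch, the graph’s vertices could wrap the Model objects so that Controls and Views only ever see ViewModel types. The type and member names below are hypothetical; nothing like this exists in the code base today.

// Hypothetical sketch; "AssemblyNode" stands in for whatever Model type the
// graph vertices use today.
public class AssemblyVertexViewModel
{
    private readonly AssemblyNode model;  // the Model object stays hidden in here

    public AssemblyVertexViewModel(AssemblyNode model)
    {
        this.model = model;
    }

    // Only view-friendly data is exposed, so Controls and Views
    // no longer take a dependency on the Model namespace.
    public string DisplayName
    {
        get { return model.AssemblyName; }
    }
}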

Conclusion

To wrap up the series, let’s look back on how we got here: over seven days, guided only by tests, we went from nothing to a working WPF application that parses NDepend output and renders an assembly dependency graph.

I hope you have enjoyed the series and found something to take away. I challenge you to find a similar problem and champion its development within your development group.

Happy Coding.

Tuesday, October 11, 2011

Guided by Tests–Day Seven

This post is the eighth in a series about a group TDD experiment to build an application in 7 days using only tests. Read the beginning here.

Today is the day we test the untestable. Early in the experiment we hit a small road block when our code needed to interact with the user-interface. Since unit-testing the user-interface wasn’t something we wanted to pursue, we hid the OpenFileDialog behind a wrapper, with the expectation that we would return to build out that component with tests later. The session for this day would prove to be an interesting challenge.
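For reference, the seam we left behind is a single-method interface. The series never shows it directly, but judging from how it is consumed, it amounts to something like this (my reading of the contract, so the real interface may differ slightly):

// Inferred from usage elsewhere in the series; the real interface may differ.
public interface IFileDialog
{
    // Prompts the user for a file and returns the selected path.
    string SelectFile();
}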

The Challenges of Physical Dependencies

Although using a wrapper to shield our code from difficult-to-test dependencies is a common and well-accepted technique, it would be irresponsible not to test the internal implementation details of that wrapper. Testing against physical dependencies is hard because they introduce a massive amount of overhead, but if we can isolate the logic from the physical dependency, unit tests can get us 80-90% of the way there. To cover the remaining part, you either test manually or write a set of integration or functional tests.

The technique outlined below can be used for testing user-interface components like this one and email components, and in a pinch it can work for network-related services.

Testing our Wrapper

Time to write some tests for our IFileDialog. I have some good news and bad news.

The good news is that Microsoft provides a common OpenFileDialog as part of WPF, meaning I don’t have to roll my own and I get a look and feel consistent with other applications for little effort. It also means we can assume the OpenFileDialog is defect-free, so we don’t have to write unit tests for it.

The bad news is that I use this component so infrequently that I forget how to use it.

So instead of writing a small utility application to play with the component, I write a test that shows me exactly how the component works:

[TestMethod]
public void WhenSelectAFileFromTheDialog_AndUserSelectsAFile_ShouldReturnFileName()
{
    var dialog = new OpenFileDialog();
    dialog.ShowDialog(); // this will show the dialog

    Assert.IsNotNull(dialog.FileName); 
}

When I run this test, the file dialog is displayed. If I don’t select a file, the test fails. Now that we know how it works, we can rewrite our test and move this code into a concrete implementation.

Unit Test:

[TestMethod]
public void WhenSelectingAFile_AndUserMakesAValidSelection_ShouldReturnFileName()
{
    var subject = new FileDialog();
    string fileName = subject.SelectFile();
    Assert.IsNotNull(fileName);
}

Production Code:

public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();
        dialog.ShowDialog();

        return dialog.FileName;
    }
}

The implementation is functionally correct, but when I run the test I have to select a file in order to have the test pass. This is not ideal. We need a means to intercept the dialog and simulate the user selecting a file. Otherwise, someone will have to babysit the build and manually click file dialog prompts until the early morning hours.

Partial Mocks To the Rescue

Instead of isolating the instance of our OpenFileDialog with a mock implementation, we intercept the activity and allow ourselves to supply a different implementation for our test. The following shows a simple change to the code to make this possible.

public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();

        ShowDialog(dialog);

        return dialog.FileName;
    }

    // note: for Moq to override an internal member, the production assembly needs
    // [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
    internal virtual void ShowDialog(OpenFileDialog dialog)
    {
        dialog.ShowDialog();
    }
}

This next part is a bit weird. In the last several posts, we’ve used Moq to replace our dependencies with fake stand-in implementations. For this post, we’re going to mock the subject of the test, and fake out specific methods on the subject. Go back and re-read that. You can stop re-reading that now.

As an aside: I often don’t like showing this technique because I’ve seen it get abused. I’ve seen developers use it to avoid breaking classes down into smaller responsibilities; they fill their classes with virtual methods and then stub out huge chunks of the subject. This feels like shoddy craftsmanship and doesn’t sit well with me – granted, it works, but it leads to problems. First, the areas they’re subverting never get tested. Second, it’s too easy for developers to forget what they’re doing and start writing tests for the mocking framework instead of the subject’s functionality. So use it with care. In this example, I’m subverting one line of a well-tested third-party component in order to avoid human involvement in the test.

In order to intercept the ShowDialog method and replace it with our own implementation, we can use Moq’s Callback feature. I’ve written about Moq’s support for Callbacks before, but in a nutshell, Moq can intercept the original method call and its inbound arguments for use within your test.

Our test now looks like this:

[TestMethod]
public void WhenSelectingAFile_AndUserMakesAValidSelection_ShouldReturnFileName()
{
    // setup a partial mock for our subject
    var mock = new Mock<FileDialog>();
    FileDialog subject = mock.Object;

    // The ShowDialog method in our FileDialog is virtual, so we can set up
    //    an alternate behavior when it's called.
    // We configure the ShowDialog method to call the SelectAFile method
    //    with the original inbound argument
    mock.Setup( partialMock => partialMock.ShowDialog(It.IsAny<OpenFileDialog>()))
        .Callback<OpenFileDialog>( SelectAFile );

    string fileName = subject.SelectFile();
    Assert.IsNotNull(fileName);
}

// we alter the original inbound argument to simulate
//    the user selecting a file
private void SelectAFile(OpenFileDialog dialog)
{
    dialog.FileName = "Foo";
}

Now when our test runs, the FileDialog returns "Foo" without launching a popup, and we can write tests for a few extra scenarios:

[TestMethod]
public void WhenSelectingAFile_AndTheUserCancelsTheFileDialog_NoFileNameShouldBeReturned()
{
    // Subject is a partial mock of FileDialog, set up in the test class initialization (not shown here).
    // We're mocking out the call to show the dialog, so without any setup
    // on the mock, the dialog never returns a file name.

    string fileName = Subject.SelectFile();

    Assert.IsNull(fileName);
}

[TestMethod]
public void WhenSelectingAFile_BeforeUserSelectsAFile_EnsureDefaultDirectoryIsApplicationRootFolder()
{
    // ARRANGE:
    string expectedDirectory = Environment.CurrentDirectory;

    Mock.Get(Subject)
        .Setup(d => d.ShowDialog(It.IsAny<OpenFileDialog>()))
        .Callback<OpenFileDialog>(
            win32Dialog =>
            {
                // ASSERT: validate that the directory is set when the dialog is shown
                Assert.AreEqual(expectedDirectory, win32Dialog.InitialDirectory);
            });

    // ACT: invoke showing the dialog
    Subject.SelectFile();
}

Next: Review some of the application metrics for our experiment in the Guided by Tests – Wrap Up.

Tuesday, October 04, 2011

Guided by Tests–Day Six

This post is the seventh in a series about a group TDD experiment to build an application in 7 days using only tests. Read the beginning here.

Today is the day where we put it all together, wire up a user-interface and cross our fingers when we run the application. I joked with the team that “today is the day we find out what tests we missed.” Bugs are simply tests we didn’t think about.

Outstanding Design Concerns

At this point we have all the parts for our primary use case. And although we’ve designed the parts to be small with simple responsibilities, we’ve been vague about the structure of the overall application. Before we can show a user-interface, we’ve got a few remaining design choices to make.

Associating the Command to the View

In the last post we looked at the NewProjectCommand. After reviewing our choices about the relationship between the Command and the ViewModel, we decided that the most pragmatic choice was to have the Command hold a reference to the ViewModel, though we didn’t define where either object fits into the overall application. The next question presented to the team was whether the command should remain independent of the ViewModel and reside in a menu control or other top-level ViewModel, or whether we should add a Command property to the ViewModel.

The team decided to take the convenience route and put our NewProjectCommand as a property on our MainViewModel. Ultimately, this decision means that our MainViewModel becomes the top-level ViewModel that the View will bind to. This decision is largely view-centric and should be easy to change in the future if we need to reorganize the structure of the application.
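To make that relationship concrete, the command now looks roughly like the sketch below. It is a reconstruction from how the factory later in this post wires it up; the Execute body and the Load/Build method names are my paraphrase rather than the project’s exact code.

using System;
using System.Windows.Input;

// Reconstructed sketch; Load and Build are assumed member names.
public class NewProjectCommand : ICommand
{
    private readonly MainViewModel viewModel;
    private readonly ProjectLoader loader;
    private readonly GraphBuilder graphBuilder;

    public NewProjectCommand(MainViewModel viewModel, ProjectLoader loader, GraphBuilder graphBuilder)
    {
        // argument checking omitted here for brevity
        this.viewModel = viewModel;
        this.loader = loader;
        this.graphBuilder = graphBuilder;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return true;
    }

    public void Execute(object parameter)
    {
        // ask the loader for project data, build the graph,
        // and hand the result to the ViewModel the command holds
        var projectData = loader.Load();
        viewModel.Graph = graphBuilder.Build(projectData);
    }
}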

With this decision resolved, we’ve got a better picture of how the user-interface and logical model will work. Before we start working on the xaml, we need to figure out how all these pieces come together.

Assembling the MainViewModel

The parts of our application have been designed with the Dependency Inversion Principle in mind – our objects don’t have a “top” and expect that the caller will pass in the proper configuration. Eventually, something must take the responsibility to configure the object graph. The next question is: where should this happen?

Someone suggested stitching the components together in the application start-up routine. While this is logically where this activity should occur, it doesn’t make sense to burden the start-up routine with these implementation details. If we did, we’d likely have a monster start-up routine that would change constantly. Plus, we’d have little means to test it without invoking the user-interface.

Achieving 100% code coverage, though a noble effort, is not always realistic. In most applications it’s likely that 10% of the codebase won’t have tests because those areas border on the extreme edges of the application. Writing unit tests for extreme edges is hard (“unit” tests for user-interface components are especially difficult), and it helps to have a balanced testing strategy that includes unit, system integration and functional UI automation to cover these areas.

Rather than including these details in the application start-up routine, we’ll move as much code as possible into testable components. The goal is to make the start-up routine so simple that it won’t require changes. This will minimize the untestable areas of the application.

To accomplish this, we decided to create a Factory for a MainViewModel.

Realizations for the MainViewModel Factory

The primary responsibility of our factory is to assemble our ViewModel with a fully configured NewProjectCommand. The test looks something like this:

[TestMethod]
public void CreatingAViewModel_WithDefaults_ShouldSetupViewModel()
{
    var factory = new MainViewModelFactory();
    var viewModel = factory.Create();
    
    Assert.IsNotNull(viewModel, "ViewModel was not created.");
    Assert.IsNotNull(viewModel.NewProject, "Command was not wired up.");
}

In order to make the test pass, we’ll need to wire up all the components of our solution within the factory. The argument checking we’ve introduced in the constructors of our command and loader guarantees that. Ultimately, the factory looks like this:

public class MainViewModelFactory
{
    public MainViewModel Create()
    {
        var vm = new MainViewModel();

        // the real FileDialog wrapper doesn't exist yet (that arrives on day seven),
        // so for now the factory wires in our hand-rolled stand-in
        IFileDialog dialog = new MockFileDialog();
        IGraphDataParser graphParser = new NDependStreamParser();
        var graphBuilder = new GraphBuilder();

        var loader = new ProjectLoader(dialog, graphParser);

        vm.NewProject = new NewProjectCommand(vm, loader, graphBuilder);

        return vm;
    }
}

With that all in place, the application start-up routine is greatly simplified. We could probably optimize it further, but we’re anxious to see our application running today, so this will do:

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        var shell = new Shell();
        shell.DataContext = new MainViewModelFactory().Create();
        shell.Show();
    }
}

Running the App for the first time

Ready for our first bug? We borrowed some XAML from Sacha’s post, and after we discovered that we needed a one-line user control, we wired up a crude Button to our NewProject command. Then we launched the app, crossed our fingers, and upon clicking the “New Project” button….. nothing happened!

We set a breakpoint and then walked through the command’s execution. Everything was perfect, except we had forgotten one small detail: we forgot to notify the user-interface that we had produced a graph.

As a teaching opportunity, we wrote a unit test for our first bug:

[TestClass]
public class MainViewModelTests
{
    [TestMethod]
    public void WhenGraphPropertyChanges_ShouldNotifyTheUI()
    {
        string propertyName = null;

        var viewModel = new MainViewModel();
        viewModel.PropertyChanged += (sender,e) => 
            propertyName = e.PropertyName;

        viewModel.Graph = new AssemblyGraph();

        Assert.AreEqual("Graph", propertyName,
            "The user interface wasn't notified.");
    }
}
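The fix itself is small: the Graph property just needs to raise PropertyChanged from its setter. A minimal sketch of the fix, assuming MainViewModel implements INotifyPropertyChanged directly rather than through a base class and exposes the command as an ICommand, looks like this:

using System.ComponentModel;
using System.Windows.Input;

public class MainViewModel : INotifyPropertyChanged
{
    private AssemblyGraph graph;

    // the command the factory wires up and the View binds to
    public ICommand NewProject { get; set; }

    public AssemblyGraph Graph
    {
        get { return graph; }
        set
        {
            graph = value;
            OnPropertyChanged("Graph"); // tell the View the binding needs to refresh
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}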

And on the second attempt (read: after messing with XAML styles for a while) the app produces the following graph:

[Image: WPF-ComponentDependenciesDiagram, the rendered assembly dependency graph]

Next: Read about polishing off the File Dialog in Day Seven