This post is the third in a series about a group TDD experiment to build an application in 5 days using only tests. Read the beginning here.
Today we break new ground on our application, starting with our first test. It's still a teaching session: I'll write the first set of tests to demonstrate naming conventions and how to apply TDD using the rules we defined the day before. But first, we need to figure out where we should start.
Logical Flow
In order to determine where we should start, it helps to draw out the logical flow of our primary use case: create a new dependency viewer from an NDepend AssembliesDependencies.xml file. The logical flow looks something like this:
- User clicks “New”
- The user is prompted to select a file
- Some logical processing occurs where the file is read,
- …a graph is produced,
- …and the UI is updated.
The question of where to start is an interesting one. Given limited knowledge of what we need to build or how these components will interact, which area of the logical flow do we know the most about? For which part can we reliably predict the outcome?
Starting from scratch, the most reasonable choice seemed to be the part that reads our NDepend file. We know the structure of the file, and we know that its contents will become our model.
Testing Constraints
When developing with a focus on testability, there are certain common problems that arise when trying to get a class under the test microscope. You learn to recognize them instantly, and I’ve jokingly referred to this as spidey-sense – you just know these are going to be problematic before you start.
While this is not a definitive list, the obvious ones are:
- User Interface: Areas that involve the user-interface can be problematic for several reasons:
- Some test-runners have technical limitations and cannot launch a user interface because of their threading model.
- The UI may require complex configuration or additional prerequisites (style libraries, etc.) and is subject to frequent change.
- The UI may unintentionally require human interaction during the tests, thereby limiting our ability to reliably automate.
- File System: Any time we need files or folder structure, we are dependent on the environment being set up a certain way with dummy data.
- Database / Network: Being dependent on external services is additional overhead that we want to avoid. Not only will tests run considerably slower, but the outcome of the test is dependent on many factors that may not be under our control (service availability, database schema, user permissions, existing data).
Some of the less obvious ones are design considerations that can make a class difficult to test, such as tight coupling to the implementation details of other classes (static methods, use of "new", etc.).
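To make the coupling problem concrete, here is a deliberately simplified illustration; the class names and file path are hypothetical and not part of our project:

using System.IO;

// Hard to test: the dependency on the file system is hidden inside the method,
// so the test cannot run without this exact file existing on disk.
public class TightlyCoupledParser
{
    public string Parse()
    {
        return File.ReadAllText(@"C:\data\AssembliesDependencies.xml");
    }
}

// Easier to test: the caller supplies the data source, so a test can pass a
// StringReader with canned data instead of touching the file system.
public class TestableParser
{
    public string Parse(TextReader source)
    {
        return source.ReadToEnd();
    }
}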
In our case, our first test would depend on the file system. We would likely need to test several different scenarios, which would require many different files. While we could go that route, working with the file system directly would only slow us down. We needed to find a way to isolate ourselves.
The team tossed around several different suggestions, including simply passing the XML as a string. Ultimately, since this class must read the contents of the file, we decided that the best way to work with the XML was an XmlReader. We could simulate many different scenarios by setting up a stream containing our test data.
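The practical upshot is that each test scenario can be expressed as a string of XML wrapped in a reader rather than as a file on disk. A minimal sketch; the XML content here is just a placeholder, not the real NDepend schema:

using System.IO;
using System.Xml;

// Each scenario supplies its own xml; no file on disk is required.
var emptyScenario = XmlReader.Create(new StringReader(""));
var singleAssembly = XmlReader.Create(
    new StringReader("<Assemblies><Assembly Name='Demo' /></Assemblies>"));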
Our First Test
After deciding that our class would be named NDependStreamParser, our first test looked something like this:
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DependencyViewer
{
    [TestClass]
    public class NDependStreamParserTests
    {
        [TestMethod]
        public void TestMethod1()
        {
            Assert.Fail();
        }
    }
}
We know very little about what we need, but at the very least the golden rule is to ensure that every test fails from the very beginning. Writing "Assert.Fail();" in a new test is a good habit to establish.
To identify what we need, it helps to work backward: we write our assertions first and then, working from the bottom up, fill in the missing pieces. Our discovery followed this progression:
Realization: At the end of the test, we'll have some sort of results, and the results should not be null. At this point the test compiles, but it's red.

Test Code:

object results = null;

Assert.IsNotNull(
    results,
    "Results were not produced.");

Realization: Where will the results come from? We'll need a parser, and the results will come after we call Parse. The code won't compile because the parser doesn't exist yet. If we use the auto-generate features of Visual Studio / ReSharper, the test compiles, but because of the default NotImplementedException the test fails.

Test Code:

var parser = new NDependStreamParser();
object results = parser.Parse();

Assert.IsNotNull(
    results,
    "Results were not produced.");

Realization: We need to make the test pass. Do whatever it takes to make it green.

Implementation:

public object Parse()
{
    // yes, it's a dirty hack.
    // but now the test passes.
    return new object();
}

Realization: Our test passes, but we're clearly not done. How will we parse? The data needs to come from somewhere; we need to read from a stream. Introducing the stream argument into the Parse method won't compile (so the test is red), but this is a quick fix in the implementation.

Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);

object results = parser.Parse(reader);
// ...

Implementation:

public object Parse(XmlReader reader)
{
    // ...
}

Realization: Our return type shouldn't be "object". What should it be? After a short review of the NDepend AssembliesDependencies.xml file, we decide that we should read the list of assemblies from the file into a model object, which we arbitrarily decide to call ProjectAssembly. At a minimum, Parse should return an IEnumerable<ProjectAssembly>. There are a few minor compilation problems to address here, including the auto-generation of the ProjectAssembly class, but these are all simple changes that can be made in under 60 seconds.

Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);

IEnumerable<ProjectAssembly> results = parser.Parse(reader);
// ...

Implementation:

public IEnumerable<ProjectAssembly> Parse(
    XmlReader reader)
{
    return new List<ProjectAssembly>();
}
At this point, we're much more informed about how we're going to read the contents of the file. We're also ready to make some design decisions and rename our test to reflect what we've learned. We decide that (for simplicity's sake) the parser should always return a list of items, even if the file is empty. While the implementation may be crude, the test is complete for this scenario, so we rename our test to match this decision and add an additional assertion to better express its intent.
Sidenote: The naming convention for these tests is based on Roy Osherove’s naming convention, which has three parts:
- Feature being tested
- Scenario
- Expected Behaviour
[TestMethod]
public void WhenParsingAStream_WithNoData_ShouldProduceEmptyContent()
{
    var parser = new NDependStreamParser();
    var sr = new StringReader("");
    var reader = XmlReader.Create(sr);

    IEnumerable<ProjectAssembly> results = parser.Parse(reader);

    Assert.IsNotNull(
        results,
        "The results were not produced.");
    Assert.AreEqual(
        0,
        results.Count(),
        "The results should be empty.");
}
Adding Tests
We’re now ready to start adding additional tests. Based on what we know now, we can start each test with a proper name and then fill in the details.
With each test, we learn a little bit more about our model and the expected behaviour of the parser. The NDepend file contains a list of assemblies, where each assembly contains a list of assemblies that it references and a list of assemblies that it depends on. The subsequent tests we wrote:
- WhenParsingAStream_ThatContainsAssemblies_ShouldProduceContent
- WhenParsingAStream_ThatContainsAssembliesWithReferences_EnsureReferenceInformationIsAvailable
- WhenParsingAStream_ThatContainsAssembliesWithDependencies_EnsureDependencyInformationIsAvailable
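Each of these followed the same arrange/act/assert shape as the first test. As a rough sketch of the first one (from memory; the XML is abbreviated and the element names are only indicative of the AssembliesDependencies.xml structure, not an exact copy of it):

[TestMethod]
public void WhenParsingAStream_ThatContainsAssemblies_ShouldProduceContent()
{
    var parser = new NDependStreamParser();

    // Illustrative test data: a single assembly entry.
    var sr = new StringReader(
        "<AssembliesDependencies>" +
        "  <Assembly Name='DependencyViewer' />" +
        "</AssembliesDependencies>");
    var reader = XmlReader.Create(sr);

    IEnumerable<ProjectAssembly> results = parser.Parse(reader);

    Assert.AreEqual(1, results.Count(), "Expected one assembly in the results.");
    Assert.AreEqual("DependencyViewer", results.First().Name);
}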
It's important to note that these tests aren't just driving out the implementation details of the parser; we're building our model object as well. Properties are added to the model as needed.
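By the end of these tests, the model had taken roughly the following shape (a sketch; the property names are illustrative rather than an exact record of what we wrote):

using System.Collections.Generic;

// Model object representing one assembly from the NDepend file.
public class ProjectAssembly
{
    public ProjectAssembly()
    {
        References = new List<ProjectAssembly>();
        Dependencies = new List<ProjectAssembly>();
    }

    public string Name { get; set; }

    // Assemblies this assembly references.
    public IList<ProjectAssembly> References { get; set; }

    // Assemblies this assembly depends on.
    public IList<ProjectAssembly> Dependencies { get; set; }
}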
Refactoring
In the TDD mantra "Red, Green, Refactor", the "Refactor" step is usually taken to mean refactoring the implementation after you've written the tests. However, the scope of the refactoring should include both the tests and the implementation.
Within the implementation, you should be able to optimize the code freely, provided you aren't adding functionality. (My original implementation using the XmlReader was embarrassing, and I ended up experimenting with the reader syntax later that night until I found a clean, elegant solution. The tests were invaluable for discovering what was possible.)
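I won't claim this is the exact final version, but the cleaner shape was roughly along these lines, with the element and attribute names standing in for the real NDepend schema:

public IEnumerable<ProjectAssembly> Parse(XmlReader reader)
{
    var results = new List<ProjectAssembly>();

    // Walk the document one <Assembly> element at a time.
    while (reader.ReadToFollowing("Assembly"))
    {
        results.Add(new ProjectAssembly
        {
            Name = reader.GetAttribute("Name")
        });
    }

    return results;
}

The real version also read the reference and dependency information for each assembly, but the overall shape was the same.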
Within the tests, refactoring means removing as much duplication as possible without obscuring the intent of the test. By the time we started the third test, the string concatenation to assemble our XML and the plumbing code to create our XmlReader had been copied and pasted several times. This plumbing logic slowly evolved into a utility class that used an XmlWriter to construct our test data.
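That helper ended up looking roughly like this sketch (simplified, and with illustrative class, method, and element names):

using System.IO;
using System.Text;
using System.Xml;

// Test helper that builds xml for our scenarios and hands back an XmlReader,
// so individual tests no longer concatenate strings by hand.
public static class TestDataBuilder
{
    public static XmlReader CreateReaderFor(params string[] assemblyNames)
    {
        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb))
        {
            writer.WriteStartElement("AssembliesDependencies");
            foreach (var name in assemblyNames)
            {
                writer.WriteStartElement("Assembly");
                writer.WriteAttributeString("Name", name);
                writer.WriteEndElement();
            }
            writer.WriteEndElement();
        }

        return XmlReader.Create(new StringReader(sb.ToString()));
    }
}

Each test could then ask for exactly the data it needed without repeating the XmlWriter plumbing.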
Next: Day Three