This post is the fifth in a series about a group TDD experiment to build an application in five days using only tests. Read the beginning here.
As previously mentioned on Day Three, we split the group into two teams: one focused on the process of loading a new project, while the other focused on constructing a graph. This post covers the efforts of the team working through the tests and implementation of the GraphBuilder.
This team had a unique advantage over the other: a blog post outlining how to use the Graph# framework. I advised the team that they could refer to the post, even download the example if needed, but any code introduced into the project had to follow the rules of the experiment: all code must be written to satisfy the requirements of a test.
A Change in Approach
The goals for this team were different, too. We already had our data well defined and a reasonable expectation of what the results should be. As such, we took a different approach to writing the tests. Up to this point, our process had involved writing one test at a time and only the code needed to satisfy that test; we wouldn't identify other tests until we felt the current test was complete.
For this team, we held a small brainstorming session and defined all the scenarios we would need to test up front.
I love this approach and tend to use it when working with my teams. I usually sit down with the developer and envision how the code will be used. From this discussion we stub out a series of failing tests (Assert.Fail), and after some high-level guidance about what we need to build, I leave them to implement the tests and code. The clear advantage to this approach is that I can step in for an over-the-shoulder code review and quickly get feedback on how things are going. When the developer says things are moving along, I can simply challenge them to "prove it". The developer is more than happy to show their progress with working tests, and the failing tests represent a great opportunity to determine whether the developer has thought about how to finish them. Win/win.
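To illustrate that stubbing step, a skeleton might look something like the following. This is a hypothetical sketch, not the project's actual test class; the class and test names here are invented for illustration.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GraphBuilderTests
{
    // Each scenario from the planning session becomes a failing stub.
    // The stubs fail on purpose until they are implemented, so the test
    // runner doubles as a progress report.
    [TestMethod]
    public void WhenBuildingFromAnEmptyList_ShouldProduceAnEmptyGraph()
    {
        Assert.Fail("Not yet implemented.");
    }

    [TestMethod]
    public void WhenBuildingFromASingleAssembly_ShouldContainOneVertex()
    {
        Assert.Fail("Not yet implemented.");
    }

    // ...one stub per scenario identified up front.
}
```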
The test cases we identified for our graph builder:
- When building a graph from an empty list, it should produce an empty graph.
- When building a graph from a single assembly, the graph should contain one vertex.
- When building a graph with two independent assemblies, the graph should contain two vertices and no edges between them.
- When building a graph with one assembly referencing another, the graph should contain two vertices and one edge.
- When building a graph where two assemblies have forward and backward relationships (the first item lists the second vertex as a dependency, the second item lists the first as a "referenced by"), the graph should contain unique edges between items.
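As a sketch of how one of these scenarios might read once implemented: the fixture members (`Subject`, `_projectList`) mirror the test code shown later in this post, but the `Dependencies` collection and the `VertexCount`/`EdgeCount` assertions are assumptions about the object model, not the project's actual API.

```csharp
[TestMethod]
public void WhenOneAssemblyReferencesAnother_ShouldContainTwoVerticesAndOneEdge()
{
    // Two assemblies, with assembly1 depending on assembly2.
    var assembly1 = new ProjectAssembly { FullName = "Assembly1" };
    var assembly2 = new ProjectAssembly { FullName = "Assembly2" };
    assembly1.Dependencies.Add(assembly2);

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // One vertex per assembly, one edge for the single reference.
    Assert.AreEqual(2, graph.VertexCount);
    Assert.AreEqual(1, graph.EdgeCount);
}
```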
By the time the team had begun developing the third test, most of the dependent object model had been defined; the remaining tests exercised implementation details. For example, to establish a relationship between assemblies, we would need to store them in a lookup table. Whether that lookup table should reside within the GraphBuilder or be pushed lower into the Graph itself is an optimization that can be decided later if needed. The tests would not need to change to support this refactoring effort.
Interesting Finds
The session on the fourth day involved a review of the implementation and an opportunity to refactor both the tests and the code. One of the great realizations was how much we could reduce the verbosity of the test data setup.
We started with a lot of duplication and overhead in the tests:
```csharp
[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = new ProjectAssembly { FullName = "Assembly1" };
    var assembly2 = new ProjectAssembly { FullName = "Assembly2" };

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}
```
We moved some of the initialization logic into a helper method, which improved readability:
```csharp
[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly("Assembly1");
    var assembly2 = CreateProjectAssembly("Assembly2");

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name)
{
    return new ProjectAssembly { FullName = name };
}
```
However, once we discovered that the assembly names weren't important, only that they were unique, we optimized this further:
```csharp
[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly();
    var assembly2 = CreateProjectAssembly();

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name = null)
{
    if (name == null)
        name = Guid.NewGuid().ToString();

    return new ProjectAssembly { FullName = name };
}
```
If we really wanted to, we could optimize this further by pushing the initialization logic directly into the production code:
```csharp
[TestMethod]
public void WhenConstructingAProjectAssembly_WithNoArguments_ShouldAutogenerateAFullName()
{
    var assembly = new ProjectAssembly();

    bool nameIsPresent = !String.IsNullOrEmpty(assembly.FullName);

    Assert.IsTrue(nameIsPresent, "Name was not automatically generated.");
}
```
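One way the production code could satisfy this test is to default FullName in the constructor. This is a sketch of that idea, assuming nothing else in ProjectAssembly depends on the default value:

```csharp
using System;

public class ProjectAssembly
{
    public ProjectAssembly()
    {
        // Default to a unique name; callers that care can overwrite it.
        FullName = Guid.NewGuid().ToString();
    }

    public string FullName { get; set; }
}
```

The trade-off is that the test convenience now lives in production code, so it is worth confirming that an auto-generated name is sensible behavior for real callers, not just for tests.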
Continue Reading: Day Five