If you're following true test-driven development, you write your tests before you write the code. By definition you only write the code that is required, so you should always have 100% code coverage.
Unfortunately, this is not always the case. We have legacy projects without tests; we're forced to cut corners; we leave things to finish later that we forget about. For that reason, we look to tools to give us a sense of confidence in the quality of our code. Code coverage is often (dangerously) seen as a confidence gauge. So to follow up on a few of my other TDD posts, I want to talk about what value code coverage can provide and how you should and shouldn't use it...
Let's start by looking at what code coverage will tell us...
- Code coverage shows which parts of our code have been tested. This metric is usually expressed as a total percentage of the code base that has been tested.
- Most coverage tools also keep track of how many times each method has been visited. This value shows how much or how little testing a specific code block has received, but as far as I know there's no valuable overall metric here. You could infer "top most tested" or "top least tested" metrics.
In some cases, code coverage can contribute to a confidence level. I feel better about a large code base with 80% coverage than one with little or no coverage. But coverage is just statistical data -- it can be misleading...
Good Coverage doesn't mean Good Code
A high coverage number cannot be used as an overall code-quality metric. Code coverage cannot reveal that your code or tests haven't accounted for unexpected scenarios, so it's entirely possible for buggy code with "just enough" tests to have high coverage.
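To make that concrete, here's a contrived sketch (the Stats class and its test are hypothetical): the single test touches every line of Average, so coverage reports 100%, yet the method still blows up on an empty array -- a scenario the test never accounts for.

using NUnit.Framework;

public static class Stats
{
    public static decimal Average(int[] values)
    {
        int sum = 0;
        foreach (int v in values)
            sum += v;
        // Throws DivideByZeroException for an empty array -- a bug that
        // no amount of coverage on the lines above will reveal.
        return (decimal)sum / values.Length;
    }
}

[TestFixture]
public class StatsTest
{
    [Test]
    public void CanAverage()
    {
        // Exercises every line of Average: coverage says 100%,
        // but the empty-array bug remains untested.
        Assert.AreEqual(2m, Stats.Average(new int[] { 1, 2, 3 }));
    }
}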
Good Coverage doesn't mean Good Tests
A widely held belief in TDD is that the confidence level of the code is proportional to the quality of the tests. Code coverage tools can be very useful for showing developers which areas of the code are missing tests, but they should not be used as a benchmark for test quality. Tests become meaningless when developers write them to satisfy coverage reports instead of to prove the functionality of the application. See the example below.
How a few bad tests ruin coverage
Developers can unknowingly write a test that invalidates coverage. To demonstrate, let's assume we have a really simple Person class. For the sake of argument, FirstName is always required, so we make it available through the constructor.
[TestFixture]
public class PersonTest
{
    [Test]
    public void CanCreatePerson()
    {
        Person p = new Person("Bryan");
        Assert.AreEqual("Bryan", p.FirstName);
    }
}
public class Person
{
    public Person(string firstName)
    {
        _first = firstName;
    }

    public virtual string FirstName
    {
        get { return _first; }
        set { _first = value; }
    }

    private string _first;
}
This is all well and good. However, a code coverage report would reveal that the FirstName property setter has no coverage.
Should we fix the code...
public Person(string firstName)
{
    FirstName = firstName; // virtual method call in a constructor
                           // is an FxCop violation (CA2214)
}
... or the test?
[Test]
public void CanCreatePerson()
{
    Person p = new Person("bryan");
    Assert.AreEqual("bryan", p.FirstName);
    p.FirstName = "Bryan";
    Assert.AreEqual("Bryan", p.FirstName);
}
Trick question. Neither!
There are two ways to improve code coverage -- write more tests, or get rid of code. In this case, I would argue that it's better to remove the setter than to write any code just to satisfy coverage. (Wow, less really IS more!) Leave the property as read-only until some calling code needs to write to it, at which point the tests for that call site will provide the coverage you need.
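For reference, here's a sketch of the trimmed-down class (marking the backing field readonly is my own addition):

public class Person
{
    public Person(string firstName)
    {
        _first = firstName;
    }

    // Read-only until some calling code actually needs a setter;
    // the existing constructor test already covers the getter.
    public virtual string FirstName
    {
        get { return _first; }
    }

    private readonly string _first;
}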
"But putting the setter back in is a pain!" -- sure it is. Alternatively, you can leave it in, but make sure you do not write a test for it. If the coverage remains zero for extended periods of time, remove it later. (If you can't remove it because some calling code is writing to it, you missed something in one of your tests.)
Note: In general, plain old value objects like our Person class won't need standalone tests. The exception to this is when you need tests to demonstrate specialized logic in getter/setter methods.
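For example, a setter that normalizes its input (a hypothetical variation on the Person class above, not the code we just trimmed) carries real logic and deserves a test of its own:

using NUnit.Framework;

// Hypothetical variation: the setter normalizes input rather than
// just storing it, so it's worth a standalone test.
public class Person
{
    public Person(string firstName)
    {
        _first = firstName;
    }

    public virtual string FirstName
    {
        get { return _first; }
        set { _first = value == null ? null : value.Trim(); }
    }

    private string _first;
}

[TestFixture]
public class PersonTest
{
    [Test]
    public void FirstNameIsTrimmed()
    {
        Person p = new Person("Bryan");
        p.FirstName = "  Bryan  ";
        Assert.AreEqual("Bryan", p.FirstName);
    }
}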
Coverage Tips for Your Project
- Set goals for coverage: Talk to your team about coverage and gather their feedback early in the project. Identify areas that will be difficult to test and develop strategies to make your code more testable. Agree upon a level of acceptable coverage based on your timelines and these constraints. For most projects that start with TDD in mind, 70-80% is a very realistic target. I don't have any concrete data to back this up, but I imagine the effort required increases by orders of magnitude beyond a certain percentage.
- Watch for changes in coverage: Rather than looking at the overall code coverage percentage as a quality metric, integrate coverage into your build or continuous integration process and look at the change in coverage between builds (see the sketch after this list). Coverage will fluctuate as a project matures; eventually it should level out and remain relatively constant between changes. Applaud when it goes up, recognize the hard work of your team when it stays the same, and investigate when it takes a steep drop. As an added bonus, the integrated coverage logs on your build server can be analyzed over time: it's amazing how developer churn, ramp-up, and changes in functionality, design, or timelines become evident in a graphed timeline of failed builds and drops in coverage.
- Use Milestones: Whether you're in a waterfall or agile project, pick milestones where you can look at coverage. I try to fit in at least one code review per iteration and kick them off with a look at the code coverage reports ("Yikes! We don't have any tests for this entire namespace, maybe we should fix that.") When coverage is low, I use this time to evangelize the benefits of having tests. Set a goal for the next iteration and get buy-in from the team, management, and the client for well-written tests that bump up your coverage. It can be a fun motivator for the team.
- Don't Force It: If you obsess about coverage, you're probably doing it wrong. Deliberately reworking code so that it lights up in the coverage report, or writing coverage-serving tests, yields little benefit -- let coverage come naturally from writing concise tests. If your tests don't reflect the functionality of the application, fix your tests; tests that serve only to satisfy coverage likely don't serve anybody.
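As promised above, here's a minimal sketch of what watching for changes in coverage might look like as a build-server gate. Everything in it is an assumption: the file names, the plain-number report format, and the five-point threshold would all depend on your coverage tool and CI setup.

using System;
using System.IO;

public static class CoverageGate
{
    public static int Main()
    {
        // Hypothetical inputs: each file holds a single percentage, e.g. "78.4",
        // exported from your coverage tool by a previous build step.
        decimal previous = ReadPercent("coverage-previous.txt");
        decimal current = ReadPercent("coverage-current.txt");

        Console.WriteLine("Coverage: {0}% -> {1}%", previous, current);

        // Small fluctuation is normal as a project matures;
        // a steep drop is what warrants investigation.
        if (previous - current > 5m)
        {
            Console.Error.WriteLine("Coverage dropped more than 5 points -- failing the build.");
            return 1; // non-zero exit code fails the build
        }
        return 0;
    }

    private static decimal ReadPercent(string path)
    {
        return decimal.Parse(File.ReadAllText(path).Trim());
    }
}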