Whenever someone mentions that they want to bring a code-review process into a project, my first inclination is to have teams review the tests. Tests should be considered production assets that describe the purpose and responsibilities of the code, and developers should be making sure that their tests satisfy this goal. If the tests look good and they cover the functionality that is required, I don’t see much point in dwelling on the implementation: if the implementation sucks, you can refactor freely assuming you have proper coverage from your tests.
A unit test review process is a great way to ensure that developers are writing useful tests that help deliver high-quality code that can adapt to change. Since code-review processes have a tendency to create stylistic debates within a development group, having a checklist of 5-10 points to look for is a great way to keep things moving in the right direction.
Here are the criteria I'd like to see in a unit test. Hopefully you find this useful, too.
**Is the test well named?**
- Does the test name clearly represent the functionality being tested?
- Will someone with little or no knowledge of the component be able to decipher why the test exists?
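For example, a minimal sketch assuming NUnit and a hypothetical ShoppingCart class; the name spells out the scenario and the expected outcome so the test documents itself:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical subject, used only for illustration.
public class ShoppingCart
{
    private readonly List<decimal> _prices = new List<decimal>();
    public void Add(decimal price) => _prices.Add(price);
    public decimal Total => _prices.Sum();
}

[TestFixture]
public class ShoppingCartTests
{
    // A vague name like Test_Cart_Works forces the reader into the body;
    // stating the scenario and outcome makes the intent readable from the test runner.
    [Test]
    public void Total_WhenTwoItemsAreAdded_ReturnsSumOfTheirPrices()
    {
        var cart = new ShoppingCart();
        cart.Add(10.00m);
        cart.Add(5.50m);

        Assert.That(cart.Total, Is.EqualTo(15.50m));
    }
}
```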
**Is the test independent?**
- Does the test represent a single unit of work?
- Can the test be executed on its own, or is it dependent on the outcome of tests that preceded it? For example, a fixture that shares state between tests may inadvertently introduce side-effects.
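A sketch of that shared-state trap, again assuming NUnit and an illustrative Counter class: rebuilding the subject in a per-test SetUp keeps each test runnable on its own and in any order.

```csharp
using NUnit.Framework;

// Illustrative subject.
public class Counter
{
    public int Value { get; private set; }
    public void Increment() => Value++;
}

[TestFixture]
public class CounterTests
{
    private Counter _counter;

    // A single Counter field initialised once for the whole fixture would leak
    // state from one test into the next; a fresh instance per test removes the
    // dependence on execution order.
    [SetUp]
    public void CreateFreshCounter() => _counter = new Counter();

    [Test]
    public void Increment_SetsValueToOne()
    {
        _counter.Increment();

        Assert.That(_counter.Value, Is.EqualTo(1));
    }

    [Test]
    public void Increment_CalledTwice_SetsValueToTwo()
    {
        _counter.Increment();
        _counter.Increment();

        Assert.That(_counter.Value, Is.EqualTo(2));
    }
}
```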
**Is the test reliable?**
- Does the test produce the same outcome consistently?
- Does the test run without manual intervention to determine a successful outcome?
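Hidden non-determinism, such as reading the system clock, is a common source of unreliable tests. Here's a sketch (illustrative types, NUnit assumed) of injecting the clock so the outcome is the same on every run:

```csharp
using System;
using NUnit.Framework;

// Illustrative subject: takes the clock as a dependency instead of calling DateTime.Now.
public class InvoiceGenerator
{
    private readonly Func<DateTime> _now;
    public InvoiceGenerator(Func<DateTime> now) => _now = now;
    public DateTime DueDate() => _now().AddDays(30);
}

[TestFixture]
public class InvoiceGeneratorTests
{
    [Test]
    public void DueDate_IsThirtyDaysAfterCreation()
    {
        // A fixed clock makes the expected value stable; reading the real clock
        // inside the subject would make this test drift over time.
        var generator = new InvoiceGenerator(() => new DateTime(2011, 1, 1));

        Assert.That(generator.DueDate(), Is.EqualTo(new DateTime(2011, 1, 31)));
    }
}
```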
**Is the test durable?**
- Is the test designed so that it isn't susceptible to changes in other parts of the application? For example:
  - Does the test have complex or lengthy setup?
  - Will changes in the subject's dependencies lead to breakages?
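One way to guard against brittle setup (the names here are illustrative) is to push the construction noise into a single factory method, so that only one place changes when the subject's shape changes:

```csharp
using NUnit.Framework;

// Illustrative subject and input type.
public class Order
{
    public string Customer { get; set; }
    public decimal Amount { get; set; }
}

public class OrderValidator
{
    public bool IsValid(Order order) =>
        order.Amount > 0 && !string.IsNullOrEmpty(order.Customer);
}

[TestFixture]
public class OrderValidatorTests
{
    // A single well-known starting point absorbs changes to Order's shape,
    // so individual tests only state the detail they care about.
    private static Order ValidOrder() => new Order { Customer = "Acme", Amount = 100m };

    [Test]
    public void IsValid_ForOrderWithNoAmount_ReturnsFalse()
    {
        var order = ValidOrder();
        order.Amount = 0m;

        Assert.That(new OrderValidator().IsValid(order), Is.False);
    }
}
```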
**Are the assertions valid?**
- Does the test contain assertions that describe the functionality being tested?
- Do the assertions include helpful error messages that describe what should have happened if the test fails?
- Are any assertions redundant, such as testing features of the CLR (constructors or get/set properties)?
- Are the assertions specific to this test, or are they duplicated in other tests?
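A sketch of an assertion message that says what should have happened, rather than leaving a bare "expected False but was True" in the build output (NUnit assumed):

```csharp
using NUnit.Framework;

[TestFixture]
public class PasswordPolicyTests
{
    // Illustrative subject.
    private static bool MeetsMinimumLength(string password) => password.Length >= 8;

    [Test]
    public void MeetsMinimumLength_ForSevenCharacterPassword_ReturnsFalse()
    {
        bool result = MeetsMinimumLength("abcdefg");

        // The message explains the intent to whoever sees the failure.
        Assert.That(result, Is.False,
            "A 7-character password should be rejected by the 8-character minimum.");
    }
}
```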
**Is the test well structured and documented?**
- Does the test contain sufficient comments?
- Does the test highlight the subject under test and the functionality being exercised?
- Is the arrange/act/assert pattern clearly visible, so that the purpose of the arrange is obvious, the action is easy to identify, and the assertions are clear?
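A minimal sketch of that arrange/act/assert layout, with an illustrative subject:

```csharp
using NUnit.Framework;

[TestFixture]
public class DiscountCalculatorTests
{
    // Illustrative subject.
    private static decimal ApplyDiscount(decimal price, decimal percent) =>
        price - (price * percent / 100m);

    [Test]
    public void ApplyDiscount_OfTenPercent_ReducesPriceByTenPercent()
    {
        // Arrange: only the inputs that matter to this behaviour.
        decimal price = 200m;

        // Act: a single, easily identifiable call on the subject.
        decimal discounted = ApplyDiscount(price, 10m);

        // Assert: one clear expectation about the outcome.
        Assert.That(discounted, Is.EqualTo(180m));
    }
}
```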
**Is the test isolating responsibilities of the subject?**
- Does the test make any assumptions about the internals of dependencies that aren't immediately related to the subject?
- Is the test inaccurately or indirectly testing a responsibility or implementation of another object? If so, move the test to the appropriate fixture.
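A sketch using a test double (Moq is assumed here, and the types are illustrative): the test exercises the subject against the dependency's contract instead of re-testing the dependency's internals.

```csharp
using Moq;
using NUnit.Framework;

// Illustrative dependency contract and subject.
public interface IExchangeRateService
{
    decimal GetRate(string from, string to);
}

public class PriceConverter
{
    private readonly IExchangeRateService _rates;
    public PriceConverter(IExchangeRateService rates) => _rates = rates;

    public decimal Convert(decimal amount, string from, string to) =>
        amount * _rates.GetRate(from, to);
}

[TestFixture]
public class PriceConverterTests
{
    [Test]
    public void Convert_MultipliesAmountByTheSuppliedRate()
    {
        // The stub describes only the contract the subject relies on; how the
        // real service calculates rates belongs in that class's own fixture.
        var rates = new Mock<IExchangeRateService>();
        rates.Setup(r => r.GetRate("CAD", "USD")).Returns(1.5m);

        var converter = new PriceConverter(rates.Object);

        Assert.That(converter.Convert(10m, "CAD", "USD"), Is.EqualTo(15m));
    }
}
```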
**Are all scenarios covered?**
- Does the fixture test all paths of execution? Not just for the code that has been written, but also for scenarios that may have been missed:
  - invalid input?
  - failures in dependencies?
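As a sketch of those two easily-missed scenarios (NUnit and Moq assumed, types illustrative), one test covers invalid input and another covers a dependency that fails:

```csharp
using System;
using Moq;
using NUnit.Framework;

// Illustrative dependency and subject.
public interface IUserRepository
{
    void Save(string name);
}

public class UserRegistration
{
    private readonly IUserRepository _repository;
    public UserRegistration(IUserRepository repository) => _repository = repository;

    public bool Register(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required.", nameof(name));

        try { _repository.Save(name); return true; }
        catch (Exception) { return false; }
    }
}

[TestFixture]
public class UserRegistrationTests
{
    [Test]
    public void Register_WithBlankName_ThrowsArgumentException()
    {
        var subject = new UserRegistration(Mock.Of<IUserRepository>());

        Assert.Throws<ArgumentException>(() => subject.Register("  "));
    }

    [Test]
    public void Register_WhenRepositoryFails_ReturnsFalse()
    {
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.Save(It.IsAny<string>()))
                  .Throws(new InvalidOperationException());

        var subject = new UserRegistration(repository.Object);

        Assert.That(subject.Register("kyle"), Is.False);
    }
}
```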
**Does the test handle resources correctly?**
- If the test utilizes external dependencies (databases, static resources), is the state of these dependencies returned to a neutral state?
- Are resources allocated efficiently to ensure that the test runs as fast as possible:
  - effective setup/teardown?
  - no usages of Thread.Sleep()?
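A sketch of the setup/teardown point, using a temporary file as the external resource (names are illustrative): whatever the test creates, it removes, even when the test fails.

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ImportFileParserTests
{
    private string _tempFile;

    [SetUp]
    public void CreateTempFile()
    {
        // Each test gets its own file so tests don't fight over shared state.
        _tempFile = Path.GetTempFileName();
        File.WriteAllText(_tempFile, "id,name\n1,kyle");
    }

    [TearDown]
    public void DeleteTempFile()
    {
        // Return the environment to a neutral state regardless of the outcome.
        if (File.Exists(_tempFile))
            File.Delete(_tempFile);
    }

    [Test]
    public void ImportFile_ContainsHeaderAndOneRecord()
    {
        // Illustrative assertion; a real test would exercise the parser itself.
        string[] lines = File.ReadAllLines(_tempFile);

        Assert.That(lines.Length, Is.EqualTo(2));
    }
}
```

For asynchronous work, waiting on a signal (a WaitHandle, callback or task) keeps the test as fast as the code allows, whereas a Thread.Sleep() is only ever a guess.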
**Is the complexity of the test balanced?**
Tests need to strike a balance between Don't Repeat Yourself and clear examples of how to use the subject.
- Is the test too complicated or difficult to follow?
- Is there a lot of duplicated code that would lead to time-consuming effort if the subject changed?
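One way to strike that balance (illustrative names again): factor the construction noise into a well-named helper, but keep the values the assertion depends on visible inside the test itself.

```csharp
using NUnit.Framework;

// Illustrative subject.
public class TaxCalculator
{
    private readonly decimal _rate;
    public TaxCalculator(decimal rate) => _rate = rate;
    public decimal TaxOn(decimal amount) => amount * _rate;
}

[TestFixture]
public class TaxCalculatorTests
{
    // The helper hides construction details, but the rate still appears in the
    // test so the expected value can be read without hunting through the file.
    private static TaxCalculator CalculatorWithRate(decimal rate) => new TaxCalculator(rate);

    [Test]
    public void TaxOn_OneHundredDollars_AtFivePercent_IsFiveDollars()
    {
        var calculator = CalculatorWithRate(0.05m);

        Assert.That(calculator.TaxOn(100m), Is.EqualTo(5m));
    }
}
```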
What do you think? Are there criteria that you’d add? Things you disagree with?
Happy coding.
2 comments:
Does the test test the IoC container more than the functional piece of code to test?
Hey Kyle,
You and I are both painfully aware of the frustrations of IoC containers and their impact on tests.
My preference is to decouple my code from the IoC framework so that the tests don't need to concern themselves with how the subject was constructed.
If your tests are testing features of the container, that's better served as a unit test for the container. I think this is implied in "isolating responsibilities of the subject".