The idea behind test-driven development is that you write the test first. Since all code must reside in a method, the very first step before you can write any code is to name the test. If you're new to TDD, you'll find this to be a very difficult thing to do. Don't let this discourage you; I'd go so far as to say that out of all the tasks a developer must accomplish, finding names for things is perhaps the most difficult. W.H. Auden's statement shows that this "meta" problem transcends development:
Proper names are poetry in the raw. Like all poetry they are untranslatable. ~W.H. Auden
This raises a question that comes up frequently for developers starting out with TDD, as well as for experienced developers during code review: "Is there a naming convention or set of guidelines for unit tests?" Some believe it to be a black art, but I think it's more like acquiring a rhythm and following along. Once you've got the rhythm, it gets easier.
Prior to diving into the guidelines, let's clear up some basic vocabulary (a short sketch illustrating these attributes follows the list):
- Target / Subject: I often use the term "Target" or "Subject" to refer to the piece of functionality that I'm testing.
- Fixture: Synonymous with "TestFixture", a fixture is a class that contains a set of related tests. Fixtures are classes that have been decorated with the [TestFixture] attribute.
- Suite: Test Suites are an older style of organizing tests. They're specialized fixtures that programmatically define which Fixtures or Tests to run. NUnit supports them for backward compatibility through the [Suite] attribute. Since NUnit dynamically finds all classes with the [TestFixture] attribute, Suites aren't as popular these days.
- Test: You may have noticed that I capitalize Test in all my entries. Tests are methods within the Fixture that are decorated with the [Test] attribute and contain code that validates the functionality of our target.
- Setup/TearDown: Test Fixtures can designate a special piece of code to run before every Test within that Fixture. That method is decorated with the [SetUp] attribute. Likewise, a method with the [TearDown] attribute is called at the end of every Test within a Fixture.
- Fixture Setup/Fixture TearDown: Similar to constructors and finalizers, methods with the [TestFixtureSetUp] or [TestFixtureTearDown] attributes execute before and after the Fixture runs. These methods execute before the first [SetUp] and after the last [TearDown].
- Category: The [Category] attribute, when applied to a method, associates the Test with a user-defined category.
- Ignore: Tests with the [Ignore] attribute are skipped over when the Tests are run.
- Explicit: Tests with the [Explicit] attribute won't run unless you manually run them.
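To ground the vocabulary, here's a minimal sketch of a Fixture exercising these attributes. The BankAccount type and all member names are hypothetical, invented purely for illustration:

```csharp
using NUnit.Framework;

// Hypothetical target type, included so the sketch compiles
public class BankAccount
{
    public decimal Balance { get; private set; }
    public void Deposit(decimal amount) { Balance += amount; }
}

[TestFixture]
public class BankAccountFixture
{
    private BankAccount _account;

    [TestFixtureSetUp]
    public void FixtureSetup()        // runs once, before any Test in the Fixture
    {
        // expensive one-time initialization goes here
    }

    [SetUp]
    public void Setup()               // runs before every Test
    {
        _account = new BankAccount();
    }

    [Test]
    public void CanDepositFunds()
    {
        _account.Deposit(100m);
        Assert.AreEqual(100m, _account.Balance);
    }

    [Test, Category("Stateful")]      // grouped into a user-defined category
    [Ignore("Illustrative only: requires a database")]
    public void CanPersistAccount()
    {
    }

    [TearDown]
    public void TearDown()            // runs after every Test
    {
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()     // runs once, after all Tests
    {
    }
}
```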
The following are some suggestions I've adopted or recommended to others from past projects. Feel free to take 'em at face value, or leave a comment if you have some to add:
Fixtures
DO: Name Fixtures consistently
TestFixtures should follow a consistent naming convention to make tests easier to find. Choose a naming convention such as <TargetType>Fixture or <TargetType>Test and stick to it.
DO: Mimic namespaces of Target Code
To help keep your tests organized, use the same folder and namespace structure as your target assembly. This will help you locate the tests for a target type and vice versa. Since most Test runners group Tests by their namespace, it's really easy to run all the tests for a specific namespace by selecting the container folder -- which is great for regression testing an area of code. I've got another post that talks about how to structure your Test namespaces.
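For example, one common layout mirrors the production namespaces one-for-one; all assembly, namespace, and type names below are hypothetical:

```csharp
using NUnit.Framework;

// Production assembly: MyApp.Core
namespace MyApp.Core.Services
{
    public class LoginService { /* ... */ }
}

// Test assembly: MyApp.Core.Tests -- same folder structure,
// with a Tests suffix on the root namespace
namespace MyApp.Core.Tests.Services
{
    [TestFixture]
    public class LoginServiceFixture { /* ... */ }
}
```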
DO: Name Setup/TearDown methods consistently
Pick a style for your fixture setup and teardown methods and stick with it. Personally, I can't find any reason to deviate from naming these methods FixtureSetup, FixtureTearDown, Setup, and TearDown, as these names are clear. Following a standard TestFixture structure cuts down on visual noise, makes tests easier to read, and produces more maintainable tests across multiple developers.
CONSIDER: Separating your Tests from your Production Code
As a general rule, you should separate your tests from your production code. If you have a requirement to test in production or verify at the client's site, you can accomplish this simply by bundling the test library with your release. Still, every project is different, and tests won't necessarily impede production beyond bloating your assembly. Separate when needed, and use your gut to tell you when you should.
CONSIDER: Deriving common Fixtures from a base Fixture
In scenarios where you are testing sets of common classes or when tests share a great deal of duplication, consider creating a base TestFixture that your Fixtures can inherit.
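A minimal sketch of the idea; DatabaseFixtureBase and the OnSetup hook are hypothetical names. The virtual hook lets derived Fixtures extend setup without re-declaring [SetUp]:

```csharp
using NUnit.Framework;

// Hypothetical base Fixture that centralizes shared setup/teardown
public abstract class DatabaseFixtureBase
{
    [SetUp]
    public void Setup()
    {
        OpenConnection();
        OnSetup();                   // hook for derived Fixtures
    }

    [TearDown]
    public void TearDown()
    {
        CloseConnection();
    }

    // Empty by default; derived Fixtures override as needed
    protected virtual void OnSetup() { }

    private void OpenConnection() { /* ... */ }
    private void CloseConnection() { /* ... */ }
}

[TestFixture]
public class CustomerRepositoryFixture : DatabaseFixtureBase
{
    protected override void OnSetup()
    {
        // seed test data specific to this Fixture
    }

    [Test]
    public void CanRetrieveCustomerById()
    {
        // ...
    }
}
```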
CONSIDER: Using Categories instead of Suites or Specialized Tests
Although Suites can be used to organize Tests of similar functionality together, Categories are the preferred method. Suites represent significant developer overhead and maintenance. Likewise, creating specialized folders to contain tests (e.g., "Database Tests") also creates additional effort, as tests for a particular Type become spread over the test library. Categories offer a unique advantage in the UI and at the command line: you can specify which categories should be included in or excluded from execution. For example, you could execute only "Stateful" tests against an environment to validate a database deployment.
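As a sketch, assuming a hypothetical "Stateful" category and the NUnit 2.x console runner's /include and /exclude switches:

```csharp
using NUnit.Framework;

[TestFixture]
public class PreferenceServiceFixture
{
    // Runs only when the "Stateful" category is included
    [Test, Category("Stateful")]
    public void CanPersistPreferences() { /* ... */ }
}

// From the command line you can then filter by category:
//   nunit-console /include:Stateful MyApp.Tests.dll
//   nunit-console /exclude:Stateful MyApp.Tests.dll
```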
CONSIDER: Splitting Test Libraries into Multiple Assemblies
From past experience, projects go to great lengths to separate tests from code but don't place much emphasis on how to structure Test assemblies. Often a single Test library is created, which is suitable for most projects. However, for large-scale projects that can have hundreds of tests, this approach can become difficult to manage. I'm not suggesting that you should religiously enforce test structure, but there may be logical motivators to divide your test assemblies into smaller units, such as grouping tests with third-party dependencies or as an alternative to using Categories. Again, separate when needed, and use your gut to tell you when you should. (You can always go back.)
AVOID: Empty Setup methods
As a best practice, you should only write the methods that you need today. Adding methods for future purposes only creates visual noise and maintenance overhead. The exception is when you're creating a base Fixture with empty methods that are meant to be overridden by derived classes.
Tests
DO: Name Tests after Functionality
The test name should match a specific unit of functionality for the target type being tested. Some key questions to ask yourself: "What is the responsibility of this class?" "What does this class need to do?" Think in terms of action words. Well-written test names provide guidance when a test fails. For example, a test named CanDetermineAuthenticatedState provides more direction about how authentication states are examined than one named Login.
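As a sketch (SecurityService and its members are invented for illustration):

```csharp
using NUnit.Framework;

// Hypothetical target
public class SecurityService
{
    public bool IsAuthenticated { get; private set; }
    public void Login(string user, string password) { IsAuthenticated = true; }
}

[TestFixture]
public class SecurityServiceFixture
{
    // Named for the behavior being verified, not the method that implements it
    [Test]
    public void CanDetermineAuthenticatedState()
    {
        var service = new SecurityService();
        service.Login("user", "password");
        Assert.IsTrue(service.IsAuthenticated);
    }
}
```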
DO: Document your Tests
You can't assume that all of your tests will be intuitive for everyone who reviews them. Most tests require special knowledge about the functionality you're testing, so a little documentation to explain what the test is doing is helpful. Using XML documentation syntax might be overkill, but a few comments here and there are often just the right amount to help the next person understand what you need to test and how your test demonstrates that functionality.
CONSIDER: Using a "Cannot" Prefix for Expected Exceptions
Since Exceptions are typically thrown when your application is asked to perform something it wasn't designed to do, prefix "Cannot" to tests that are decorated with the [ExpectedException] attribute. Some examples: CannotAcceptNullArguments, CannotRetrieveInvalidRecord.
I would consider this a "DO" recommendation, but it's a personal preference. I can't think of a scenario where this isn't the case, so this one is up for debate.
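A sketch using NUnit 2.x's [ExpectedException]; RecordService is a hypothetical target:

```csharp
using System;
using NUnit.Framework;

// Hypothetical target
public class RecordService
{
    public void Save(object record)
    {
        if (record == null) throw new ArgumentNullException("record");
    }
}

[TestFixture]
public class RecordServiceFixture
{
    // Passes only if the expected exception is thrown
    [Test]
    [ExpectedException(typeof(ArgumentNullException))]
    public void CannotAcceptNullArguments()
    {
        new RecordService().Save(null);
    }
}
```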
CONSIDER: Using prefixes for Different Scenarios
If your application has features that differ slightly for application roles, it's likely that your test names will overlap. Some teams have adopted a For<Scenario> syntax (CanGetPreferencesForAnonymousUser). Other teams have adopted an underscore prefix _<Scenario> (AnonymousUser_CanGetPreferences).
AVOID: Ignore Attributes with no explanation
Tests marked with the Ignore attribute should include the reason the test has been disabled. Eventually, you'll want to circle back on these tests and either fix them or alter them so that they can be used. Without an explanation, the next person will have to do a lot of investigative work to figure out that reason. In my experience, most tests with the Ignore attribute are never fixed.
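For example (the names and reason string are illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
public class SchedulerFixture
{
    // The reason string surfaces in the runner's output, sparing the next
    // developer the investigative work
    [Test]
    [Ignore("Disabled until recurring-event support is reworked")]
    public void CanScheduleRecurringEvents() { /* ... */ }
}
```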
AVOID: Naming Tests after Implementation
If you find that your tests are named after the methods within your classes, that's a code smell that you're testing your implementation instead of your functionality. If you changed your method name, would the test name still make sense?
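A quick contrast, with hypothetical names:

```csharp
using NUnit.Framework;

[TestFixture]
public class UserProfileFixture
{
    // Smell: named after the implementation; breaks if GetUserById is renamed
    [Test]
    public void TestGetUserById() { /* ... */ }

    // Better: named after the functionality being verified
    [Test]
    public void CanRetrieveExistingUserProfile() { /* ... */ }
}
```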
AVOID: Using underscores as word-separators
I've seen tests that use_underscores_as_word_separators_for_readability, which is so-o-o 1960. PascalCase should suffice. Imagine all the time you save not holding down the shift key.
AVOID: Unclear Test Names
Sometimes we create tests for bugs that are caught late in the development cycle, or tests to demonstrate requirements based on lengthy requirements documentation. As these are usually pretty important tests (especially for bugs that creep back in), it's important to avoid giving them vague names that represent some external requirement, like FixForBug133 or TestCase21.
Categories
DO: Limit the number of Categories
Using Categories is a powerful way to dynamically separate your tests at runtime; however, their effectiveness is diminished when there are so many that developers are unsure which Category to use. Keep the list small and well-defined.
CONSIDER: Defining Custom Category Attributes
As Categories are sensitive to case and spelling, you might want to consider creating your own Category attributes by deriving from CategoryAttribute. UPDATE: Read more about custom NUnit Categories.
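A minimal sketch, assuming NUnit 2.4 or later, where attributes derived from CategoryAttribute are recognized and the protected parameterless constructor takes the category name from the class name ("Stateful" below is a hypothetical category):

```csharp
using System;
using NUnit.Framework;

// Compile-checked category: no magic strings to misspell
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class StatefulAttribute : CategoryAttribute { }

[TestFixture]
public class PreferenceServiceFixture
{
    [Test, Stateful]   // equivalent to [Test, Category("Stateful")]
    public void CanPersistPreferences() { /* ... */ }
}
```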
Well, that's all for now. Are you doing things differently, or did I miss something? Feel free to leave a comment.
Updates:
- 6/18/08 - Added links to custom NUnit Categories
This is actually really good. It's a great starting point when establishing unit test standards in your organization.
w/r/t: "AVOID: Using underscores as word-separators"
Saving time by not having to hit the Shift key is so not a reasonable argument against a naming style that increases readability as much as names_with_underscores_do. Especially if you use something like AutoHotKey (http://www.jpboodhoo.com/blog/BDDAutoHotKeyScriptUpdateTake2.aspx) to replace a space with an underscore for you! :)
I'm not saying that PascalCasing is bad or that it should be avoided, I'm only saying that using underscores is equally viable... and IMO, a more readable style. I find this especially true when used with some of the newer BDD frameworks that can generate HTML reports from your Test/Spec names.
Steve, I probably could have elaborated better on the underscore recommendation. I'll admit the motivator of not having to hold down the shift key is poor.
My motivation is that tests differ from your production API in that Test names have a tendency to change over their lifetime. As a personal preference, I find methods with underscores clunky, and I rely on tools to create them for me (creating event handlers, etc.) rather than typing them by hand.
Honestly, any style will do (compared to not writing tests) and if you have tools that work well with that style, then you might enforce that style as a recommendation within your organization.
I should probably amend the list, and replace that AVOID with a recommendation to pick a consistent style that is easily understood and flexible to change.
>> replace that AVOID with a recommendation to pick a consistent style that is easily understood and flexible to change.
Now that is something I can get on board with. :)
How does TDD influence the way we write objects? Could you give some insight on this as well please?
That's an interesting question, Afzi: TDD does impact the way we write code, but expressing it as do's and don'ts would fall under a different category than those listed here. I'll chew on that; maybe I'll post something in the future.
Generally speaking, I think TDD discourages "blackbox" code, where code is organized to do anything and everything without sharing any details of how it will perform those tasks. With TDD, in contrast, the focus is on writing testable code, so you end up with open-ended, cohesive sets of specialized objects. Each set of objects can be tested independently of the others and can also be substituted with test versions to assist in testing larger related components. TDD and Inversion of Control / Dependency Injection go very well together.
You need to update your list of points above. You should refer to "TestFixtureSetup/TestFixtureTearDown" instead of "SetupFixture/TearDownFixture".
ReplyDeleteThanks for pointing that out cliff. I wish my blog writer had intellisense.