When I’m writing code, my preference is to follow test-driven development techniques where I’m writing tests as I go. Ideally, each test fixture focuses attention on one object at a time, isolating its behavior from its dependencies.
While unit tests provide us with immediate feedback about our progress, it would be foolish to deploy a system without performing some form of integration test to ensure that the system’s components work as expected when pieced together. Often, integration tests focus on a cohesive set of objects in a controlled environment, such as restoring a database after the test.
Eventually, you’ll need to bring all the components together and test them in real-world scenarios. The best place to bring all these components together is the live system or staged equivalent. The best tool for the job is a human inspecting the system. Period.
Wait, I thought this was supposed to be about functional testing? Don’t worry, it is.
Humans may be the best tool for the job, but if you consider the effort involved in code freezes, build reports, packaging and deployment, verification, and coordination with the client and waiting testing teams, using humans for testing can be really expensive. This is especially true if you deliver a failed build to your testing team: the testers who’ve been queued up are now unable to test, and must wait for the next build. Total up the hours across the entire team and you’re losing a day or more in scheduled cost.
Functional tests can help prevent this loss in production.
In addition, humans possess an understanding of what the system should do, as well as what previous versions did. Humans are users of the system and can contribute greatly to the overall quality of the product. However, once a human has tested and validated a feature, revisiting these tests in subsequent builds becomes more of a check than a test. Going back and checking these features becomes increasingly difficult to accomplish on short timelines as the complexity of the system grows. Invariably, shortcuts are taken, features are missed and subtle, aggravating bugs silently sneak into the system. While separation of concerns and good unit tests can reduce the need for full regression tests, the value of system-wide integration tests for repetitive tasks shouldn’t be discounted.
Functional tests can help here too, but this is largely the fruit of my first point about validating builds. Most organizations can’t capitalize on it simply because they haven’t got the base to build up from.
Functional tests take a lot of criticism, however. Let’s address some common (mis)beliefs.
Duplication of testing efforts / Diminishing returns. Where teams have invested in test-driven development, tests tend to focus on the backend code artifacts as these parts are the core logic of the application. Using mocks and stubs, the core logic can be tested extremely well from the database layer and up, but as unit tests cross the boundary from controller logic into the user-interface layer, testing becomes harder to simulate: web applications need to concern themselves with server requests; desktop applications have to worry about things like screen resolution, user input and modal dialogs. In such team environments, testing the user-interface isn’t an attractive option since most bugs, if any, originate from the core logic that can be covered by more unit tests. From this perspective, adding functional tests wouldn’t provide enough insight to outweigh the effort involved.
I’d agree with this perspective if the functional tests were trying to aggressively interrogate the system at the same level of detail as their backend equivalents. Unlike unit tests, functional tests are focused on emulating what the user does and sees, not on the technical aspects under the hood. They operate in the live system, providing a comprehensive view of the system that unit tests cannot. In the majority of cases, a few simple tests that follow the happy path of the application may be all you need to validate the build.
Moreover, failures at this level point to problems in the build, packaging or deployment – something well beyond a typical unit test’s reach.
Functional tests are too much effort / No time for tests. This is a common view of testing in general, based on the flawed assumption that testing should follow after development work is done. In this argument, testing is seen as “double the effort”, which is an unfair position if you think about it. If you treat testing as a separate task and wait until the components are fully written, then without a doubt writing tests becomes an exercise in reverse-engineering and will always be more effort.
Functional tests, like unit tests, should be brought into the development process. While there is some investment required to get your user-interface and its components into a test-harness, the effort to add new components and tests (should) become an incremental task.
Functional tests are too brittle / Too much maintenance. Without doubt, the user-interface can be the most volatile part of your application, as it is subject to frequent cosmetic changes. If you’re writing tests that depend on the contract of the user-interface, it shouldn’t be a surprise that they’re going to be impacted when that interface changes. Claiming that your tests are the source of extra effort because of changes you introduced is an indication of a problem in your approach to testing.
Rather than reacting to changes, anticipate them: if you have a change to make, use the tests to introduce that change. There are many techniques to accomplish this (and I may have to blog about that later), but here’s an example: to identify tests that are impacted by your change, try removing the part that needs to change and watch which tests fail. Find a test that resembles your new requirement and augment it to reflect the new requirements. The tests will fail at first, but as you add the new requirements, they’ll slowly turn green.
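As a hypothetical illustration, suppose a new requirement adds a discount to the product editor shown later in this post. The sketch below reuses the App and ProductEditorComponent names from that example; the Discount field itself is invented for the sake of the illustration. The idea is to augment the round-trip test before touching the screen, so the test drives the change:

```csharp
// Sketch only: App, Product and ProductEditorComponent come from the
// harness example later in this post; Discount is a hypothetical new field.
[Test]
public void ProductEditorRoundTripsTheNewDiscountField()
{
    using (var app = new App())
    {
        app.Login("user1", "p@ssw3rd");

        // Augmented first: the test fails until the Discount control
        // and its accessor exist on the screen and in the component.
        var product = new Product { Id = 2, Name = "Bar", Discount = 0.10m };
        app.CreateNewProduct(product);

        ProductEditorComponent editor = app.OpenProductEditor("Bar");

        // Turns green once the screen round-trips the new field.
        Assert.AreEqual(product, editor.GetEntity());
    }
}
```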
As an added bonus, when you debug your code through the automated test, you won’t have to endure repetitive user-actions and keystrokes. (Who has time for all the clicking??)
This approach works well in agile environments where stories are focused on adding or changing features for an iteration and changes to the user-interface are expected.
Adding Functional Testing to your Regime
Develop a test harness
The first step to adding functional testing into your project is the development of a test harness that can launch the application and get it into a ready state for your tests. Depending on the complexity of your application and how far you want to take your functional tests, this can seem like the largest part. Fortunately, most test automation products provide a “recorder” application that can generate code from user activity, which can jump start this process. While these tools make it easy to get started, they are really only suitable for basic scenarios or for initial prototyping. As your system evolves, you quickly find that the duplication in these scripts becomes a maintenance nightmare.
To avoid this issue, you’ll want to model the screens and functional behavior of your application into modular components that hide the implementation details of the recorder tools’ output. This approach shields you from having to re-record your tests and makes it easier to apply changes to tests. The drawback to this approach is that it may take some deep thinking on how to model your application, and it will seem as though you’re writing a lot of code to emulate what your backend code already does. However, once this initial framework is in place, it becomes easier to add new components. Eventually, you reach a happy place where you can write new tests without having to record anything.
The following example illustrates how the implementation details of a product editor are hidden from the test, but the user actions are clearly visible:
```csharp
[Test]
public void CanOpenAnExistingProduct()
{
    using (var app = new App())
    {
        app.Login("user1", "p@ssw3rd");

        var product = new Product() { Id = 1, Name = "Foo" };

        // opens the product editor,
        // fills it with my values,
        // saves it, closes it.
        app.CreateNewProduct(product);

        // open the dialog, find the item
        ProductEditorComponent editor = app.OpenProductEditor("Foo");

        // retrieves the settings of the product from the screen
        Product actual = editor.GetEntity();

        Assert.AreEqual(product, actual);
    }
}
```
Write Functional Unit Tests for Screen Components
Once you’ve got a basic test-harness, you should consider developing simple functional tests for user-interface components as you add them to your application. If you can demo it to the client, you're probably ready to start writing functional tests. A few notes to consider at this stage:
- Be pragmatic! Screen components that are required as part of base use cases will have more functional tests than non-essential components.
- Consider pairing developers with testers. As the developer builds the UI, the tester writes the automation tests that verify the UI’s functionality. Testers may guide the development of the UI to include automation ids, which reduces the amount of reverse-engineering.
- Write tests as new features or changes are introduced. No need to get too granular, just verify the essentials.
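To make the automation-id point concrete, here is a minimal, self-contained sketch of a screen component. The IScreenDriver interface and the id strings are hypothetical stand-ins for whatever your automation library actually exposes; the point is that the component owns the ids, so the tests never mention them.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical abstraction over your UI automation library's driver.
public interface IScreenDriver
{
    void SetText(string automationId, string value);
    void Click(string automationId);
}

// The component hides the automation ids and raw driver calls,
// exposing intent-revealing methods for tests to call.
public class LoginComponent
{
    private readonly IScreenDriver driver;

    public LoginComponent(IScreenDriver driver)
    {
        this.driver = driver;
    }

    public void LoginAs(string userName, string password)
    {
        driver.SetText("login.userName", userName);
        driver.SetText("login.password", password);
        driver.Click("login.submit");
    }
}
```

If the developer bakes those automation ids into the UI as it’s built, the tester can write this wrapper in parallel instead of reverse-engineering the control tree afterwards.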
Verify Build Process with Functional Sanity Tests
While your functional unit tests concentrate on the behaviors of individual screen components, you’ll want to augment your build process with tests that demonstrate common user-stories that can be used to validate the build. These tests mimic the minimum happy path.
If you’re already using a continuous integration server to run unit tests as part of each build, functional tests can be included at this stage but can be relegated to nightly builds or to the release process for your quality assurance team.
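One way to keep the slower functional suite out of every commit build, assuming NUnit as in the earlier example, is to tag those fixtures with a category and let the build server filter on it (the name “Functional” is just a convention, not anything NUnit knows about):

```csharp
using NUnit.Framework;

// Tagging the fixture lets the CI server include or exclude the whole
// suite: the commit build excludes the "Functional" category, while the
// nightly or release job includes it.
[TestFixture]
[Category("Functional")]
public class ProductEditorSanityTests
{
    [Test]
    public void HappyPath_CreateAndReopenProduct()
    {
        // minimal happy-path steps against the test harness go here
    }
}
```

With NUnit 3, for example, the nightly job can select the suite with the console runner’s filter, along the lines of `--where "cat == Functional"`.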
Augment QA Process
As noted above, humans are a critical part of the testing of our applications and that’s not likely to change. However, the framework that we used to validate the build can be reused by your testing team to write automation tests for their test cases. Ideally, humans verify the stories manually, then write automation tests to represent regression tests.
Tests that require repetitive or complex time consuming procedures are ideal candidates for automation.
Conclusion
Automated functional testing can add value to your project for build verification and regression testing. Being pragmatic about the components you automate and vigilant in your development process to ensure the tests remain in sync are the keys to their success.
How does your organization use functional testing? Where does it work? What’s your story?