The Testing Feasibility Matrix

One of the toughest questions in projects that contain legacy code, where the team would like to write unit tests for part of that code (because it may need to change), is: “Where should I start?”
When one of my clients faced this question a while ago, I came up with a small formula to help understand where we should focus our testing efforts. I call it the “Test Feasibility Matrix” (TFM - not to be confused with RTFM :) )
The most obvious way to approach testing legacy code components is to think of them along at least two axes: how much logic is in there (the Logical Complexity (LO) factor), and how many dependencies I would have to break in order to bring the class under test (does it communicate with an outside email component, perhaps? Does it call a static “Log” method somewhere?). I call this second factor the “Dependency Level” (DL) factor.
The consideration in the most obvious method (which I call “Complexity-Oriented Incentive”) is simple: the more complex a component is, and the easier it is to test, the more I’d like to test it. That means a validation class with simple methods that rely on nothing but contain a lot of logic carries more testing incentive than, say, a data-layer class that talks to a database and has less logic but more dependencies.
One could go about mapping each component considered for testing in this sort of chart:
By simply placing each component’s name in the location most suitable for it, you can draw a distinction between components with higher and lower test incentive. You should of course also take into account things such as feature priority, but that’s the basic idea.
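To make the mapping concrete, here is a minimal Python sketch of the matrix idea; the component names and the 1-10 scores for LO and DL are made up purely for illustration:

```python
# A minimal sketch of the Test Feasibility Matrix. Each component gets two
# made-up 1-10 scores:
#   lo - Logical Complexity (how much logic lives inside)
#   dl - Dependency Level (how many dependencies must be broken to test it)
components = {
    "InputValidator": {"lo": 8, "dl": 2},  # lots of logic, few dependencies
    "EmailSender":    {"lo": 4, "dl": 6},
    "MailServer":     {"lo": 5, "dl": 8},
    "DataLayer":      {"lo": 3, "dl": 9},  # little logic, talks to a database
}

# Complexity-oriented incentive: favor high logic and a low dependency level,
# so the most valuable *and* easiest-to-write tests come first.
ranked = sorted(components,
                key=lambda name: components[name]["lo"] - components[name]["dl"],
                reverse=True)
print(ranked)  # ['InputValidator', 'EmailSender', 'MailServer', 'DataLayer']
```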
I said earlier that this is “the most obvious” way. Let’s consider something a little less obvious. One of the side effects of testing a component that has many dependencies is that you end up breaking those dependencies, leaving behind a better, more modular design. You might have more interfaces, or some new “injection points” where you can insert different behavior for various components. In essence, the component will be much easier to test the next time around. Not only that - some of the component’s dependencies, which are components themselves, will now be easier to test as well. This does not always apply, but it does most of the time. That means the more you test components with a high dependency level, the easier you make it to test other highly dependent components later on.
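As a rough illustration of what such an injection point looks like, here is a hypothetical MailSender whose logger and mail server arrive through the constructor; every name here is invented for the example, not taken from any real project:

```python
# After breaking the dependencies: the logger and mail server are no longer
# reached through static calls, but handed in via the constructor (an
# "injection point"), so a test can substitute fakes for the real things.
class MailSender:
    def __init__(self, logger, mail_server):
        self._logger = logger
        self._mail_server = mail_server

    def send_welcome(self, address):
        self._logger.write(f"sending welcome mail to {address}")
        self._mail_server.send(address, "Welcome!")

# In a test, simple stand-ins replace the real logger and server:
class FakeLogger:
    def write(self, message):
        pass  # swallow log output in tests

class FakeMailServer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))  # record instead of really sending

server = FakeMailServer()
MailSender(FakeLogger(), server).send_welcome("a@b.com")
assert server.sent == [("a@b.com", "Welcome!")]
```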
If you put the testing effort required over the project’s lifetime into a graph, it might look a little like this:
This means that with a complexity-oriented incentive we leave the “best” for last - we tackle the hardest-to-test components at the end. This is a good plan if you want to make sure you get *something* done without bogging development down in hard-to-realize testing scenarios. But you may be leaving the toughest part (which may be mostly unknown) for the project crunch time right before release.
Now, think what would happen if you went the other way around and tried to tackle the hardest parts first. This is what I call “Dependency-Oriented Incentive”, where you decrease future dependencies as a calculated effort. The graph might look like this for your components (notice the dependency levels along the bottom are now reversed, from 10 down to 1):
This kind of planning can lead you to a development effort that looks somewhat like this:
Notice that this time the stretch of “large effort” is smaller (not as wide) as in the opposite diagram. That’s because, like dominoes, dependencies depend on each other, and the harder you hit them early on, the more dependencies you remove from multiple components at the same time. In the opposite approach you go around the problem and test components that might have been much easier to test after you had broken the dependencies of the tougher ones. For example, an EmailSender might be much easier to test after you’ve tested the MailServer component. In the two approaches I’ve outlined, you’d test them in different orders.
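Continuing the earlier sketch (same made-up component names and scores), a dependency-oriented ordering simply ranks by DL, highest first:

```python
# Dependency-oriented incentive: tackle the highest Dependency Level first,
# so later components inherit the injection points created along the way.
components = {
    "InputValidator": {"lo": 8, "dl": 2},
    "EmailSender":    {"lo": 4, "dl": 6},
    "MailServer":     {"lo": 5, "dl": 8},
    "DataLayer":      {"lo": 3, "dl": 9},
}

ranked = sorted(components, key=lambda name: components[name]["dl"], reverse=True)
print(ranked)  # ['DataLayer', 'MailServer', 'EmailSender', 'InputValidator']
# MailServer now comes before the EmailSender that depends on it, so by the
# time EmailSender is tackled, its hardest dependency already has seams for fakes.
```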
To Summarize:
I don’t have a “favorite” among these two approaches. They both fit well on a per-project, per-team basis. Use them as a means to understand where your priorities lie when approaching unit testing. Comments are welcome!
