
Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

One of the talks I gave at QCon London was about a subject I came across fairly recently, while building SilverUnit – a “pure” unit test framework for Silverlight objects that depend on the Silverlight runtime to run.

It is the concept of “cogs in the machine” – when your piece of code needs to run inside a host framework or runtime over which you have little or no control where testability is concerned. Examples of such cogs and machines can be:

  • your custom control running inside the Silverlight runtime in the browser
  • your plug-in running inside an IDE
  • your activity running inside a Windows Workflow
  • your code running inside a Java EE bean
  • your code inheriting from a COM+ (Enterprise Services) component
  • etc.

Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system.

For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don’t implement some Silverlight interface, and they don’t just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

Wrapping it up?

An inheritance dependency is uniquely challenging to bring under test, because “classic” methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects to which all the possible actions from the dependencies are delegated.

In Silverlight’s case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of those controls. The upside is that you get the benefit of understanding the “contract” and the “roles” your system plays against your logic; unfortunately, more often than not, such classes can be very tedious to create, and may sometimes feel unnecessary or like code duplication.
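As a rough sketch of the “parallel testable object” idea (in Java for brevity, since this blog’s Silverlight examples are C#; all names here are hypothetical):

```java
// Sketch of a parallel testable logic class. HostBase stands in for the
// framework base class you can't avoid inheriting (DependencyObject in
// the post's Silverlight example); ZoomLogic and ZoomControl are
// illustrative names, not a real API.
public class ParallelLogicSketch {

    // Plain logic class: no framework types, testable in isolation.
    public static class ZoomLogic {
        private double zoom = 1.0;
        public double zoomIn() {
            zoom = Math.min(zoom * 2.0, 8.0); // business rule: clamp at 8x
            return zoom;
        }
    }

    // Stand-in for the framework base class the real control inherits.
    public static class HostBase { /* heavy runtime coupling lives here */ }

    // The control stays thin: it only forwards calls to the logic class.
    public static class ZoomControl extends HostBase {
        private final ZoomLogic logic = new ZoomLogic();
        public void onZoomButtonClicked() {
            double z = logic.zoomIn();
            // ...update the host UI with z; untested glue only...
        }
    }

    public static void main(String[] args) {
        ZoomLogic logic = new ZoomLogic();  // no host runtime required
        System.out.println(logic.zoomIn()); // prints 2.0
    }
}
```

The tedium the post mentions comes from keeping the delegating control and the logic class in sync by hand.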

About perimeters

A perimeter is the invisible line that you draw around your pieces of logic during a test, separating the code under test from any dependencies it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods, etc.) that are actually replaced for that test or for all the tests.

Role based perimeters

In the case of creating a wrapper around an object, one really creates a “role based” perimeter around the logic being tested – the wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test.
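A minimal sketch of such a role based perimeter (Java, with hypothetical names) might look like this – the logic under test sees only a “role” interface, and the wrapper implements that role against the host system:

```java
// Sketch of a role-based perimeter. SessionClock is the "role" the code
// under test needs from its surroundings; HostBackedClock is the wrapper
// that implements it by talking to the real environment. All names are
// illustrative.
public class RolePerimeterSketch {

    // The role the code under test requires.
    public interface SessionClock { long millisSinceStart(); }

    // Code under test: sees only the role, never the host system.
    public static class TimeoutPolicy {
        private final SessionClock clock;
        public TimeoutPolicy(SessionClock clock) { this.clock = clock; }
        public boolean isExpired() { return clock.millisSinceStart() > 30_000; }
    }

    // Production wrapper: implements the role against the real host
    // (approximated here with the system clock for the sketch).
    public static class HostBackedClock implements SessionClock {
        private final long start = System.currentTimeMillis();
        public long millisSinceStart() { return System.currentTimeMillis() - start; }
    }

    public static void main(String[] args) {
        // In a test, the perimeter is just a fake role implementation:
        TimeoutPolicy expired = new TimeoutPolicy(() -> 31_000);
        System.out.println(expired.isExpired()); // prints true
    }
}
```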

In the image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven’t wrapped it in anything yet).


In the image below is what happens when you wrap your logic with a role based wrapper – you get a role based perimeter anywhere your code interacts with the system:


There’s another way to bring that code under test – using isolation frameworks such as Typemock, Rhino Mocks and Moq (though if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).


Ad-Hoc Isolation perimeters

The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests:


This perimeter’s surface is much smaller, because for that specific test, that is all the “change” that is required to the host system behavior.
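One way to picture an ad-hoc, per-test perimeter without any framework (Java sketch, hypothetical names): for one specific test, only the single host interaction that would blow up is overridden, and everything else runs as-is.

```java
// Sketch of an ad-hoc perimeter: the seam exists only in this one test,
// via an anonymous subclass override. ReportSender and postToHost are
// illustrative names.
public class AdHocPerimeterSketch {

    public static class ReportSender {
        // The one call that needs a live host; virtual so a test can seam it.
        protected void postToHost(String payload) {
            throw new IllegalStateException("no host available");
        }
        public String send(String data) {
            String payload = "report:" + data; // real logic under test
            postToHost(payload);
            return payload;
        }
    }

    public static void main(String[] args) {
        // The perimeter for THIS test is just this single override:
        ReportSender sender = new ReportSender() {
            @Override protected void postToHost(String payload) { /* swallowed */ }
        };
        System.out.println(sender.send("42")); // prints report:42
    }
}
```

Another test of the same class might override nothing at all, or a different method – hence “ad-hoc”.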


The third way of isolating the code from the host system is the main “meat” of this post:

Subterranean perimeters

Subterranean perimeters are deep-rooted perimeters – “always on” seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test.

Because they lie deep inside a system you can’t control, the only way I’ve found to control them is with runtime (not compile time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime.
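To give a flavor of runtime interception with only the standard library (a pale approximation – CThru’s profiler hooks can intercept arbitrary calls deep in the runtime, while a JDK dynamic proxy can only reroute calls made through an interface; all names below are hypothetical):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: every call on a host-facing interface is
// intercepted at runtime and rerouted, turning the whole interface into
// a seam without the caller knowing.
public class InterceptionSketch {

    public interface HostStorage {
        void save(String key, String value);
        String load(String key);
    }

    // Intercept every call: record into an in-memory map, never touch the host.
    public static HostStorage intercepted(Map<String, String> store) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("save")) {
                store.put((String) args[0], (String) args[1]);
                return null;
            }
            if (method.getName().equals("load")) {
                return store.get(args[0]);
            }
            return null;
        };
        return (HostStorage) Proxy.newProxyInstance(
            HostStorage.class.getClassLoader(),
            new Class<?>[] { HostStorage.class }, handler);
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        HostStorage s = intercepted(store);
        s.save("zoom", "2.0");
        System.out.println(s.load("zoom")); // prints 2.0
    }
}
```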

The image below depicts an example of what such a perimeter could look like:


As you can see, the actual seams can be very far away from the actual code under test, and as you’ll discover, that’s actually a very good thing.

Here is a partial list of examples of such deep-rooted seams:

  • disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
  • faking static methods of a type that’s being called several levels down the stack: method x() calls y(), which calls z(), which calls SomeType.StaticMethod()
  • replacing an async mechanism with a synchronous one (for example, replacing all timers with your own timer behavior that always ticks immediately upon a call to start(), on the same caller thread)
  • replacing event mechanisms with your own event mechanism (to allow “firing” system events)
  • changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all dependency property setters and getters with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser)
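The timer replacement from the list above can be sketched in a few lines (Java, hypothetical names) – the seam swaps the async mechanism for one that ticks synchronously on the caller’s thread:

```java
// Sketch of one deep-rooted seam: an async timer replaced by a
// synchronous fake where "start" means "tick right now, on this thread".
// Timer and ImmediateTimer are illustrative names, not a real host API.
public class SyncTimerSketch {

    public interface Timer { void start(Runnable onTick); }

    // Test double: no threads, no waiting.
    public static class ImmediateTimer implements Timer {
        public void start(Runnable onTick) { onTick.run(); }
    }

    // "Code under test" that schedules work on a timer.
    public static int pollOnce(Timer timer) {
        final int[] polls = {0};
        timer.start(() -> polls[0]++);
        return polls[0]; // with ImmediateTimer, the tick already happened
    }

    public static void main(String[] args) {
        System.out.println(pollOnce(new ImmediateTimer())); // prints 1
    }
}
```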

Several questions might come to mind:

  • How do you know what to fake? (how do you discover the perimeter?)
  • How do you fake it?
  • Wouldn’t it be problematic to fake something you don’t own? It might change in the future.

How do you discover the perimeter to fake?

To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using “regular” test code. You invoke the thing you’d like to do in a real unit test, to see what happens:

  • Can I even create an instance of this object without getting an exception?
  • Can I invoke this method on that instance without getting an exception?
  • Can I verify that some call into the system happened?

You make the invocation, get an exception (because there is a dependency) and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don’t get an exception, the perimeter is good for that invocation, and you can move on to trying other methods on that object.
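The loop above can be sketched in plain code (Java, with a hypothetical host-coupled class standing in for a real control): try the wishful invocation, and let the stack trace point at the next candidate seam.

```java
// Sketch of a "wishful invocation": attempt the thing you wish you could
// do in a unit test, and read the stack trace to find where to put the
// next seam. HostCoupledControl is an illustrative stand-in.
public class WishfulInvocationSketch {

    static class HostCoupledControl {
        HostCoupledControl() {
            // stands in for a base-class constructor that needs the runtime
            throw new IllegalStateException("host runtime not initialized");
        }
    }

    // Returns the class at the top of the failure's stack trace - the
    // first candidate location to disable before trying again.
    public static String probe() {
        try {
            new HostCoupledControl(); // the wishful invocation
            return "no seam needed";
        } catch (RuntimeException e) {
            return e.getStackTrace()[0].getClassName();
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

In practice you repeat this probe after disabling each discovered location, until the invocation runs clean.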

In a future post I will show the process using CThru, and how you end up with something close to a domain specific test framework after you’re done creating the perimeter you need.
