
Test driven design – Willed vs. Forced Designs

I’m writing this as a Typemock employee, but also as someone who has sat on the other side of the line for several good years and can argue both ways. The following, I feel, is true no matter where I work.

 

There are two ways people use tests to drive design, as far as I can see. One is great, and I agree with it; the other is not so great, and I don’t agree with it. Sadly, both of them are lumped together these days, and the baby gets thrown out with the bathwater: you either use both (BAD) or you use neither (BAD!)

Here are the two usage patterns:

#1 Willed Design

By writing tests, you can observe the usability of your design from a consumer’s perspective, decide whether or not you like it, and change it accordingly.

#2 Forced Design

By using a subset of the available isolation frameworks (Rhino Mocks, Moq, NMock) or specific techniques (manual mocks and stubs), you discover cases that are not technically “mockable” or “fakeable” and treat that as a sign that the design should change.

 

I strongly agree with #1, and strongly disagree with #2.

#1 makes sense.

You get to decide what is good and bad design, and the experience of using that design from the test’s perspective is your guideline. You make the rules about what you like and don’t like.

#2 is problematic for several reasons:

  • The tool decides. Not you.

You let an automated tool (Rhino Mocks, Moq, etc.) tell you when your design is OK or not. That point alone should go against everything ALT.NET has ever stood for, shouldn’t it? If you need a tool to tell you what is good or bad design, then you are doing it wrong. You should either know good design beforehand, or pair program to find the best design, or learn from a mentor who can review your design mistakes, but don’t ever let a tool tell you what is right and what isn’t, especially when the only reason the tool works that way is by chance and not on purpose (as you’ll see in the next point).

 

  • A technical limitation that grew into something else

Tools like Rhino Mocks, Moq, and NMock just happen to support some OO ideas that seem good enough for design activities, because of the underlying technology they use. It’s pretty simple: they all generate code at runtime that inherits from a class or interface and overrides its methods (therefore they need virtual methods and non-sealed classes, or an interface), or they use a proxy of some kind which underneath does pretty much the same thing. (Typemock is an exception, since it uses the profiler API, which has none of these requirements.) That simply means that when Rhino Mocks, Moq, or NMock “lets you know” that you should use an interface somewhere, it is a technical limitation of the tool, not a choice. Ayende was once asked what he’d do if he were technically able to fake static methods in Rhino Mocks: would he add that feature? “In the blink of an eye,” he answered. And I agree. Adding more options to the tool just extends the limits of the possible designs under test, not the “goodness” of them.
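To make that mechanism concrete, here is a minimal sketch (in Python, with names I made up purely for illustration) of the subclass-and-override trick these frameworks rely on. In C#, this only works when the member is virtual and the class is not sealed; that technical requirement is exactly the “design feedback” the tools hand you.

```python
# Sketch of how proxy-based isolation frameworks fake a dependency:
# generate a subclass at runtime and override its methods with canned
# behavior. In C#, overriding only works for virtual members of
# non-sealed classes (or interfaces) -- the limitation discussed above.

class MailSender:  # hypothetical dependency
    def send(self, to, body):
        raise RuntimeError("would really send mail")

def make_fake(cls, **overrides):
    """Build a runtime-generated subclass whose overridden methods
    return canned results, like a dynamic-proxy mock would."""
    members = {
        name: (lambda result: lambda self, *a, **kw: result)(result)
        for name, result in overrides.items()
    }
    return type("Fake" + cls.__name__, (cls,), members)()

fake = make_fake(MailSender, send="queued")
print(fake.send("roy@example.com", "hello"))  # -> queued; no mail sent
print(isinstance(fake, MailSender))           # -> True: it substitutes
```

Note that `make_fake` can only intercept calls that dispatch through overridable methods; a direct static call has no such hook, which is why proxy-based tools cannot fake it.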

Languages like Ruby, JavaScript, and Python have isolation frameworks (or in some cases don’t even need such frameworks) that fully support any kind of behavior change, regardless of the design, since the language is less strict. Yet, somehow, proper design arises in those languages too. Perhaps those languages are just “too powerful” and should not be used because they will cause you to do bad design? See the previous point for my answer.
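As a tiny illustration of that point, here is a Python sketch (function names are mine): even a “static” dependency can be swapped in place at runtime, with no interface, no virtual member, and no framework, and the design under test does not change at all.

```python
# In a dynamic language, behavior can be replaced at runtime with no
# interfaces, no virtual members, and no isolation framework.
import time

def timestamped(msg):
    # depends directly on a "static" call -- no seam was designed in
    return f"{time.time()}: {msg}"

original = time.time
time.time = lambda: 1234.0  # monkey-patch the dependency in place
try:
    print(timestamped("hi"))  # -> 1234.0: hi
finally:
    time.time = original  # always restore the real function
```

In real Python code you would use `unittest.mock.patch` for this, but the point stands: the language itself imposes no “mockable” design requirement.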

What happens if tomorrow, or with C# 4.0, those tools gain such abilities? Will you all stop using them?

Of course, you don’t have to use isolation frameworks to be limited by something; in this case it’s a technique. Using manual mocks and stubs in an object-oriented language is just as technically “limiting” as using one of those frameworks. You’re still bound to play within the simple laws of OO, and a design that is even a little out of place (even though it might make perfect sense for your application for security, performance, or other reasons) is either untestable or a general no-no.
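For comparison, here is what the manual-stub technique looks like (a sketch in Python, with hypothetical names): it only works because `OrderService` was designed around an injectable abstraction. A design without that seam is just as “unfakeable” for the manual technique as it is for the frameworks.

```python
# A hand-rolled stub imposes the same constraint as a framework:
# the production code must expose a seam (an injectable abstraction)
# or the stub cannot be slotted in at all.

class LogStore:  # hypothetical abstraction, added only for testability
    def write(self, entry):
        raise NotImplementedError

class StubLogStore(LogStore):
    def __init__(self):
        self.entries = []
    def write(self, entry):
        self.entries.append(entry)  # record instead of touching disk

class OrderService:
    def __init__(self, store):  # constructor injection: the seam
        self._store = store
    def place(self, order_id):
        self._store.write(f"placed {order_id}")
        return True

store = StubLogStore()
service = OrderService(store)
service.place(42)
print(store.entries)  # the test inspects the stub, not a real log
```

If `OrderService` had instead called a static logger or newed up its store internally, the hand-written stub would be just as powerless as a proxy-based framework.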

See the previous point for what I think about that.

The point here is that you’re using a technical limitation of a tool or a technique to tell you what to do, instead of thinking for yourself and learning proper design guidelines. That limitation just happens to be partially consistent with what you might currently believe to be true about design. But technically, it is a limitation that could end soon. When the tool changes its behavior, will you just change your design guidelines? Switch tools, or refuse to upgrade to a new version of the language? Or actually start using your head and your peers to see what’s right and what’s not?

 

The Typemock Dilemma

Typemock gets a lot of flak for not inhibiting the design of the program, and I can see how people would be afraid to lose that limitation found in other tools, since all they hear from the alpha geeks in .NET is that if it’s not “testable,” then your design is wrong. Worse, they hear “if you need Typemock, your design is wrong.”

There’s nothing as silly as absolute “fact” theories in the software world. In fact, let me go out and say that all “fact” theories are wrong. How’s that for irony?

The message should be, I feel, more like “here are some principles of good design as we understand them today,” but instead it is based on tool choice rather than on techniques or craftsmanship.

Unfortunately, I don’t think getting rid of #2 is possible in .NET today without using a tool like Typemock, and that’s a shame. Because it costs money, people in the community will keep using the free tools, which force design on them, instead of letting them decide on it like the mature developers they are.

Maybe it’s time to have some sort of free version of Isolator so that everyone can benefit. What do you think?
