Roy Osherove

Re-Thinking the Role of Mock Objects, Design & Test Maintainability (stream of thought)

Disclaimer: this post assumes you understand mock objects deeply, have been using them for a long time, and are familiar with the London School of TDD.

Disclaimer: written as a stream of thought, so it might be a bit incoherent.

It’s no secret that in my previous books on unit testing I’ve advocated for an approach to mock objects that is very different from the one prescribed in the “London School of TDD”. Namely, I usually prefer to use mock objects as little as possible, and to verify them only when the expected end result of a unit of work is calling a 3rd party dependency.

The reasons I usually state for this are:

  • since mock objects test internal interactions, they solidify our internal design and make it harder to change later on. Internal interactions always result in either state changes, value results, or a call to a boundary of the system, so let’s use mock objects only on the boundaries. We can still create local wrapper interfaces for those boundaries; they still represent a boundary dependency. (There’s a sketch of this right after the list.)

  • mock objects make tests longer and harder to read, write, and maintain

  • It’s easy to mistakenly test something that has no value in real life (so my class calls a method on another class… OK. Now what?)
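
To make that first bullet concrete, here’s a minimal Java sketch of what I mean by “verify only at the boundary”. Mockito isn’t mentioned anywhere in this post; it’s just a convenient stand-in, and the PasswordResetService / EmailGateway names are made up for illustration. The only interaction the test verifies is the call that crosses the boundary, because that call is the expected end result of the unit of work:

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical boundary: an internal wrapper interface over a 3rd-party mail API.
interface EmailGateway {
    void send(String to, String subject, String body);
}

// Hypothetical unit of work whose end result is a call to that boundary.
class PasswordResetService {
    private final EmailGateway email;

    PasswordResetService(EmailGateway email) {
        this.email = email;
    }

    void requestReset(String userEmail) {
        // ...internal work (token generation, persistence) would happen here...
        email.send(userEmail, "Password reset", "Use this link to reset your password.");
    }
}

class PasswordResetServiceTests {
    @Test
    void requestReset_sendsMailThroughTheBoundary() {
        EmailGateway email = mock(EmailGateway.class);
        PasswordResetService service = new PasswordResetService(email);

        service.requestReset("a@b.com");

        // The only thing verified: the call that crosses the boundary,
        // because that call is the expected end result of this unit of work.
        verify(email).send("a@b.com", "Password reset", "Use this link to reset your password.");
    }
}
```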

This is almost the exact opposite of the recommendations in the book “Growing Object-Oriented Software, Guided by Tests” and in the now-famous paper (well, famous in our little niche anyway) “Mock Roles, not Objects” (PDF).

“…We believe the opposite, that Mock Objects are most useful when used to drive the design of the code under test. This implies that they are most useful within the system where the interfaces can be changed. Mocks and stubs can still be useful for testing interactions with third-party code, especially for avoiding test dependencies, but for us this is a secondary aspect to the technique.”

As I’m working on the third edition of The Art of Unit Testing these days, I’m revisiting and challenging my thinking. I read the GOOS book many years ago, and I’ve just re-read it with fresher eyes, but also with more experience of my own, and a few thoughts popped into my head. I’m not sure where they lead yet, so I’ll just throw them here before I forget:

  1. I always say that TDD requires three skills: good tests, test-first, and design. And I never focus on design. Mock objects (London school) are all about the design of objects and dependencies. It could be that when we want to focus more on our design skill, we’d want to start using more of the London-school style of mock objects? That thought scares me a bit because of the maintainability implications, but…

  2. Maintainability: if we’re using mocks in our tests to drive the design of roles in our objects, we could take several precautions to protect test maintainability (maybe):

    1. We can use the mock objects in a non-strict manner and not verify on them (this will just force us to think about the internal design; the first sketch after this list shows what I mean). Things like the old NMock and JMock2 were made for that. Any framework with record-replay ability might fit, but those are getting long in the tooth. AAA frameworks like FakeItEasy would not help us here because we only define the interactions as part of an assert.

    2. We could write the test, assert on the interactions, and then delete the interaction asserts and leave only the mocks, treated as stubs. This eases maintenance later on, assuming the mocks are non-strict.

    3. This could be a sign that our design sucks, and that we need smaller, shorter interfaces in the system, more roles, or a middle role in front of multiple dependencies.

  3. We could separate the task of learning design from the other two skills (test-first, good tests) by choosing when to use London-style mock objects, and use them only in particular cases.

  4. Boundaries: it’s true that people can mistakenly just fake external interfaces and use them as mocks in the test. I always advocate for creating an internal wrapper interface instead, one that we can control (the second sketch after this list shows such a wrapper). It could be that the way I explained this before was easily understood as “fake 3rd parties directly”. That would be unfortunate. I’ll fix that for my 3rd edition. Fake only things that are under your control, or that are not changing. A 3rd party logger can still change (if you change loggers!).
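
On points 2.1 and 2.2: here’s a minimal sketch of a fake collaborator that drove the design of a role but ends up verified on nothing. I’m using Mockito again purely as a stand-in (the post itself only names NMock, JMock2 and FakeItEasy); Mockito mocks happen to be non-strict by default, and the TokenRepository / TokenValidator names are invented for illustration. The interaction assert I would have written while shaping the role is left as a comment, the way point 2.2 suggests deleting it:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical role that emerged while designing the test.
interface TokenRepository {
    String findToken(String userId);
}

// Hypothetical class under test.
class TokenValidator {
    private final TokenRepository tokens;

    TokenValidator(TokenRepository tokens) {
        this.tokens = tokens;
    }

    boolean isValid(String userId, String presentedToken) {
        return presentedToken != null && presentedToken.equals(tokens.findToken(userId));
    }
}

class TokenValidatorTests {
    @Test
    void isValid_withMatchingToken_returnsTrue() {
        // Mockito mocks are non-strict by default: calls we never verify don't fail the test.
        TokenRepository tokens = mock(TokenRepository.class);
        when(tokens.findToken("user-1")).thenReturn("abc");

        boolean result = new TokenValidator(tokens).isValid("user-1", "abc");

        // Assert only on the end result (a value), not on the interaction.
        assertTrue(result);

        // The interaction assert written while shaping the TokenRepository role,
        // then deleted as point 2.2 suggests (kept here as a comment for illustration):
        // verify(tokens).findToken("user-1");
    }
}
```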
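
And on point 4, this is roughly what I mean by an internal wrapper interface: production code depends only on an interface we own, the 3rd-party logger (java.util.logging here, purely as a stand-in) hides behind an adapter, and tests fake the wrapper rather than the 3rd party directly. All the names below are made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

// The internal wrapper interface: this is the role we own and control.
interface AppLogger {
    void info(String message);
}

// Adapter over the 3rd-party logger (java.util.logging as a stand-in).
// If we ever change loggers, only this class changes.
class JulLoggerAdapter implements AppLogger {
    private final Logger inner;

    JulLoggerAdapter(String name) {
        this.inner = Logger.getLogger(name);
    }

    @Override
    public void info(String message) {
        inner.info(message);
    }
}

// A hand-rolled fake for tests: we fake the wrapper, never the 3rd party directly.
class FakeLogger implements AppLogger {
    final List<String> messages = new ArrayList<>();

    @Override
    public void info(String message) {
        messages.add(message);
    }
}
```

In a test, FakeLogger gets injected wherever production code asks for an AppLogger. If logging really is the end result of the unit of work, the test can assert on messages; otherwise it just ignores the logger.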

So what am I trying to say here?

  • There might be room for London-style mocks in my book after all, but it might have taken me 10+ years to realize it, and I’m still not fully convinced. It could be one of the keys to learning about design, but at what cost?

  • My biggest fear is that the cost (tests that keep breaking) far outweighs the value we get from the design, and we’ve all seen long, horrible tests with long, complicated (internal ones included) dependency chains. The design skill, and the skill of stopping and CHANGING the design when the tests go “OUCH”, are not a given in many situations, especially in the enterprise.

  • Given the last point, is it worth letting the London-school test-driven-design idea go, with the understanding that the world is mostly not ready for it yet and will mostly just abuse it?

  • Or is it fair to throw it out there, knowing that (I fear) only a select few will end up using it “correctly” while the rest of the world types blindly on the keyboard to produce franken-tests?

Here’s what the paper has to say about this:

“When testing with Mock Objects it is important to find the right balance between an accurate specification of a unit's required behaviour and a flexible test that allows easy evolution of the code base. One of the risks with TDD is that tests become “brittle”, that is they fail when a programmer makes unrelated changes to the application code. They have been over-specified to check features that are an artefact of the implementation, not an expression of some requirement in the object. A test suite that contains a lot of brittle tests will slow down development and inhibit refactoring.

The solution is to re-examine the code and see if either the specification should be weakened, or the object structure is wrong and should be changed. Following Einstein, a specification should be as precise as possible, but not more precise.”

Easier said than done? This paper was written 16 years ago. I’d love to say “people are ready”, but I’m not sure yet.

Mainly, I’m not sure whether Nat Pryce and Steve Freeman have new information to share about these ideas since then and if there are any new lessons learned. I’ll ping them and see if they can respond to this blog post.