
Avoid multiple asserts in a single unit test: revisited

James Avery argues that what he'd really like to see in a unit test framework is the ability to run (and fail) multiple asserts within the same test.

As some background: I personally am in favor of the "single assert per unit test" idea for several reasons, the most prominent being that, currently, every unit test framework I know of in .NET will fail the test on the first failing assert call.

The reason it works this way is that the framework detects failure by catching a special type of exception (AssertionException, if I recall correctly) that is thrown by the assert method. You can't really catch that exception and keep going to the next line unless the framework automagically instruments each assert call in the test with a try-catch (which would be one way to achieve what James is asking for).
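To make that concrete, here is a minimal sketch of the idea (my own simplified code, not NUnit's actual implementation): the assert throws, the exception unwinds the whole test method, and the runner catches it, so nothing after the failing assert ever gets a chance to run.

```csharp
using System;
using System.Reflection;

// The exception type the asserts throw; named after the one NUnit uses,
// but this whole runner is a simplified sketch, not NUnit's real code.
public class AssertionException : Exception
{
    public AssertionException(string message) : base(message) { }
}

public static class MyAssert
{
    public static void AreEqual(object expected, object actual)
    {
        if (!Equals(expected, actual))
            throw new AssertionException(
                "Expected <" + expected + "> but was <" + actual + ">.");
    }
}

public static class TinyRunner
{
    // Invokes a test method and reports pass/fail depending on whether an
    // AssertionException escaped it. Because the exception unwinds the whole
    // test method, any asserts after the failing one never execute.
    public static void Run(object fixture, MethodInfo test)
    {
        try
        {
            test.Invoke(fixture, null);
            Console.WriteLine(test.Name + ": passed");
        }
        catch (TargetInvocationException ex)
        {
            AssertionException failure = ex.InnerException as AssertionException;
            if (failure != null)
                Console.WriteLine(test.Name + ": FAILED - " + failure.Message);
            else
                throw;
        }
    }
}
```

Giving James what he wants would mean the framework catching a failure at each individual assert call and then continuing, rather than letting one exception end the whole method.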

So currently, what that means is that if you have other asserts after the one that failed, they won't run, and you won't know whether they would have succeeded or not. That gives you only a partial picture of what's failing and what isn't, which is sometimes a terrible way to try and discover a bug. The other reason you'd want only a single assert per test is that a test with multiple asserts is actually testing multiple things, so it might as well be considered multiple tests. A good tell-tale sign is when it's hard to name a test because it does several things instead of just one.
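Here is a hedged NUnit-style illustration (the Calculator class and its members are hypothetical, not from James' post): the two-assert version hides the second check whenever the first one fails, while the split version gives each check its own name and its own result.

```csharp
using NUnit.Framework;

// Hypothetical class used only for illustration.
public class Calculator
{
    public int HistoryCount { get; private set; }
    public int Add(int a, int b) { HistoryCount++; return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    // Multiple asserts: if the first one fails, the second never runs,
    // so you can't tell whether the history behavior is broken too.
    [Test]
    public void Add_TwoAsserts()
    {
        var calc = new Calculator();
        int result = calc.Add(2, 2);

        Assert.AreEqual(4, result);            // if this fails...
        Assert.AreEqual(1, calc.HistoryCount); // ...this line never executes
    }

    // Split into two tests: each can pass or fail on its own,
    // and the name says exactly what is being checked.
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        var calc = new Calculator();
        Assert.AreEqual(4, calc.Add(2, 2));
    }

    [Test]
    public void Add_SingleCall_RecordsOneHistoryEntry()
    {
        var calc = new Calculator();
        calc.Add(2, 2);
        Assert.AreEqual(1, calc.HistoryCount);
    }
}
```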

There is another very important reason, one I neglected to mention in my previous posts and articles and have only recently realized.

Even if we assume that all the asserts are run, you're essentially running multiple tests on code that has "dirty" state. For example, asserting on the result of a method call may actually change the state of the object under test, so the next call to that method may be skewed because of the previous asserts. That's a bad world to be in. Having a single assert per test means you also know exactly the state of your object before the assert.

Worse yet, the object's state may not change today, while you're writing the current tests, but sometime in the future someone may add state-changing behavior to a method or property call, and suddenly some of your tests will break unexpectedly, but only on the second or third assert call in the same test. That's a bug that is very hard to find, because it lives in the tests themselves while the production code behaves exactly as expected.
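Here is a hedged sketch of that "dirty state" problem, using a hypothetical WorkQueue class: the first assert's call consumes an item, so the second assert is checking an object whose state was changed by the assert before it.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical class used only for illustration.
public class WorkQueue
{
    private readonly Queue<string> items = new Queue<string>();
    public int Count { get { return items.Count; } }
    public void Enqueue(string item) { items.Enqueue(item); }
    public string Dequeue() { return items.Dequeue(); }
}

[TestFixture]
public class WorkQueueTests
{
    // The first assert's call mutates the queue, so the second assert
    // is checking an object whose state the first assert already changed.
    [Test]
    public void Dequeue_TwoAsserts_SecondSkewedByFirst()
    {
        var queue = new WorkQueue();
        queue.Enqueue("a");

        Assert.AreEqual("a", queue.Dequeue()); // consumes the only item
        Assert.AreEqual(1, queue.Count);       // would pass on a fresh queue,
                                               // but fails because of the line above
    }

    // One assert per test: each test sees the object in a known state.
    [Test]
    public void Dequeue_AfterOneEnqueue_ReturnsThatItem()
    {
        var queue = new WorkQueue();
        queue.Enqueue("a");
        Assert.AreEqual("a", queue.Dequeue());
    }

    [Test]
    public void Count_AfterOneEnqueue_IsOne()
    {
        var queue = new WorkQueue();
        queue.Enqueue("a");
        Assert.AreEqual(1, queue.Count);
    }
}
```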

Having said that, I'd like to walk through James' points one by one as to why he'd like to have multiple-assert ability in the framework and why separating tests per assert is not his cup of tea.

 

  • "I think it’s actually harder to read since the assertions are scattered around in separate methods. "

It shouldn't be harder to read if each test is really simple and named correctly, and in fact James' tests are pretty readable as far as I'm concerned. They just look like a lot more tests than his "simple" example. His "simple" example may seem easier to read at a glance, but the tests are harder to understand unless you actually read through all the code in them.

  • "It would increase the number of tests. On my current project we have 1800 tests, if we followed the one assertion rule we would have over 6,000 I am sure. "

So? In essence, what you're really doing is running 6,000 tests that only appear to be 1,800. That means that if one of the 1,800 tests breaks, it may be harder to figure out what the problem is without reading through that test's code and finding the actual assert that failed (a.k.a. "debugging"), and even then you may have a hard time finding the bug if that test contains asserts that never even ran.

 

  • "If my method breaks and starts returning null then I have 8 tests failing instead of just 2, this means I have to know the dependency tree of my tests to find the real issue. "

That is just as true when you have multiple asserts in the same test, I would think, or you probably wouldn't have written them all together. So if you had two failing tests, how many failing asserts would you have? You may have had only two. You really want to be sure that your other asserts would fail given that the first two fail, which is not always the case (it happens to be the case in James' examples). Imagine you had four small failing tests and four small passing tests: wouldn't that be better than two big failing tests?

For example, your method could still return null, but you may also have an assert that shows that a specific piece of the object's state is actually true, just as you expect it to be. If that assert never runs, you can't know that, and you might look in the wrong place to fix the bug. If it lives in a separate test, you know it succeeded. If it runs inside the same test, you again face the possibility of the object's state having been changed, as mentioned earlier.
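As a hedged illustration (the UserRepository class and its members are hypothetical, not from James' examples): if the lookup starts returning null, the first test fails while the second still passes, and that passing test is exactly the information you lose when both checks live inside one test.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical classes used only for illustration.
public class User { }

public class UserRepository
{
    private readonly Dictionary<int, User> users =
        new Dictionary<int, User> { { 42, new User() } };

    public bool IsConnected { get { return true; } }

    public User FindById(int id)
    {
        User user;
        users.TryGetValue(id, out user);
        return user; // null when the id is unknown
    }
}

[TestFixture]
public class UserRepositoryTests
{
    // If FindById starts returning null, this test fails...
    [Test]
    public void FindById_ExistingId_ReturnsUser()
    {
        var repository = new UserRepository();
        Assert.IsNotNull(repository.FindById(42));
    }

    // ...while this one still passes, telling you the connection is fine and
    // pointing you at the lookup itself. Folded into the test above as a
    // second assert, it would never get the chance to run.
    [Test]
    public void FindById_ExistingId_RepositoryStaysConnected()
    {
        var repository = new UserRepository();
        repository.FindById(42);
        Assert.IsTrue(repository.IsConnected);
    }
}
```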

 

  • "Any code I have to write to setup the data for my test has to be duplicated 8 times. (if I move that setup data to the setup method than I am effectively limiting my fixtures to one fixture per test) "

You can and should extract reusable setup code for specific tests into a separate private method (XXX_init, for example) that is called from each of those tests. No one forces you to use the setup method for that; the setup method is there only for global setup routines that are used by all the tests.
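Here is a hedged sketch of that split, using hypothetical Account-related names: the [SetUp] method holds only what every test in the fixture needs, while the XXX_init-style helper is called explicitly by the few tests that actually need that extra data.

```csharp
using NUnit.Framework;

// Hypothetical class used only for illustration.
public class Account
{
    public decimal Balance { get; set; }
    public bool IsFrozen { get; set; }
}

[TestFixture]
public class AccountTests
{
    private Account account;

    // Global setup: every test in this fixture needs a fresh account.
    [SetUp]
    public void SetUp()
    {
        account = new Account();
    }

    // Private helper for the subset of tests that need a funded account;
    // only those tests call it, so it does not belong in [SetUp].
    private void FundedAccount_Init()
    {
        account.Balance = 100m;
    }

    [Test]
    public void NewAccount_HasZeroBalance()
    {
        Assert.AreEqual(0m, account.Balance);
    }

    [Test]
    public void FundedAccount_HasExpectedStartingBalance()
    {
        FundedAccount_Init();
        Assert.AreEqual(100m, account.Balance);
    }

    [Test]
    public void FundedAccount_IsNotFrozen()
    {
        FundedAccount_Init();
        Assert.IsFalse(account.IsFrozen);
    }
}
```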

 

  • "I now have over double the amount of code. I am constantly trying to reduce the amount of code in my project, whether test or production, and anything that doubles it better add a ton of value."

If the tests are concise and the code inside them is refactored correctly, that "larger" amount of code is simply a better syntactical way to express exactly what you did before: you write all the tests you wanted, but they are well separated, solid, and readable. Just as you might split a method call across two lines instead of one for the sake of readability, this is the same idea, only for the sake of solid, self-contained tests.
