
Another Entlib example: Separate integration tests from unit tests (and learn the difference)

Here is some more input on the EntLib unit test runnability issues (the problems specified in this blog post)
[I've bolded some phrases I thought were important in the original text]:
 
Data
56 tests will fail if you do not have Oracle installed. If you do happen to have Oracle installed, you'll need to manually open the Data\Tests\TestConfigurationContext.cs file and change your Oracle connection settings.
 
That configuration should have been in a config file relating to the tests, and the user should have been asked which Oracle server connection they would like to run against.
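For example, the test project could read the Oracle connection string from its own config file instead of from source. A minimal sketch of that idea (the appSettings key and the TestSettings helper are made-up names for illustration, not part of EntLib):

    // Read the Oracle connection string from the test project's config file
    // instead of hard-coding it in TestConfigurationContext.cs.
    //
    //   <appSettings>
    //     <add key="OracleConnectionString"
    //          value="Data Source=MyOracle;User Id=tester;Password=secret;" />
    //   </appSettings>
    using System;
    using System.Configuration;   // requires a reference to System.Configuration.dll

    public static class TestSettings
    {
        public static string OracleConnectionString
        {
            get
            {
                string value = ConfigurationManager.AppSettings["OracleConnectionString"];
                if (string.IsNullOrEmpty(value))
                {
                    throw new InvalidOperationException(
                        "Add an OracleConnectionString entry to the test config file, " +
                        "pointing at the Oracle server you want to run against.");
                }
                return value;
            }
        }
    }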
 
  • Logging: EmailSinkFixture.LogMessageToEmail will fail, since you do not have access to our internal mail server. You can fix this by changing Logging\Tests\EnterpriseLibrary.LoggingDistributor.config on line 22 to reference different smtpServers and to and from addresses.
    TypeMock could have been used, or interfaces could have been introduced, to test that the "interaction" between the objects indeed occurred. Testing against the real email server is an *integration test* and should be run separately.
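    A rough sketch of what that interaction test could look like with an interface and a hand-rolled fake (ISmtpSender, FakeSmtpSender and EmailSink are illustrative names, not the real EntLib types):

        using NUnit.Framework;

        // The sink depends on an abstraction instead of a real SMTP server.
        public interface ISmtpSender
        {
            void Send(string from, string to, string subject, string body);
        }

        // Fake that records the call instead of sending mail.
        public class FakeSmtpSender : ISmtpSender
        {
            public string LastTo;
            public string LastSubject;

            public void Send(string from, string to, string subject, string body)
            {
                LastTo = to;
                LastSubject = subject;
            }
        }

        public class EmailSink
        {
            private readonly ISmtpSender sender;
            public EmailSink(ISmtpSender sender) { this.sender = sender; }

            public void Log(string message)
            {
                sender.Send("entlib@example.com", "ops@example.com", "Log entry", message);
            }
        }

        [TestFixture]
        public class EmailSinkFixture
        {
            [Test]
            public void LogMessageToEmail_SendsThroughSmtpSender()
            {
                FakeSmtpSender fake = new FakeSmtpSender();
                new EmailSink(fake).Log("disk is full");

                // Assert on the interaction, not on a real mailbox.
                Assert.AreEqual("ops@example.com", fake.LastTo);
                Assert.AreEqual("Log entry", fake.LastSubject);
            }
        }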
     
  • Security.ActiveDirectory: Tests will fail because you cannot access our Active Directory server. There are instructions about how to set up ADAM in Security.ActiveDirectory.Configuration.ADAM Setup. You'll also need to change line 53 in Security.ActiveDirectory.Tests.TestConfigurationContext to reflect your ADAM setup.
    Again - interfaces and mock objects could have been used to test this. Testing against a live directory is an integration test that should be run separately.
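    A similar sketch for the directory case - hide the lookup behind an interface and stub it, so the unit test never needs ADAM or a live Active Directory (IRoleLookup and SecurityGate are made-up names, not the real EntLib Security types):

        using NUnit.Framework;

        public interface IRoleLookup
        {
            bool IsInRole(string userName, string roleName);
        }

        // Stub with a canned answer - no directory server involved.
        public class StubRoleLookup : IRoleLookup
        {
            public bool Result;
            public bool IsInRole(string userName, string roleName) { return Result; }
        }

        public class SecurityGate
        {
            private readonly IRoleLookup lookup;
            public SecurityGate(IRoleLookup lookup) { this.lookup = lookup; }

            public bool CanDelete(string userName)
            {
                return lookup.IsInRole(userName, "Administrators");
            }
        }

        [TestFixture]
        public class SecurityGateFixture
        {
            [Test]
            public void CanDelete_DeniedWhenUserIsNotAdministrator()
            {
                StubRoleLookup stub = new StubRoleLookup();
                stub.Result = false;

                Assert.IsFalse(new SecurityGate(stub).CanDelete("roy"));
            }
        }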
     
  • EnterpriseLibrary: It is normal for several of the tests to occasionally fail. There are a few unit tests that are timing-dependent, especially in Caching and Configuration. These tests are testing whether or not something happens during a particular time interval, and sometimes the system delays the actions too long and those tests fail. If you rerun the test, it should work the next time. Additionally, our tests write to the event log, which occasionally fills up. If you begin to see a number of tests failing, check that your application event log is not full.
    Why not have a unit test run the occasionally-failing method two or three times and check that it succeeds at least one of those times? A test that "sometimes" passes is not a unit test anyway - but if you want to make sure there are no false negatives, this is a great way to check.
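    Something along these lines - the retry lives inside the test, and the test only fails if the operation misses its window every single time (CacheScavenger here is a made-up stand-in for the timing-sensitive code, not an actual EntLib class):

        using System.Threading;
        using NUnit.Framework;

        // Stand-in for a timing-dependent operation: succeeds only if the
        // background work finishes within the polling window.
        public class CacheScavenger
        {
            public bool ScavengeExpiredItems()
            {
                ManualResetEvent done = new ManualResetEvent(false);
                ThreadPool.QueueUserWorkItem(delegate { done.Set(); });
                return done.WaitOne(50, false);   // may miss the window under load
            }
        }

        [TestFixture]
        public class ScavengingFixture
        {
            [Test]
            public void ExpiredItemIsScavenged_WithinThreeAttempts()
            {
                bool succeeded = false;

                // Retry the flaky operation a few times; pass if it works at least once.
                for (int i = 0; i < 3 && !succeeded; i++)
                {
                    succeeded = new CacheScavenger().ScavengeExpiredItems();
                }

                Assert.IsTrue(succeeded, "Scavenging did not complete in any of 3 attempts.");
            }
        }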
     
     
    One thing to note here -
    these should all be considered integration tests, because they rely on external objects beyond the tested ones to do their bidding, and thus need special setup. There is a very good place for such tests, but they should not be called unit tests (even though they are automated, they don't test a unit - they test a set of related technologies and objects in scope).
     
    Integration tests should be run separately and set up separately, with a special install program.
    Why? Because if they are mingled with your *real* unit tests and they keep failing, your end-developer will throw out the good with the bad and decide that it's not worth running *any* of the unit tests, since they can't be trusted (a few rotten apples spoil the bunch).
     
    You should always provide a basic set of tests that *always* runs and is fully green when someone runs it. You can leave the other integration tests for a special run with special setup - but it's important not to mess up the developer experience and the sense of test runnability in this regard. Something has to run consistently and easily to make people use and extend your tests; otherwise you've done a lot of work for nothing - people won't use it. We're a lazy bunch.
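    One easy way to keep the two sets apart is to tag the integration fixtures with a category and leave that category out of the everyday run (a sketch using NUnit's Category attribute; the fixtures and tests are made up - most runners can include or exclude tests by category):

        using NUnit.Framework;

        [TestFixture]
        [Category("Integration")]   // excluded from the everyday, always-green run
        public class OracleDatabaseFixture
        {
            [Test]
            public void ExecuteNonQuery_WritesRowToRealDatabase()
            {
                // talks to a real Oracle server - needs special setup
            }
        }

        [TestFixture]
        public class ConnectionStringBuilderFixture
        {
            [Test]
            public void Build_AppendsProviderName()
            {
                // pure in-memory unit test - runs everywhere, every time
            }
        }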
