By Roy Osherove
In this article I’ll show the problems we encounter today when we run unit tests against database-related components, and the various ways used to solve these problems. I’ll also show a new method to solve them, which is far easier and simpler than any of the others. By the end of this article you will have the tools necessary to write simple, easy unit tests against database-related components without the need for messy cleanup code after your tests.
You will find more information on issues and solutions regarding this approach in the following article: Issues and solutions for database unit testing with enterprise services. You may also find XTUnit helpful in this regard.
Test Driven Development makes life easier. Unit tests are the lifeblood of a good Test Driven Development approach (and thus, are also the lifeblood of most of the agile methodologies out there such as XP, Scrum and others). Therefore it stands to reason that one should strive to create unit tests with the least amount of work and maintenance, which will in turn allow the developer to focus on the business problem at hand and the various design and algorithms needed to create a working product/component.
For the past couple of years I’ve been struggling with unit tests. That is, I’ve used them in major production projects and have seen the good and the bad that can come from a fully test-driven approach. I’ll let the “good” part be covered by the numerous other articles out there that talk about the benefits of unit tests and TDD, and will focus on one of the most problematic parts of unit testing in a real-world project – testing against the database.
Testing data access layer components (DALs for short) is problematic in a number of ways. The number one reason is this:
Testing the various CRUD (Create, Retrieve, Update, Delete) operations on a single class can result in a massive amount of “garbage” data residing in the test database (you are using a test database, aren’t you?). This creates several problems:
1. Problem: Garbage Data
Your database will eventually be full of junk data which you’ll need to get rid of sooner or later.
2. Problem: Affecting other tests
Worse, you may break one of the golden rules of unit testing: no test should depend on another test to perform correctly. That is, you should be able to run your tests in any order you choose, or maybe just one of them at a time, and they should all still work. That means every test needs to start from a known state on which it will act. For DALs this usually means a known state of records in the database.
But guess what? When you test CRUD operations you’re actually changing the state of the database, so the next test that will be run that might depend on a specific record being there might fail because you just deleted that record in your current test. Anything can happen.
3. Problem: Starting tests from a known state
This is the other side of the previous problem: my test needs to have something in the database in order to perform correctly. For example, it needs a specific “Category” entity in the Categories table in order to test inserting a child “Product” into the Products table. I need to make sure I insert these values into the database before I perform the test, but I also want to make sure that data is no longer there after I finish my test.
Dealing with the problems
So how do you deal with all this garbage data that’s making your database and tests unstable? Obviously, you need to make sure that before and after each test you leave the database in the same state that you got it in. That is – you need to “undo” your CRUD operations after testing them. Here are some of the most prevalent methods used to achieve this:
- Remove it manually
This is the most obvious and in simple cases one of the simplest to achieve. To do this, after each test (at the end of it) you execute the opposite actions than the ones you took in the test. For example, if you inserted a record in the test – you delete it at the end.
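To make this concrete, here is a sketch of what the manual approach tends to look like. The CategoryManager and SqlHelper names are borrowed from the example later in this article, and the connection string is an assumption:

```csharp
[Test]
public void Insert_WithManualCleanup()
{
    string CONN = @"Server=localhost;Database=Northwind;Trusted_Connection=True";
    CategoryManager mgr = new CategoryManager();
    int newID = mgr.Insert("Test category");
    // ...assert the row was inserted...

    // undo the insert by hand – extra, untested data access code living in the test
    SqlHelper.ExecuteNonQuery(CONN, CommandType.Text,
        "delete from categories where categoryID=" + newID.ToString());
}
```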
Some problems and questions that arise from this approach:
o If I inserted the record into the database using my object’s Insert() method, which I was testing – should I remove it from the database using the object’s Delete() method (which might not have been tested yet, or may not even exist yet!), or should I do it against the database directly, by calling stored procedures or using ADO.Net classes?
o If I choose to remove the records I inserted directly from the DB – this involves some serious amounts of extra code, residing in my test! And that code can have bugs as well.
o If I choose to use my object to Delete as well as Insert – what do I test first? (A chicken-and-egg kind of question.)
o I can also choose to remove all the garbage data at the end of the whole test run (in a TestFixtureTearDown method). But then I’d need lots of data and custom code and… it’s just complicated.
o I can ignore all my garbage data and restore my database from backup before running all the unit tests. This might partially solve the problem, but it may still leave a few unit tests that mess with other tests’ data. A big no-no.
- Use the transaction object in ADO.Net
In his very interesting and insightful book, “Test-Driven Development in Microsoft .NET”, James Newkirk addresses this same problem with a seemingly simple solution: execute all your actions inside a transaction and roll back that transaction when you’re done with each test. This is a great solution and can definitely be implemented, but it also has its problems:
- You need to worry about components that manage their own inner transactions but still need to enlist in the outer transaction initiated by the unit test.
- To work around this problem James came up with an implementation of the Command pattern inside the unit tests (the book has a good example). However, even though it accomplishes the task nicely, using this pattern leaves us with more complicated unit tests than we should have – and all this just to support rollback functionality.
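For reference, the core of the per-test transaction idea is simple enough. The sketch below shows a minimal NUnit base fixture (the class name and connection string are my own, not from James’ book) that opens an ADO.Net transaction before each test and rolls it back afterwards. The catch is that the component under test must somehow use this exact connection and transaction – which is what forces the extra Command-pattern machinery:

```csharp
public class RollbackFixture
{
    protected SqlConnection conn;
    protected SqlTransaction tx;

    [SetUp]
    public virtual void OpenTransaction()
    {
        // every test runs inside this transaction
        conn = new SqlConnection(@"Server=localhost;Database=Northwind;Trusted_Connection=True");
        conn.Open();
        tx = conn.BeginTransaction();
    }

    [TearDown]
    public virtual void RollbackTransaction()
    {
        // undo whatever the test did, then release the connection
        tx.Rollback();
        conn.Close();
    }
}
```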
A simpler way
The previous two approaches are the most commonly used, and if you’ve used either of them you know that they still demand a massive amount of custom work just to make the database data more reliable.
When I was reading James’ book, in the chapter that discusses implementing transactions, I came across this little sentence (abbreviated):
“We want to achieve the same functionality as “RequiresTransaction” for each tested component, just like in COM+”
This sentence triggered an interesting thought: why work so hard? Why not let COM+ (or “Enterprise Services” in .Net) do the work of coordinating the transactions for us? Thus, I came up with another way of implementing database rollback functionality:
Using Enterprise Services
What we want is to be able to achieve the transaction functionality without all the overhead of writing our own custom Transaction manager class as prescribed in James’ book. How do we do that? With COM+ Enterprise services. Here is the technical gist of the method:
Our simple efforts start with this simple base class:

[TestFixture]
[Transaction(TransactionOption.Required)]
public class DatabaseFixture : ServicedComponent
{
    [TearDown]
    public void TransactionTearDown()
    {
        // abort the COM+ transaction, rolling back anything the test changed
        if (ContextUtil.IsInTransaction)
        {
            ContextUtil.SetAbort();
        }
    }
}
Here are some important pointers about this class:
- This class will be the base class that your DAL unit tests will inherit from. It allows for automatic transaction enlistment using the COM+ architecture through the EnterpriseServices namespace in .Net.
To that end:
- It inherits from ServicedComponent.
- It has the following attributes on it: [TestFixture] and [Transaction(TransactionOption.Required)].
The second attribute specifies that whenever this class is instantiated or invoked, COM+ will automatically create a transaction context for it to run under, or enlist it into an existing transaction context.
- It has a simple TearDown method which calls ContextUtil.SetAbort(). This causes the following flow to occur:
1. Before each test a transaction context is opened (or enlisted into an existing transaction)
2. The test is performed
3. TearDown is called and the transaction is rolled back, undoing any changes the test made to the database
We get the following benefits from this approach:
1. It is dead simple and easy to implement.
2. You need no code whatsoever to roll back any database changes your components made during the test. They are rolled back automatically.
3. You do not need to manage transactions yourself.
A few more settings and we’re done
We’re still not finished but we’re awfully close to the finish line.
All serviced components need to have strong names. This one is no exception.
- Use the SN.exe tool with the -k option (for example, sn.exe -k key.snk) to create a key file.
- In your test project open AssemblyInfo.cs and set the following attribute (add it if it is not there):
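The attribute in question is AssemblyKeyFile. The snippet below is a sketch that assumes the key file is named key.snk and sits in the solution directory, three levels above the project’s build output:

```csharp
// AssemblyInfo.cs – the path is resolved relative to the build output folder,
// so "..\..\.." walks back up to the solution directory
// (the key.snk file name is an assumption)
[assembly: AssemblyKeyFile(@"..\..\..\key.snk")]
```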
Notice that I’ve put here a relative file path. This is the file I generated from sn.exe and it is located in the same directory as my .sln file. That way I can put this same attribute on all the other projects that need it as well.
- Put a fixed version attribute in your test project:
- Look for this in the AssemblyInfo.cs file: [assembly: AssemblyVersion("1.0.*")]
- And change it to [assembly: AssemblyVersion("1.0.0")] or whatever version you like.
This step is important because, when the attribute is left in its default state, Visual Studio .NET automatically increments the assembly version with each build. That’s a bad practice on any project, but when you’re dealing with a COM+ component (and a ServicedComponent is just that) you need to realize that every separate version of the test assembly you build creates a different COM+ component in the COM+ catalog of the testing machine. So, after you’ve run the tests 50 times a day, your COM+ catalog might end up looking pretty scary, and we don’t want that. Setting a fixed version saves us this little headache.
- Since your test assembly now has a strong name, every assembly it references must have a strong name as well. This is easily done: put the same key file attribute in each of the projects referenced by your test project.
That’s basically it. You’re done.
Here is a simple example of a test fixture that derives from this class to achieve a perfectly transparent data rollback after each test. This class tests a simple class that does category insert and delete against the Northwind database:
[TestFixture]
public class CategoryTests : DatabaseFixture
{
    string CONN = @"Server=localhost;Database=Northwind;Trusted_Connection=True";

    [Test]
    public void Insert()
    {
        CategoryManager mgr = new CategoryManager();
        int newID = mgr.Insert("Test category");
        VerifyRowExists(CONN, newID, true);
    }

    [Test]
    public void Delete()
    {
        CategoryManager mgr = new CategoryManager();
        int newID = mgr.Insert("Test category");
        bool result = mgr.Delete(newID);
        Assert.IsTrue(result);
        VerifyRowExists(CONN, newID, false);
    }

    private void VerifyRowExists(string connectionString, int existingRowID, bool shouldExist)
    {
        string SELECT = "Select * from Categories where categoryID=" + existingRowID.ToString();
        SqlDataReader dr = SqlHelper.ExecuteReader(connectionString, CommandType.Text, SELECT);
        Assert.AreEqual(shouldExist, dr.Read());
        dr.Close();
    }
}
As you can see there is no cleanup code anywhere. That’s how simple this technique really is.
Just so you can see there are no tricks up my sleeve – here is the CategoryManager tested class (in VB.Net – just for the heck of it):
Public Class CategoryManager
    Private CONN As String = "Server=localhost;Database=Northwind;Trusted_Connection=True"

    Public Function Insert(ByVal CategoryName As String) As Integer
        'string concatenation kept for brevity – use parameters in real code
        Dim sql As String = "Insert into categories (categoryName) values('" + CategoryName + "');Select scope_identity()"
        Dim result As Integer = NorthwindDAL.Data.SqlHelper.ExecuteScalar(CONN, CommandType.Text, sql)
        Return result
    End Function

    Public Function Delete(ByVal id As Integer) As Boolean
        Dim sql As String = "delete from categories where categoryID=" + id.ToString()
        Dim result As Integer = _
            NorthwindDAL.Data.SqlHelper.ExecuteNonQuery(CONN, CommandType.Text, sql)
        Return (result = 1)
    End Function
End Class
No tricks, no hidden cards. The database actually gets updated, only the updates are rolled back.
In this article I’ve shown a way to define simpler, more maintainable unit tests against database access layer components using the powerful and easy-to-use services of COM+. Hopefully, with this technique, database unit testing can become a much less painful experience, allowing more developers to run more tests against their database code – and find those bugs as soon as possible.