Every once in a while in my development career I realize just how much I’ve been missing out on a technique I’ve only just discovered. Such was the case when I learned about design patterns, when I learned how to write self-documenting code, and on countless other occasions where I felt I’d “discovered a whole new world” and from that moment on tried to stick to what I’d learned. Test-Driven Development (TDD) was definitely one of those techniques.
I first encountered it when I discovered Extreme Programming (XP). XP is a software development methodology with some hardcore guidelines on how to achieve success in development projects; it has a very steep learning curve, and you can find plenty of books on the subject nowadays. TDD is one of the trademarks of XP (you might have heard of other, more “provocative” notions such as pair programming), but it wasn’t invented there. There’s a whole movement behind what are known as “Agile methodologies”, and you might find it interesting to explore the subject. A good start would be Martin Fowler’s site; he’s one of the granddaddies of everything agile.
But I digress. The whole notion of TDD is just that: development driven by testing. Does that sound weird? That’s because we’re used to it being the other way around. Usually we code something up and throw it over to QA for testing. Sometimes we hack out some tests after the code base is done, just to make sure that “everything appears to be working”. TDD turns this whole process upside down. Whenever we are about to code something, we stop and first write a test that makes sure our code works. Yep. We don’t have any code yet. But we have a test to prove it.
Here are the bullet points for how to start doing TDD:
1. Write a test that fails
2. Make it pass
3. For every new feature, go to step 1
It’s that easy. Really. Now, I know this sounds weird at first. How can we test code that does not exist yet? Well, if our code even fails to compile, that means the test failed in our book. So it’s a start. To show just how easy this is, let’s make a pretend project.
Our first task in the project is to create a class named Calculator that contains a method that adds two numbers. Given what I’ve just explained, the first step is to have one test that fails. So let’s create a new Console Application project. This will be our testing project, not the real project we have to deliver; it will contain the code that tests our production code.
In our Main() method, let’s add a few lines of code that determine whether our code (which does not exist yet) runs OK.
The first thing we want to do is test that we can create a new instance of our calculator class:
[STAThread]
static void Main(string[] args)
{
    Calculator calc = new Calculator();
}
Now, of course this code won’t compile, but what we have just done is create a test that relies on the existence of a class named Calculator. Our next step is to make the test pass. To do that we’ll finally create our “real” project and add a class named Calculator to it. Once we add a reference to that project from our test project, we’ll see that the code compiles and runs just fine. We just made the test pass.
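At this point the production class can be completely empty; the sketch below is all that’s needed for this first test to compile and pass:

```csharp
// The "real" project's Calculator class. An empty class is enough
// for the creation test to compile and pass.
public class Calculator
{
}
```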
Our next goal? Add an Add() method that takes two numbers and returns their sum. Again, what’s the first thing we do? That’s right. We write a test that fails. So let’s add the new test to our previous test code:
[STAThread]
static void Main(string[] args)
{
    //test creation
    Calculator calc = new Calculator();

    //test addition
    int retVal = calc.Add(22, 20);
    if(42 != retVal)
    {
        throw new Exception("calc.Add(22,20) did not return a value of 42");
    }
}
That was simple. Again, this code won’t compile. We have to create the code that performs the addition. Once we’ve added a simple Add() method we can re-run the test and make sure that it works. Let’s make this a little more interesting, though. What happens when we send a null value in one of the parameters? Suppose the design requires us to throw a CalcException when a null value is sent to Add() (note that for a null to be passed at all, the parameters need a nullable type; a plain int can never be null). If we want to add this feature, we first need a test for it:
//test exception
try
{
    calc.Add(null, 22);
    throw new Exception("calc.Add(null,22) did not throw a CalcException");
}
catch(CalcException)
{
    //test passed
}
Now the code won’t compile again: we don’t have a CalcException class defined. Once we define it, we can run the test, but it will still fail, since we’ll get a standard exception rather than a CalcException from the Add() method. So we change our code to throw that exception… and so on.
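Putting it together, one way to make the exception test pass might look like this (a sketch: the exact CalcException shape and the nullable int? parameters are my assumptions, since a plain int parameter could never receive the null the test passes in):

```csharp
using System;

// The custom exception type the design calls for.
public class CalcException : Exception
{
    public CalcException(string message) : base(message) { }
}

public class Calculator
{
    // int? lets a caller pass null; a plain int could not.
    public int Add(int? a, int? b)
    {
        if (a == null || b == null)
        {
            throw new CalcException("Add() does not accept null values");
        }
        return a.Value + b.Value;
    }
}
```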
As you can see, this process is pretty easy. At every step I’ve defined a goal and then made sure I meet it. What we have at the end of this session is a piece of code that is thoroughly tested. Not only that, we get several added bonuses:
· Anyone who looks at our tests can understand the intention and purpose of each method
· We can make changes to our code, know “what we broke” with the click of a mouse, and fix it just as fast
· If our tests cover all of our code like this, we can find bugs at build time that would otherwise take a very long time to surface at the customer’s site
But some things are definitely missing from our current solution:
· No automation. If I wanted to run a build and collect the results of the tests, I’d have a long day ahead coming up with a solution that traces the messages and outputs them.
· No reuse. I’d have to rewrite the output handling from scratch every time I want to test a project.
· No decoupling. Code that runs one test must be totally decoupled from code that runs another. I always want my testing code to run within a given context, with a known set of values for parameters and so on. I don’t want other tests messing up my state when they change stuff. There’s no framework that gives me a separate state for each test without significant work every time.
NUnit
What’s needed here is a framework that lets us write tests without worrying too much about how we’ll get their results back. The de facto framework for .NET unit testing is NUnit. Currently at version 2.1, NUnit provides a set of base classes and attributes that let us abstract away the plumbing and concentrate on the code that actually does the testing. The beautiful thing is that moving from our current coding/testing style to NUnit style requires little learning and is very easy to master.
NUnit lets us separate our testing code into what are logically known as tests, test fixtures and test suites. The concept is very simple.
· You write a test to verify a single piece of functionality.
· You group one or more tests inside a test fixture, which lets you easily set up repeatable state for each test (I’ll explain shortly).
· You group one or more fixtures inside test suites to logically separate the tests and their meaning.
So how do we convert our code to NUnit style?
· Download and install NUnit
· Add a reference to nunit.framework.dll to our testing project
· Add a using clause for the NUnit.Framework namespace in a new class file
Now we’re ready to start working.
Change the class’s name to MyTestClass.
This class will hold the fixture for our tests. We also need to let the NUnit framework know that this class is a fixture, so we simply add a [TestFixture] attribute on top of the class name. You can remove the default constructor from the class (but don’t make it private!). Once we’ve done that, we have a class that looks like this:
[TestFixture]
public class MyTestClass
{
}
Easy enough; we just have to start adding tests to the class. We’ll use the code from our previous example to test against Calculator.
A test in a fixture is defined as a public method that returns void and accepts no parameters, marked with the [Test] attribute and containing one or more assertions. So let’s add the first test:
[TestFixture]
public class MyTestClass
{
    [Test]
    public void TestAddition()
    {
        //test addition
        Calculator calc = new Calculator();
        int retVal = calc.Add(22, 22);
        Assert.AreEqual(44, retVal,
            "calc.Add() returned the wrong number");
    }
}
As you can see, it’s the same code as before, only now it’s sitting in a method of its own, decorated with the [Test] attribute. Also, instead of manually throwing an exception, I’m using the Assert class, which is part of the NUnit framework. You’ll get to know this class well, as it’s the main instrument you’ll use to verify your code. The Assert class fails the current test if the condition passed to it does not hold. It contains only static methods, which let you make sure a value is not null, equals another value, or satisfies any Boolean expression you like. You can also pass in messages that explain the meaning of a failure.
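Here are a few of the Assert methods you’ll meet most often, gathered into one test (a sketch that assumes the nunit.framework reference and the Calculator class from earlier):

```csharp
using NUnit.Framework;

[TestFixture]
public class AssertSamples
{
    [Test]
    public void CommonAsserts()
    {
        Calculator calc = new Calculator();
        int retVal = calc.Add(22, 22);

        Assert.AreEqual(44, retVal, "values differ");    // expected, actual, message
        Assert.IsTrue(retVal > 0, "expected positive");  // any Boolean expression
        Assert.IsNotNull(calc, "calc was not created");  // null check
    }
}
```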
Now we need to build our testing project so we can move on to the next step.
Test suites
Test suites in the new versions of NUnit are derived directly from the namespaces the test fixtures reside in. If you have two fixtures in separate namespaces (i.e. one is not contained inside the other), they are considered to reside in two separate test suites.
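For example, these two fixtures end up in two separate suites simply because they sit in different namespaces (the namespace and class names here are illustrative, and the nunit.framework reference from before is assumed):

```csharp
using NUnit.Framework;

namespace MyProject.Tests.Math
{
    [TestFixture]
    public class CalculatorTests   // appears under the MyProject.Tests.Math suite
    {
        [Test]
        public void TestAddition() { Assert.AreEqual(44, 22 + 22); }
    }
}

namespace MyProject.Tests.IO
{
    [TestFixture]
    public class FileTests         // appears under the MyProject.Tests.IO suite
    {
        [Test]
        public void TestNothingYet() { Assert.IsTrue(true); }
    }
}
```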
So what now?
GUI
Well, now it’s time to run your first NUnit test. When you install NUnit you get two ways to run your unit tests: a GUI-based version of the NUnit test runner, or a console-based one. The GUI one is located in Start->Programs->NUnit V2.1->NUnit-Gui. When you open it you get a not-exactly-beautiful but very functional interface that lets you select an assembly containing compiled unit tests and run all the tests inside it.
· Select File->New project
· Select Project->Add assembly and select your compiled tests assembly.
Once you’ve selected your assembly you’ll see the tree on the left fill up with namespaces, the names of the test fixtures inside them and the names of the tests inside those. Now you can see why it’s important to put those attributes on our classes and tests: it’s how our testing GUI finds them and runs them.
Make sure the top node of the tree is selected and click “Run” on the right side of the form. You’ll see the progress bar quickly turn green to signify success. If the bar is red, it means a test failed and you can go back and make it succeed.
I won’t go into too much detail here on how to use all the features of the NUnit GUI; you can learn all you need by reading its documentation.
Feel free to close the GUI; it will remember the last assembly you loaded next time.
One important thing to note here is that a failing test does not stop the run: the other tests in the suite still execute, and each failure is reported on its own.
Console
Besides the GUI version of the NUnit test runner, you also get a console test runner. This is especially good when you have an automated build procedure that runs unattended. You can have it call the console version of NUnit, which writes directly to standard output, and log all the results.
To run the tests from the console, switch to [NUnit program files folder]\bin. From there you can run nunit-console.exe, providing the name or full path of the assembly to test against. I urge you to put that folder in the global PATH environment variable so you can use the console easily from anywhere.
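Assuming a default installation and a compiled test assembly named MyTests.dll (both paths here are illustrative), a run might look like this:

```shell
cd "C:\Program Files\NUnit V2.1\bin"
nunit-console.exe C:\MyProject\bin\Debug\MyTests.dll
```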
More testing goodies we get
· Another attribute you can put on a test is [Ignore("reason")]. Use it to skip certain tests; the reason for skipping them will be displayed inside the GUI.
· You can have a [SetUp] and a [TearDown] method inside your fixture. The [SetUp] method runs before each test in the current fixture, and [TearDown] runs after each test. These methods are very useful when you want all your tests to use the same set of cleanly initialized data. In them you can initialize member variables, delete or create needed files and so on. Think of [SetUp] as an implicit constructor for each test, and of [TearDown] as its destructor. Methods marked with these attributes should not be marked as tests as well!
· You can have [TestFixtureSetUp] and [TestFixtureTearDown] methods as well. These run only once per test fixture. Use them for global initialization and cleanup of resources that can be shared by all tests in that fixture.
· Another excellent attribute is [ExpectedException]. When a test method is decorated with this attribute and no exception of the specified type is thrown inside the test, the test fails. This is perfect for checking that your components throw exceptions at the right moments, such as on bad user input. We’ll use this attribute to add another test to our fixture, one that tests for the CalcException:
[Test]
[ExpectedException(typeof(CalcException))]
public void TestException()
{
    Calculator calc = new Calculator();
    calc.Add(null, 22);
}
As you can see it couldn’t be easier.
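As an example of the [SetUp]/[TearDown] pair described above, our fixture could hand every test a fresh Calculator instead of letting tests share state (a sketch, again assuming the nunit.framework reference):

```csharp
using NUnit.Framework;

[TestFixture]
public class MyTestClass
{
    private Calculator calc;

    [SetUp]
    public void Init()
    {
        // Runs before every test: each test gets a brand-new Calculator.
        calc = new Calculator();
    }

    [TearDown]
    public void Cleanup()
    {
        // Runs after every test: release whatever the test used.
        calc = null;
    }

    [Test]
    public void TestAddition()
    {
        Assert.AreEqual(44, calc.Add(22, 22), "calc.Add() returned the wrong number");
    }
}
```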
The Nunit-Addin
Now that you understand the basics of writing unit tests with Nunit, it’s time for me to introduce one of the coolest gadgets related to this subject – the Nunit Addin.
This add-in allows you, instead of re-opening the NUnit GUI every time you need to make sure your tests pass, to just right-click on the project or class you wish to test and hit “Run Test(s)”. You’ll get all the information inside VS.NET’s output window.
This add-in allows more than just this functionality, however. It allows you to test a single method from inside the code editor. Just click anywhere inside the code of that method and hit “Run test”.
Another very powerful feature lets you do what is called “ad-hoc testing”. You can write any method, without even putting a [Test] attribute on it. Then, inside that method, right-click and hit “Test with”->“Debugger”, and you immediately step into that method without needing to create a separate project that calls it. Very powerful indeed. You can also debug using different versions of the .NET framework, or even Mono. This add-in is a must-have for quick incremental development.
A word before we finish
The technique I’ve shown here means very little if not pursued diligently. Remember: the first thing you ever do is write the test, not the code. If you keep this up you’ll eventually end up with a system that is fully testable and has fewer bugs. You’ll also find that you think about your components’ design more responsibly, because you’re looking at them from a different perspective. Once you get the “Zen” of it, you’ll even start to have more fun doing it. You’ll also gain confidence in changing your code: you get instant feedback if something breaks, and you can squash bugs at their inception point.
One more thing worth knowing: NUnit is the unit testing framework for .NET, but there are many frameworks like it, for practically any semi-popular programming language out there. If you program in C++, for example, take a look at CppUnit. There’s also JUnit for Java; in fact, NUnit is a port of JUnit to .NET. There are also commercial frameworks and add-ins that try to provide added functionality for .NET unit testing, including HarnessIt, csUnit and X-Unity.
Most of the non-.NET frameworks support the same logical notions of test case, fixture and suite, but each may provide different means of expressing them. Attributes are unique to .NET; in other OO languages you might have to derive a class from TestFixture to declare it as a fixture, and so on. You can find a complete list of frameworks over at www.xprogramming.com (look for the “downloads” link).
Advanced issues
This article is just the first in a series. In the next articles I’ll talk more about the real-world problems facing a developer who wants to test real-world applications. Some of these issues include:
· Testing abstract classes
· Testing complex object models and dependencies
· Testing database related features
· Mock objects and their use
· Testing GUI interactions