Achieving And Recognizing Testable Software Designs – Part I

Roy Osherove

Principal, Team Agile

www.TeamAgile.com

blog: www.ISerializable.com

 

Recently I had the pleasure of speaking at the Microsoft Dev/IT Pro Days conference in Belgium. The organizers approached me and asked whether I would do a session on “Designing for Testability” as one of the three talks I was to give there.

The topic was not something I had spoken about before, but it was definitely something I had thought about and wrestled with many times, on many projects and occasions.

 

I set out first to determine what the definition of a “testable system” might be, in my eyes. I came to the realization that a testable system is not measured in a vacuum; its testability has to be “mirrored” through external, testing-related factors. For example, how easy would it be to write quality unit tests against such a system? And to answer that question, one has to ask what “quality unit tests” really means in this context. In this article we’ll try to define what a testable system design really means, and explore some basic design rules to make sure we can keep that testability in the system from the beginning.

 

Here’s my current definition of a testable system:

 

“For each logical part of the system, a unit test can be written relatively easily and quickly that satisfies all the following PC-COF rules at the same time:

 

Partial runs are possible

Consistent results on every test run

Configuration is unneeded before run

Order of tests does not matter

Fast run time”

 

 

Here are examples of all five rules. The first and foremost is the consistency rule.

 

Consistent results on every test run

A test should always fail or always pass; it should never switch between the two until a bug is fixed (or introduced). That’s all well and good for test quality, but how does it relate to a testable system? Remember, the definition says writing such a test should be relatively easy and quick. Imagine trying to test a scenario that uses multiple threads. Writing that test would be neither easy nor quick. Not only that, threading-related tests often exhibit a “random” style of behavior, where the test usually passes and only sometimes fails.

That can tell us several things. First, this is probably not a unit test but an integration test (which is fine and dandy, but is not a replacement for a logic-level unit test), and that may not be what we are looking for, assuming we just want to test the logic inside a single thread. Second, it tells us that we have a dependency we cannot control: the system’s threads (which is why I call it an integration test – we are testing our logic along with the threading machinery). We will either have to remove that dependency or change the way we test things. Either way, a threading-related scenario is not a very testable one.
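To make that “random” behavior concrete, here is a minimal sketch (not from the original article; the names are illustrative) of the kind of threading test that breaks the consistency rule. The unsynchronized counter makes the assertion pass on some runs and fail on others:

    // requires: using System.Threading;
    [TestMethod]
    public void Counter_IncrementedFromTwoThreads_IsExact()
    {
        int counter = 0;

        // Two threads increment a shared counter with no locking.
        Thread t1 = new Thread(delegate() { for (int i = 0; i < 100000; i++) counter++; });
        Thread t2 = new Thread(delegate() { for (int i = 0; i < 100000; i++) counter++; });
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // counter++ is not atomic, so lost updates make this assertion
        // pass on most runs and fail on others.
        Assert.AreEqual(200000, counter);
    }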

Here’s a simpler example that makes us design the system in a specific way so as to make it more testable. Consider the following piece of code that needs to be tested:

public bool CanPurchase(Person p)
{
    if (!(PersonValidator.IsValid(p)))
    {
        return false;
    }

    if (p.SSID != null &&
        p.SubscriptionType != "Canceled" &&
        p.CreditOnFile > 0)
    {
        return true;
    }

    return false;
}

 

Most of this method is the logic we want to test. The call to PersonValidator.IsValid(), however, is in fact an external dependency – in this case, a static method of the PersonValidator class. What’s the problem with this code? Why isn’t it considered testable? Consider the following:

· Suppose our test passes; that is, the logic is well tested. Then someone introduces a bug into the IsValid() method of the PersonValidator class. Our unit test suddenly fails, even though the logic in our method works just fine. Because we depend on that external method call, our test results may be inconsistent.

· We’d like one test for a state where the person is valid, and another where the person is *not* valid. Assume that the business rules for determining whether a person is valid are obscure, hard to write, and reside in a database somewhere. We can’t possibly know them while we’re writing the CanPurchase method. So how do we send in a person who is valid for one test and not valid for another? That’s the problem of controlling an external resource such as the IsValid method: we can’t easily control it.

 

The question for this scenario then is “How easy would it be to write a test that stays consistent in the face of bugs in other logic, and that can control the validation result so the code can be tested under various states of validation?” The answer in this case may be “Not very easy”. We can take this as a sign that the system design is not very testable (indeed, it’s spelled out right there in my definition of a testable system).

            Here’s one way to make the system testable for this scenario: use interfaces. Consider the following refactoring of the code, and think about the testability of the new version:

IValidator m_validator;

public void SetValidator(IValidator validator)
{
    m_validator = validator;
}

public bool CanPurchase(Person p)
{
    if (!(m_validator.IsValid(p)))
    {
        return false;
    }

    if (p.SSID != null &&
        p.SubscriptionType != "Canceled" &&
        p.CreditOnFile > 0)
    {
        return true;
    }

    return false;
}

What we’ve done is use an interface on a validator object, which allows us to replace that validator with a fake validator in our tests. That fake validator can be made to do whatever we want – return true, return false, or even throw an exception if we tell it to. It’s just a simple class in our test project, a “dummy” object that looks like a validator.

Here’s how a test with a fake object might look (this example uses the MSTest [TestClass] and [TestMethod] attributes; the equivalent NUnit test would use [TestFixture] and [Test]):

 

[TestClass]
public class PersonLogicTests
{
    [TestMethod]
    public void CanPurchase_DefaultPersonValidated_IsFalse()
    {
        MyFakeValidator val = new MyFakeValidator();
        val.whatToReturn = true;

        PersonLogic logic = new PersonLogic();
        logic.SetValidator(val);

        Person p = new Person();
        bool result = logic.CanPurchase(p);
        Assert.IsFalse(result);
    }

    public class MyFakeValidator : IValidator
    {
        public bool whatToReturn;

        public bool IsValid(Person p)
        {
            return whatToReturn;
        }
    }
}

The test creates the fake validator and injects it into the logic class. Notice how easy it is to control the return value and replace the external dependency in this case. This is a perfect example of how interfaces make code more testable. Using interfaces, it is truly easy and quick to write tests for the pieces of logic inside that method.

            “But you had to add a whole new method just to replace the object, and I might not want to do that for my class,” you may say. One solution is to put the code that exists only for testing purposes (the SetValidator method) inside a conditionally compiled block, or to mark it with the [Conditional] attribute. Another solution is to take the interface as a constructor parameter, or to expose it through a settable property. The main thing is to understand that testability changes our code, which is not necessarily a bad thing. The design may be more “open”, but that’s a “price” you might want to pay so that you know your code works as it should.
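To illustrate those alternatives, here is a minimal sketch of the constructor-parameter and property options (everything besides IValidator here is an assumption, not part of the original code):

    public class PersonLogic
    {
        IValidator m_validator;

        // Constructor injection: the dependency is supplied when the
        // object is created, so no separate test-only method is needed.
        public PersonLogic(IValidator validator)
        {
            m_validator = validator;
        }

        // Alternatively, a settable property plays the same role
        // as the SetValidator method shown earlier.
        public IValidator Validator
        {
            get { return m_validator; }
            set { m_validator = value; }
        }
    }

In a test you would then write new PersonLogic(val) instead of calling SetValidator.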

 

Next we’ll take a look at the “Configuration is not needed” part of the equation.

 

Configuration is not needed

Why is configuration a part of this article? Being able to configure a class at runtime, from code, is important for unit tests. If code requires external configuration before it can be tested, it will take more time to write tests for that code, and it can make the tests less manageable and harder to write. Consider the following piece of code:

 

public bool IsConnectionStringValid()
{
    string connString =
        ConfigurationSettings.AppSettings["conString"].ToString();
    // do some stuff
    // ...
    return true;
}

 

 

The call to ConfigurationSettings.AppSettings reads from an external configuration file to do its bidding. If we tried to write a unit test for this code, we’d have to include a configuration file with our test. That’s not a bad idea sometimes, but then it wouldn’t really be a unit test; it would be an integration test – a test which uses more than one unit to test some logic. Integration testing has its place and time, but we’re talking about a system which is easily unit-testable.

So how do we overcome this problem of testability? We could introduce an interface into the mix: use a separate class to get the connection string, have it implement an interface, and replace that class in your tests with a “fake” class of your own – one that provides a connection string without needing any configuration. That’s one way to do it, indeed, as sketched below.
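Here is a minimal sketch of that interface-based option; the IConnectionStringProvider name and its members are illustrative, not part of the original code:

    public interface IConnectionStringProvider
    {
        string GetConnectionString();
    }

    // Production implementation: reads the real configuration file.
    public class ConfigFileConnectionStringProvider : IConnectionStringProvider
    {
        public string GetConnectionString()
        {
            return ConfigurationSettings.AppSettings["conString"];
        }
    }

    // Test implementation: returns whatever string the test assigns.
    public class FakeConnectionStringProvider : IConnectionStringProvider
    {
        public string StringToReturn;

        public string GetConnectionString()
        {
            return StringToReturn;
        }
    }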

            Here’s another way: use a virtual method to get the external data you need. With a virtual method in place, we can derive from the class, override the method in the derived class to return whatever value we like, and then use that derived class in our tests. The trick is to extract into the virtual method only the simplest code possible that touches the external dependency. Don’t extract an “if” statement, for example, because then you’ll be overriding that logic in your test.

            Here’s the same code refactored to use a virtual method:

 

public class ConnectionHelper
{
    public bool IsConnectionStringValid()
    {
        string connString = getConnectionString();
        // do some stuff
        // ...
        return true;
    }

    protected virtual string getConnectionString()
    {
        return ConfigurationSettings.AppSettings["conString"].ToString();
    }
}

 

The extracted getConnectionString() method is the refactored code. By refactoring we were able to keep the original functionality of the code and make it more testable at the same time. That is the definition of the word “refactoring”: changing existing code without changing its functionality. You’ve done this numerous times yourself if you’ve ever changed a method name or extracted a long method into several smaller method calls.

 

Let’s see how we’d write a test with that refactoring in place. First, we’ll create a class that derives from the class we’d like to test. We’ll call it TestableConnectionHelper, and then write our tests against that testable class. Here’s how it looks:

 

[TestMethod]
public void ConfigBased()
{
    TestableConnectionHelper myClass = new TestableConnectionHelper();
    myClass.mConnectString = "bad string";
    Assert.AreEqual(false, myClass.IsConnectionStringValid());
}

 

 

public class TestableConnectionHelper : ConnectionHelper
{
    public string mConnectString;

    protected override string getConnectionString()
    {
        return mConnectString;
    }
}

 

Note that the testable class overrides only one simple thing; the rest of the functionality under test stays the same as in the original class. This testable class also resides in our test project, and is not part of the production code.

            This technique is called “Extract & Override” and is very powerful for refactoring existing code into testability. It does require, however, that a virtual method is available to override. That’s why I recommend, as a rule, being virtual by default: make methods virtual by default, and make sure to always go through a method call when getting data from external resources in your logic. That guarantees a way in for writing tests.

We went into all this because we wanted it to be easy and quick to write tests that don’t rely on a configuration scheme. That’s been achieved in this scenario.

 

 

Fast Run Time

It may not sound like much, but having very fast unit tests can make or break your development cycle. Is half a second per unit test fast enough? Not even close. Imagine having 5,000 methods with logic, with at least a couple of unit tests per method. That’s 10,000 tests right there. If each of them took half a second, that would be over 80 minutes of run time; even if only 30% of them were that slow, that’s still 25 minutes just for those tests. How often would you run a test suite that takes half an hour? Once, maybe twice or three times per day, but no more. If it took two minutes to run, things would be much different and you’d run it much more often.

But how does this “best practice” relate to a testable system? Well, in a non-testable system you would find it really hard, or really time-consuming, to write tests against some objects and make those tests run fast. That’s because you’d most likely have objects that perform some sort of time-consuming activity through their external dependencies. For example, take the “Validator” example explained earlier:

 

if (!(PersonValidator.IsValid(p)))
{
    return false;
}

What if the IsValid method calls a web service to do its bidding? Or reads rules from a database? Or simply performs very lengthy processing of in-memory rules that might take a few seconds, or even half a minute, in some cases? Writing a fast-running test with that method in place would be a nontrivial task: you would need to either make sure the input data will not take long to process, or configure the validator at runtime to skip the heavy processing. Either way you’ll end up writing more code in the test, making it more complicated, and having a harder time than you would with a testable system. To make it testable you can take the same actions explained before: refactor the code to use an interface, or wrap the validation call in a virtual method that returns a true/false result, then override it in your “testable” class.
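As an illustration, here is a minimal sketch of that virtual-method option applied to the validator, following the Extract & Override pattern shown earlier (the class and method names are assumptions):

    public class PersonLogic
    {
        public bool CanPurchase(Person p)
        {
            if (!IsPersonValid(p))
            {
                return false;
            }
            // ... the rest of the purchase logic ...
            return true;
        }

        // The potentially slow external call is isolated in one
        // small virtual method.
        protected virtual bool IsPersonValid(Person p)
        {
            return PersonValidator.IsValid(p); // may hit a web service or database
        }
    }

    // In the test project: replace the slow call with an instant stub.
    public class TestablePersonLogic : PersonLogic
    {
        public bool validationResult;

        protected override bool IsPersonValid(Person p)
        {
            return validationResult;
        }
    }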

 

Order of tests does not matter

If you’re doing database-related development and have tried to write “unit tests” for your data layer, you’re already familiar with this issue.

 

public bool Insert(Person p)
{
    // insert person into the database
    // ...
    return true;
}

public bool Delete(Person p)
{
    // delete person from the database
    // ...
    return true;
}

 

 

 

The issue here is that the tests rely on external state in the database. In order for the delete test to work, there has to be a row in the database to delete successfully, and for the insert test to work there must be no duplicate row already in the database. Many people who first start out testing data-related actions do the “obvious” thing: they simply make sure that the delete test runs after the insert test, and that both use the same row data. That, of course, is problematic, because most unit test frameworks cannot guarantee the order in which tests will be executed. Just because the tests happen to run in a friendly order today does not mean they will tomorrow.

So how do you make sure that tests can be run in no specific order? 

In this case that’s very hard. You need to make sure the database’s state is rolled back before and after each test, which can be both time-consuming and hard to code. That means we break at least the “runs fast” rule, and even if we didn’t, we’d be breaking the “easy to write” rule in the definition.
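For completeness, here is one common way to do such a rollback – a minimal sketch (not from the article) using System.Transactions, where each test runs inside a TransactionScope that is disposed without being completed, so all of its database changes roll back automatically (PersonData is a hypothetical data layer class):

    using System.Transactions;

    [TestClass]
    public class PersonDataTests
    {
        TransactionScope scope;

        [TestInitialize]
        public void Setup()
        {
            // every database change in the test joins this transaction
            scope = new TransactionScope();
        }

        [TestCleanup]
        public void Teardown()
        {
            // disposing without calling Complete() rolls everything back
            scope.Dispose();
        }

        [TestMethod]
        public void Insert_NewPerson_ReturnsTrue()
        {
            PersonData data = new PersonData(); // hypothetical data layer class
            Assert.IsTrue(data.Insert(new Person()));
        }
    }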

What does that tell us?

It could tell us that the system is not testable, and I’d agree. A data layer, in my mind, is not testable by itself, and should always be tested along with the database it represents. That’s a whole other topic, but essentially, you want to test the “logic” in the data layer, which is, like it or not, highly coupled with the logic in the database. So, even though the system is not “unit” testable, we still want to test it.

Is making it testable using interfaces and virtual methods a good idea? My opinion is that for anything *but* data layers this is a good idea; in the data layer case it is a bad one, because logic exists in the database as well.

That leads to a different conclusion: we’d want to write *integration tests* for the data layer – tests which look like unit tests and use the unit test framework tools to do their work. Integration tests don’t test a class or method separately, but several items working together.

If we accept that we would rather have integration tests, not unit tests, for our data layer, everything falls into place. Because these are integration tests:

· It will be harder (or impossible) to write them in a way that keeps them fast

· It will be harder (but necessary) to make sure the order of the tests does not matter

· It will be harder to make sure they stay consistent between runs

 

The reason I’m using this example here is that the ordering of tests, and the next part – the partial running of tests – both relate to integration-test problems, and these “problems” help us define not only what a testable design means, but what a *unit-testable* design means. If test ordering needs to be maintained and cannot easily be rewritten away, you’re probably writing integration tests anyway.

 

What’s the design rule here? It’s mostly an observational rule: make sure you’re writing unit tests; where you can’t, expect problems with the ordering and partiality of the test runs.

 

Partial Runs of tests are possible

This last case relates directly to the previous one. When you have external state, you’re likely to have a problem if you don’t run all the tests in your suite, since the external state may need to be adjusted between specific tests (so that a row is inserted just in time for the delete test, for example).

Again, this problem is solvable, but how easy is it to solve? Not as easy as writing simple unit tests without external state, that’s for sure. The whole problem of rolling back external state is that of controlling the before and after states and making sure they are the same for every test run. One way to get there, sketched below, is to have each test create the state it needs instead of relying on a previous test.
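Here is a minimal, illustrative sketch of that idea (PersonData is the same hypothetical data layer class as before); combined with the transaction rollback shown earlier, the test can run alone, in any order:

    [TestMethod]
    public void Delete_ExistingPerson_ReturnsTrue()
    {
        PersonData data = new PersonData(); // hypothetical data layer class
        Person p = new Person();

        // Arrange: this test inserts its own row, so it passes
        // whether or not the insert test ran before it.
        data.Insert(p);

        // Act & assert
        Assert.IsTrue(data.Delete(p));
    }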

If you can’t do that easily you’re either not using a testable system design, or you’re not writing a unit test.

 

Summary

The main design rules I’ve described so far are:

· Use interface-based design

· Make methods virtual by default

 

In the next part of this article we’ll take a look at more specific examples of dependencies in code and how, using these two simple techniques, we can overcome them. We’ll talk about what refactoring the code has to undergo for these techniques to be effective, and see that refactoring code for testability is easy to accomplish if you’re looking for the right things:

· Refactoring and testing around singletons

· Refactoring and testing singletons

· Refactoring and testing direct object calls and static methods

· More…

 

 
