I hear a lot of objections to writing unit tests. They vary from “The test isn’t catching any bugs because the code is too simple” to “The test is more complicated than the code; who is writing tests for the tests? Ha ha ha!” However, the one I hear most often is “I changed my code and all my tests stopped compiling. Now I have to do much more work and I still didn’t catch any bugs.” This comes up so often that it deserves some real thought.
Here is the bad news: even though writing unit tests can improve the quality of your code, No One Said That It Was Going To Be Easy. The problem, as some of the cynics point out, is that test code is still code and it will need work.
We have two ways to tackle this:
1) increase the worth of each unit test
2) decrease the work required to write and maintain each unit test
Improving unit tests is a topic that can fill, and has filled, entire books. Writing the tests in the first place can be hard, mainly because we don’t know how to write them or what to test. The work that a unit test requires bites us in two ways: it bites when we write the test, and it bites again and again and again when we change the code. Maintaining the tests can also be hard, and that is what often leads to dead tests, with people either ignoring them or commenting them out so that the “real” code compiles.
There are two parts to how we address the time it takes to write and maintain test code; but first, that bad news again: it will take longer to write software that is tested than to just hammer it out with no tests. However, the fact is that if it isn’t tested then it isn’t done. You can argue, and some of you will, that a test that is executed once via the GUI when you write the code is a valid test. You will probably also argue that writing a unit test that is reliable enough to be run automatically is too much work and is slowing you down. Well, that is true. Writing no tests at all will certainly be faster… for the first hour. If you are unlucky enough to have a business-driven solution, it will be complex and subtle, and your lack of tests will bite you before you even do the first release. The reality is that, like using software to do anything else, writing software to test other pieces of software automatically will take time, effort and brains. The good news is that writing good test code is no different from writing any other sort of good code: you keep it simple, reduce duplication and so on.
However, we can write software so that testing it is easier, and we should. When you write your unit tests, you are the first consumer of the code. If you can’t drive it from the outside, it is awkward to test, and if it is awkward to test it will also be awkward to use. If the tests break when you change the code, then the same will happen to your “real” code. This isn’t about being test driven (although that helps); it is about writing genuinely OO software. OO code is naturally testable and doesn’t need to be redesigned for testability. That’s the funny thing: test code is also real code.
Let’s look at a couple of examples of common problems when creating and modifying code that has unit tests. First, the classic: adding a parameter to a method. I’ve heard this so many times: a developer modifies a small piece of a function and needs to add a parameter; therefore they have to modify 50 different test methods. I think that there are a few ways to deal with this depending on the situation:
1) Mechanically alter the tests
Be honest; if the tests are that mechanical then it will probably only take a few minutes to find and fix them. You are, however, then in the dangerous position of having altered both your test code and your “real” (system under test, or SUT) code at the same time. Of course this option will come up sometimes, but it really isn’t the answer. If you had to do the same thing to real code, you might grin and bear it once, but if you left it alone it was probably because you were too scared to change it.
2) Restructure the tests
Test code with that much duplication in it is probably bad anyway. You should clean it up before you start the changes on the real code. Instead of calling the method in 50 different places, hand the task over to a single helper method; then you only need to make the change in one place.
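As a sketch of that restructuring (the `PriceCalculator` name and the extra `currency` parameter are invented for illustration), all fifty tests funnel through one helper, so the signature change lands in exactly one place:

```java
// Hypothetical SUT: a price calculator whose calculate() method
// just gained an extra "currency" parameter.
class PriceCalculator {
    int calculate(int quantity, int unitPrice, String currency) {
        // (currency handling elided; the point is the signature change)
        return quantity * unitPrice;
    }
}

class PriceCalculatorTests {
    // Single funnel point: every test calls this helper instead of the
    // SUT directly, so the new parameter is added in ONE place.
    static int calculate(int quantity, int unitPrice) {
        return new PriceCalculator().calculate(quantity, unitPrice, "USD");
    }

    static void testSimplePrice() {
        if (calculate(2, 10) != 20) throw new AssertionError();
    }

    static void testZeroQuantity() {
        if (calculate(0, 99) != 0) throw new AssertionError();
    }

    public static void main(String[] args) {
        testSimplePrice();
        testZeroQuantity();
        System.out.println("all tests passed");
    }
}
```

If `calculate` grows another parameter tomorrow, only the helper changes; the fifty tests that call it don’t even notice.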
If the test code is bad you end up with brittle tests that keep breaking the build. If you have tried to get that warm fuzzy feeling of “mmmm, lots of tests” by using cut ‘n’ paste then your tests are probably quite shallow anyway. If things are properly structured you shouldn’t see that kind of duplication.
Also, testing the same method over and over with the same parameters while playing with just one of them (making it null, negative, out of range, etc.) is really just ONE test. The exception to this rule is a class that represents an external, or public, interface. If, for instance, you have made a class that is a web service and you have told your users that it is exception safe, it had better be just that. And you had better have lots of tests to back that up. Of course, if the interface changes then the tests may break, but that is OK, as you can’t change it anyway: all those external people are using it! There are ways to take that repetition out, like using reflection. Didn’t think of using reflection in test code? Hmm, almost as if it were “real” code. I remember working on the first project I did using unit testing like this, and manually cranking out hundreds of these suckers. Ridiculous, but I’m stronger for it. 🙂
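One hedged way to fold that repetition into a single test is a table-driven loop over the bad values (the `AgeService` class and its exception-safety contract here are entirely hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical public-facing service that promises to reject bad input
// with IllegalArgumentException rather than failing deep inside.
class AgeService {
    void setAge(Integer age) {
        if (age == null || age < 0 || age > 150)
            throw new IllegalArgumentException("bad age: " + age);
    }
}

class AgeServiceTests {
    // One data-driven test replaces a copy-pasted test per bad value.
    static void testRejectsBadAges() {
        List<Integer> badAges = Arrays.asList(null, -1, 151, Integer.MIN_VALUE);
        for (Integer bad : badAges) {
            try {
                new AgeService().setAge(bad);
                throw new AssertionError("accepted bad age: " + bad);
            } catch (IllegalArgumentException expected) {
                // the promise held for this value
            }
        }
    }

    public static void main(String[] args) {
        testRejectsBadAges();
        System.out.println("all bad ages rejected");
    }
}
```

Adding a new boundary case means adding one entry to the table, not another copy of the test.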
But what if the changes mean that you can’t do the tests anymore? Well, in that case it might be that the changes were bad! If you make changes too deep, too fast, you won’t be able to get it right, tests or no tests. The answer: roll back your changes, read the refactoring book again, try again! Make slow changes, only one change at a time, and test in between. Even if you haven’t got full test coverage this makes sense, and automated unit tests will be better than running up the GUI once per minute. If you need to change both the test code and the SUT code then you are in trouble and you need to go even slower. In this case I’d recommend refactoring the test code into a structure that lets you test the new structure, then changing the SUT code. Either way, you need to build some piece of code that acts as a scaffold, allowing one thing to be changed at a time; when both sides are in the new form you can throw the scaffold away.
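As a minimal sketch of such a scaffold (the `Mailer` class and its methods are invented for illustration), the old method signature stays behind temporarily and delegates to the new one, so callers and tests can migrate one at a time:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mailer whose send() is gaining an explicit "from" address.
class Mailer {
    final List<String> outbox = new ArrayList<>();

    // New form of the method.
    void send(String to, String from, String body) {
        outbox.add(from + " -> " + to + ": " + body);
    }

    // Scaffold: the old form is kept temporarily and delegates to the new
    // one. Delete it once every caller and every test uses the new signature.
    @Deprecated
    void send(String to, String body) {
        send(to, "noreply@example.com", body);
    }
}
```

Tests that use the old signature keep passing while you move them over one by one; when the deprecated overload has no callers left, the scaffold comes down.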
3) Restructure the real code
Maybe the problem behind the breaking tests is not the tests. If you have a method with more than three parameters, you are probably in trouble anyway. Don’t tell yourself that it is only one more parameter. It is time to get out your refactoring tool, use those tests (if they are worth anything) and change the interface to something that will tolerate change. The simple refactoring in this case is to replace the long list of parameters with a property bag; then when you need to add parameters, the method signature won’t change and the tests won’t break.
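A minimal sketch of that property-bag refactoring, with an invented `OrderRequest`/`OrderService` pair: adding the `couponCode` field later does not touch the method signature, so existing tests keep compiling:

```java
// Hypothetical "property bag" replacing a long parameter list. Adding a
// field later doesn't change the method signature, so callers and tests
// that don't care about the new field don't change at all.
class OrderRequest {
    String customer;
    int quantity;
    int unitPrice;
    String couponCode; // added later: no existing test had to change
}

class OrderService {
    int total(OrderRequest req) {
        int total = req.quantity * req.unitPrice;
        if ("SAVE10".equals(req.couponCode)) total -= total / 10;
        return total;
    }
}

class OrderDemo {
    public static void main(String[] args) {
        OrderRequest req = new OrderRequest();
        req.customer = "alice";
        req.quantity = 3;
        req.unitPrice = 100;
        // couponCode left null: tests written before it existed still pass
        System.out.println(new OrderService().total(req)); // prints 300
    }
}
```

The trade-off is that the compiler no longer forces callers to supply every value, so the bag’s validation becomes something worth testing in its own right.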
In my experience, this is actually less likely than option 2, but still possible. There are two possibilities here. The first is that you don’t really do TDD very well yet, and you’ve made a set of classes that wasn’t well designed and now needs to change. When you’ve done a little more TDD you might either accept that things sometimes need to be thrown away, or get annoyed and start making more flexible, testable code that requires you to throw away less.
The other possibility is that you’ve made code that no longer needs testing; it is possible that you have refactored your code into being compile-safe, and there is no point in testing the compiler, as it isn’t the SUT. Most people who have broken a set of tests like the idea that the new code doesn’t need testing; but don’t give me that “Look, the application works, I don’t need to run all the tests”. A working application is the weakest criterion for tested code and simply unacceptable for anything that is ever going to change, and you will be responsible when it breaks.
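A small sketch of what “refactored into being compile-safe” can look like (the `Order` example is hypothetical): replacing a free-form status string with an enum turns a whole class of bugs into compile errors that need no test:

```java
// Hypothetical example: the status used to be a free-form String, so tests
// had to probe for typos like "PIAD". As an enum, an invalid status is a
// compile error, and there is no point testing the compiler: it isn't the SUT.
enum OrderStatus { NEW, PAID, SHIPPED }

class Order {
    private OrderStatus status = OrderStatus.NEW;

    void pay() { status = OrderStatus.PAID; }

    OrderStatus status() { return status; }

    public static void main(String[] args) {
        Order o = new Order();
        o.pay();
        System.out.println(o.status()); // prints PAID
        // o.status = "PIAD";  // the old bug: this no longer compiles
    }
}
```

The tests that used to hunt for misspelled statuses can be deleted; what remains testable is the behaviour, such as the transition from NEW to PAID.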
Let’s look at a counter-example, where people think that they have done something OO and point to it as an example of how OO isn’t testable and how they have to “compromise” their OO design in order to make things testable. The kind of thing they do (apart from just not testing) is to start making the internals of the class public or, even worse, start using things like .Net private accessors or even reflection to get at the class internals. In my opinion, that isn’t sensible.
The classic example here is an inheritance hierarchy where the sub-classes have to override an abstract/pure virtual/MustOverride method in the base class that does the real work. The method that does the real work is, of course, made protected so it can’t be called in the wrong way. How do we test this without breaking into the class? Well, there are a couple of things we can do. The first is: don’t use inheritance unless you have to. It is one of the tightest ways to bind classes together, and the problem of testability is entirely avoidable. For instance, you can inject a function pointer to change the behaviour of your class, or extract the behaviour into an interface if there are multiple methods and inject an instance of that. The worker method is then broken out into its own class and becomes testable. OK, say you can’t refactor this way; what else can you do? You could use subclassing again: use a subclass to spy on the context that the work method is called from, so you can test the base-class behaviour. You can use a sub-sub-class to try to play the same trick on the worker class. You might be stuck, though, and have to put up with obscure tests that run on big blobs of functionality. Oh well, tough luck. Like I said: No One Said It Would Be Easy, and you’ll do it right next time.
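Here is a hedged sketch of the extract-and-inject approach (the `WorkPolicy` and `Engine` names are invented): the worker behaviour moves into an interface that is injected, so the former base-class logic can be tested with a trivial stand-in and the real worker gets its own direct tests:

```java
// Instead of a protected abstract "worker" method that subclasses must
// override (and tests can't reach), the behaviour is pulled into its own
// interface and injected. Both halves are now independently testable.
interface WorkPolicy {
    int doWork(int input);
}

class Engine {
    private final WorkPolicy policy;

    Engine(WorkPolicy policy) { this.policy = policy; }

    int run(int input) {
        // the logic that used to live in the base class, wrapping the
        // protected override
        if (input < 0) return 0;
        return policy.doWork(input) + 1;
    }
}

class EngineTests {
    public static void main(String[] args) {
        // Test the wrapping logic with a trivial stand-in policy...
        Engine engine = new Engine(x -> x * 2);
        if (engine.run(5) != 11) throw new AssertionError();
        if (engine.run(-1) != 0) throw new AssertionError();
        // ...while any real WorkPolicy gets its own direct tests,
        // with no subclassing tricks and nothing made public.
        System.out.println("engine tests passed");
    }
}
```

Nothing protected had to be exposed; the seam between the context and the work is now an ordinary constructor argument.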
The funny thing about writing unit tests is that you will start writing better units. Your code will become more inherently OO. That doesn’t mean that it will be better in itself but, after all, people invented OO because other approaches were insufficient. It has become the de-facto programming paradigm for software that has to cope with change and develop rapidly; software that has to model things in the real world; and, of course, software development where cost is a factor. In other cases, like brutal high-performance multi-threaded applications, embedded OSs, or deep-space satellites where there will be no updates and there simply cannot be any errors, OO is less relevant and unit testing there would have to take a much different form.