Test Setup as a Sanity Check on your Design
Over the years there has been much discussion on the nuances and value, or lack thereof, of Test Driven Development (TDD). As part of those discussions there have been many blogs and books which talk about listening to your tests. This is one of the key elements of the discipline that is not immediately obvious when you first start out. In the following blog post I endeavour to explain this point in more detail.
I am a signed-up believer in building a code base from the Outside-In. In its simplest form this means that you should start at the boundary of your system and work in until all actions have been fully realised. For instance, in a web app, when a user signs up for an account the system may need to create an account, send an email, and call a service. If I am building this app I will start with the Account Signup UI and allow the tests to find the collaborators and contracts which need to be fulfilled for the system to function.
The Sanity Check
This morning I was working on a piece of code which would give users Rewards based on their activity. I wasn’t sure how it would work so I sat down with a bit of paper and figured out the pseudocode for my RewardGiver:
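The original post's pseudocode block is missing here, so what follows is my reconstruction of the first design from the description in the surrounding text. All names (`give_rewards`, `user_already_has_reward`, `is_reward_earned`) are my assumptions, not the author's:

```python
# Hypothetical sketch of the first RewardGiver design, reconstructed
# from the post's description; every name here is an assumption.

class RewardGiver:
    def __init__(self, calculators):
        # One RewardCalculator per reward type.
        self.calculators = calculators

    def give_rewards(self, user):
        for calculator in self.calculators:
            # First if: skip rewards the user already holds.
            if not calculator.user_already_has_reward(user):
                # Second if: has the user's activity earned the reward?
                if calculator.is_reward_earned(user):
                    user.give(calculator.reward)
```

Note the giver asks each calculator two separate questions, one per `if` in the pseudocode.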
I decided that for each reward there should be a guy that implemented the RewardCalculator interface, and it seemed like a good idea for that guy to answer both of the if statements in the pseudocode; all of this seemed straightforward enough.
Now to get down to some test writing. I had to think for a minute, as the first test wasn’t immediately obvious with my nested if condition, but I decided to implement shouldNotGiveRewardWhenUserAlreadyHasReward.
I started to build the setup for this first test.
The test setup started to feel overly complex for such a simple scenario. In addition, I wondered what I should verify to confirm the scenario was working, and I realised that each additional test would require more and more setup on the various players in the test.
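The post does not show the setup that felt cumbersome, but as a hedged illustration (the two-method interface and all names are my assumptions, and I am using Python's unittest.mock in place of whatever tooling the author used), the first test against that design might have looked something like this:

```python
from unittest.mock import Mock

# Minimal copy of the assumed two-method design so this snippet
# runs standalone; all names are hypothetical.
class RewardGiver:
    def __init__(self, calculators):
        self.calculators = calculators

    def give_rewards(self, user):
        for c in self.calculators:
            if not c.user_already_has_reward(user):
                if c.is_reward_earned(user):
                    user.give(c.reward)

def test_should_not_give_reward_when_user_already_has_reward():
    user = Mock()
    calculator = Mock()
    # Two stubs per calculator, plus a verification on the user:
    # the setup is already heavier than the behaviour under test.
    calculator.user_already_has_reward.return_value = True
    # Stubbed defensively even though this branch should never run.
    calculator.is_reward_earned.return_value = False

    RewardGiver([calculator]).give_rewards(user)

    user.give.assert_not_called()
```

Every new scenario multiplies these stubs across calculators and users, which is exactly the creeping setup cost described above.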
Watch out for that cumbersome feeling
When things start to feel hard to test, or your test setup becomes cumbersome, that is a tell-tale sign that there is something wrong with the design. Quite often you will find that the class you are trying to test is doing too much, or taking on a responsibility that lies elsewhere.
In the case of our RewardGiver, the solution was to combine the two conditional checks and move them into the RewardCalculator, as described by the following pseudocode.
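The refactored pseudocode block is also missing from the post. Under the same assumed names, my sketch of the combined check might look like this (`is_reward_due` is my invented name for the merged method):

```python
# Hypothetical sketch of the refactored design: the two conditions
# now live behind one question that each RewardCalculator answers
# for its own reward. All names are assumptions.

class RewardGiver:
    def __init__(self, calculators):
        self.calculators = calculators

    def give_rewards(self, user):
        for calculator in self.calculators:
            # One combined question: "is this reward due to this user?"
            if calculator.is_reward_due(user):
                user.give(calculator.reward)
```

The giver is now a plain loop with a single collaborator call per reward.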
After that change the tests practically wrote themselves, as the function of rewarding a user was now very straightforward.
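To illustrate the difference, here is a hedged sketch of what those tests might look like against the refactored design (again, `is_reward_due` and the rest of the names are my assumptions, with Python's unittest.mock standing in for the author's tooling):

```python
from unittest.mock import Mock

# Minimal copy of the refactored giver so the snippet runs standalone.
class RewardGiver:
    def __init__(self, calculators):
        self.calculators = calculators

    def give_rewards(self, user):
        for calculator in self.calculators:
            if calculator.is_reward_due(user):
                user.give(calculator.reward)

def test_gives_reward_when_calculator_says_it_is_due():
    user, calculator = Mock(), Mock()
    calculator.is_reward_due.return_value = True

    RewardGiver([calculator]).give_rewards(user)

    user.give.assert_called_once_with(calculator.reward)

def test_gives_nothing_when_reward_not_due():
    user, calculator = Mock(), Mock()
    calculator.is_reward_due.return_value = False

    RewardGiver([calculator]).give_rewards(user)

    user.give.assert_not_called()
```

One stub per calculator and one verification per test: the setup now mirrors the behaviour exactly, which is what "the tests wrote themselves" feels like in practice.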
Avoid the pitfalls of Inside-Out
If I had taken the original pseudocode and started on the Inside and worked out, I would have implemented all of my reward calculators against this slightly misguided interface design. At that point it would have been a compromise one way or the other: either accept that I did not particularly like the design of RewardGiver and carry on, or rework all my calculators for the new interface. In either case the result would have been less than satisfactory. This is why I subscribe to building things from the Outside-In.
Avoid the muddle
The algorithm I have described in this sample is unlikely to win any prizes for complexity, but I hope it has demonstrated how easily test code can become complex, which in turn makes it harder to maintain and extend your application.
Do yourself a favour - Keep it simple and work from the Outside-In.