Garry Shutler

Test-driven development - 3 years on

January 25, 2010 · 4 min read

I’ve been questioning my own practices recently, to see whether there were places I could improve, in both quality and productivity. Part of this process involved an evaluation of my test-driven development (TDD) approach to building software.

I have gained and learnt so much by practicing TDD. I believe there is no better way to instil the SOLID principles in both yourself and your code than to practice TDD. Violate any of the principles and you feel the pain in your tests: they will either become monolithic and hard to write or break continuously.
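To make that concrete, here’s a minimal, hypothetical sketch in Python (the ReportJob class, its collaborators, and the email address are invented for illustration, not taken from any real project). Because the class mixes data access, formatting, and delivery, its one test has to stub everything and assert on everything at once; that pain points straight at the single responsibility principle being violated.

```python
import unittest
from unittest.mock import MagicMock


class ReportJob:
    """Mixes data access, formatting, and delivery in one class (an SRP violation)."""

    def __init__(self, database, mailer):
        self.database = database
        self.mailer = mailer

    def run(self):
        rows = self.database.fetch_sales()
        body = "\n".join(f"{row['product']}: {row['total']}" for row in rows)
        self.mailer.send("sales@example.com", body)


class ReportJobTest(unittest.TestCase):
    def test_run_emails_formatted_report(self):
        # The monolithic test: stub the database, stub the mailer, and assert
        # on formatting and delivery in the same breath.
        database = MagicMock()
        database.fetch_sales.return_value = [{"product": "Widget", "total": 3}]
        mailer = MagicMock()

        ReportJob(database, mailer).run()

        mailer.send.assert_called_once_with("sales@example.com", "Widget: 3")


if __name__ == "__main__":
    unittest.main()
```

Split the formatting out into its own collaborator and that part could be tested with no mocks at all.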

Herein lies the chicken-and-egg situation: to write good, robust tests you have to understand SOLID; to understand SOLID you have to be testing your code. This creates quite a hurdle, and many people don’t put in the effort to overcome it. That is a shame, but I think it’s an excellent way of separating the wheat from the chaff. To become proficient at testing you need either a natural talent for writing good code or the persistence to break through to the required level of understanding. Both of these qualities are equally valuable; having both would be fantastic.

The quality of code produced by TDD has never been in doubt in my mind. What I am questioning is the return on investment (ROI) of each test I write. Sometimes I feel I am writing a test for the sake of writing a test, producing very little value in the process. The scenario where this is most apparent is code where, had you not written it properly, either nothing would happen or it would blow up in your face: code with truly binary levels of success, often with a single path through it.
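The sort of test I have in mind looks something like this hypothetical Python snippet (invented purely for illustration): a single path, a binary outcome, and a test that does little more than restate the implementation.

```python
import unittest


def full_name(first_name, last_name):
    # One path through the code: it either works or it obviously doesn't.
    return f"{first_name} {last_name}"


class FullNameTest(unittest.TestCase):
    def test_joins_first_and_last_name(self):
        self.assertEqual("Garry Shutler", full_name("Garry", "Shutler"))


if __name__ == "__main__":
    unittest.main()
```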

This is the elephant in the room of all TDD discussions: when am I testing too much? The standard response of “NEVER!” is a lie, but you have to have practiced TDD for a decent while before you can judge when you’re going to be writing tests with little worth. I’ve identified a subset of tests that I’m probably wasting my time writing, and they are clouding the overall message of my test suite. The problem is that I am responsible for mentoring several developers in how and when to craft tests. I have to lead by example, and until the whole team reaches a higher level of understanding of TDD I have to test everything, despite knowing some of the tests I write have little to no value.

How can I test everything without writing these low-value tests? Currently I’m looking at integration acceptance tests that describe the behaviour of the system. These will verify that the several layers of the application interact as expected in given scenarios to produce the desired behaviour. They will likely have meaty setups and meaty verifications, but they will remove the need for multiple low-ROI tests. They are likely to be more brittle than unit tests, but as long as I verify outcomes rather than interactions they should be fairly robust. What I’d love to experiment with is getting the stakeholders to help me write these acceptance tests, but that will have to wait until I’ve settled on a style for writing them. Nothing harms adoption of a practice more than the first interaction being a bumbling mess because you’re not sure what you’re doing!
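The shape I have in mind is something like the following hypothetical Python sketch (the order-placing scenario and class names are invented for illustration): wire real layers together in the setup, exercise one scenario end to end, and verify the resulting state rather than any particular interactions.

```python
import unittest


class InMemoryOrderRepository:
    """A simple in-memory stand-in for the persistence layer."""

    def __init__(self):
        self.orders = []

    def add(self, order):
        self.orders.append(order)


class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, customer, items):
        total = sum(price for _, price in items)
        order = {"customer": customer, "items": items, "total": total}
        self.repository.add(order)
        return order


class PlacingAnOrderTest(unittest.TestCase):
    def test_placed_order_is_stored_with_its_total(self):
        # Setup: wire the layers together with no mocks.
        repository = InMemoryOrderRepository()
        service = OrderService(repository)

        # Exercise one scenario end to end.
        service.place_order("Garry", [("Widget", 10), ("Gadget", 5)])

        # Verify the outcome (state), not the interactions along the way.
        self.assertEqual(1, len(repository.orders))
        self.assertEqual(15, repository.orders[0]["total"])


if __name__ == "__main__":
    unittest.main()
```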

As a company, we are also looking to hire testers this year. This is another thing stopping me from cutting out the low-ROI tests. If I do not test my code, it will not get tested by anything but a manual process. Once we have testers, I may be able to write just the tests that drive the behaviour and design of my code, leaving the full suite of tests to be developed by the testers.

So what have I learnt over the past few years? Is test-driven development worth the effort? Most definitely. The quality of my code has come on in leaps and bounds, and each major refactor of a well-tested code base is a revelation. I find I write less code to do more, and writing less code is always a good thing.

If I’m honest, it’s harder to be sure I spend less time fixing bugs, but I’m confident that’s the case. If you’re just starting out with TDD, or unit testing in any form, start tracking the hours spent developing versus bug fixing. It would be interesting to see some empirical values on the subject.

Here’s to future years of TDD. It’s going to be interesting to see where my practices go.


