Tuesday 10 November 2009

Rare insight

Sometimes an external consultant really can bring useful insight to a project. In my experience, this is normally when the consultant is an independent, rather than attached to a consultancy agency with something to sell, be it themselves, a product, or their particular flavour of dogma.

We've recently had the pleasure of some consultancy from a very respected industry figure with many years of experience. He's not a stick-in-the-mud, and is au fait with Agile practices and all the latest buzzwords. Rather reassuringly, however, he doesn't tend to use the newspeak himself: in his parlance, a bug is still a bug, not a 'defect'!

He gave us a presentation today to share some of his observations of our general code-base and project, and I couldn't help but grin all the way through.

Firstly he criticised, in his gentle and constructive way, our architecture, with its MVC and 'Enterprise Architecture' pretensions, where we've tried to force the angle-bracket-shaped peg of XML processing into the one-shape-fits-none hole of Java. He made me wish once again that I'd really put my neck on the line 18 months ago and overruled my colleagues on their approach to the page-generation architecture.

Then he turned to our code. Apparently we've got text-book unit-test coverage of a kind he's never seen before! Hooray for TDD! But wait: unfortunately, it seems most of the tests are pointless. The tests are so tied to the implementation, rather than the intention, of the code that changing the implementation, even in a compatible or beneficial way, inevitably breaks them, which means there are effectively no regression tests. He pointed his finger at over-use of mocking.

In general in our code-base, all the interactions of the unit under test with other classes are actually with mocks, not real objects. Text-book though that is, it means a test can pass even though the unit calls its collaborators with the wrong parameters, just so long as the test expects those same incorrect parameters and returns the 'right' answer! Thus nothing ever breaks in our 'ten-minute build', since it's basically a fait accompli, and everything is only really tested in the expensive integration or acceptance tests, which means a long and lonely wait at the bus-stop of the build.
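To illustrate what he meant, here's a minimal sketch of the kind of tautological test he was describing, in Mockito-style Java. The class names, and the bug, are invented for the example: the unit under test passes the currency codes to its collaborator the wrong way round, but because the mock expectation mirrors the buggy call exactly, the test still goes green.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    // Hypothetical collaborator: converts an amount between currencies.
    interface RateService {
        double convert(double amount, String from, String to);
    }

    // Hypothetical unit under test, containing a real bug.
    class PriceCalculator {
        private final RateService rates;
        PriceCalculator(RateService rates) { this.rates = rates; }

        double priceIn(String currency, double gbpAmount) {
            // Bug: the 'from' and 'to' currency codes are reversed.
            return rates.convert(gbpAmount, currency, "GBP");
        }
    }

    public class PriceCalculatorTest {
        @Test
        public void convertsPriceToEuros() {
            RateService rates = mock(RateService.class);
            // The expectation encodes the same reversed arguments as the bug,
            // so the mock happily returns the 'right' answer and the test
            // passes, although the real RateService would be asked the
            // wrong question entirely.
            when(rates.convert(100.0, "EUR", "GBP")).thenReturn(112.0);

            assertEquals(112.0, new PriceCalculator(rates).priceIn("EUR", 100.0), 0.001);
        }
    }

Swap the mock for a real RateService and the bug surfaces immediately; with the mock in place, it only surfaces in the slow integration tests.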

This somewhat riled the TDD fans in the audience, whose chief defence seems to be 'Well, when I TDD, I write better tests than that.' Unfortunately, they have not written most of the code; comparatively junior (but well-paid) Java developers have, and whether they claim TDD experience or not, they inevitably go down the mocking rabbit-hole and write this kind of pointless nonsense. I've also seen people stuck for days on 'how are we going to write a test for this code (that we've not written yet)?', and tried to dissuade them from worrying about it too much, suggesting they try writing some code first, at which point the dogma-clones get all uppity. Equally, I've seen apparently experienced people spend hours effectively testing a framework we use (e.g. Spring), rather than the code that calls it, because the test for our own code seemed so insubstantial.

Don't get me wrong, having tests is good! Essential! No-one in their right mind is against testing! But only if the tests are actually useful tests, that is, tests that help you understand how the code is meant to be used and prove the axioms that must not be broken by re-factoring. However, TDD, which 'if done properly' encourages you to write your tests first, and thus design the code in the process, is demonstrably very tricky to master, and blind adherence to the primary rule, without application of common sense or foresight, can be a colossal waste of time and effort.
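By contrast, a useful test in the sense above states a fact about the contract, not a transcript of internal calls. Another invented sketch (again, the names are hypothetical): re-implement totalInPence() any compatible way you like and this test stays green, which is exactly what makes it a regression test.

    import static org.junit.Assert.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    // Hypothetical classes for illustration.
    class Item {
        final String name;
        final int pence;
        Item(String name, int pence) { this.name = name; this.pence = pence; }
    }

    class ShoppingBasket {
        private final List<Item> items = new ArrayList<Item>();
        void add(Item item) { items.add(item); }

        int totalInPence() {
            int total = 0;
            for (Item item : items) total += item.pence;
            return total;
        }
    }

    public class ShoppingBasketTest {
        // Asserts the axiom 'the total is the sum of the item prices',
        // using real objects, with no expectations about which internal
        // methods get called or in what order.
        @Test
        public void totalIsSumOfItemPrices() {
            ShoppingBasket basket = new ShoppingBasket();
            basket.add(new Item("tea", 250));   // prices in pence
            basket.add(new Item("milk", 120));
            assertEquals(370, basket.totalInPence());
        }
    }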

TDD is another of those TLAs: if you find yourself or your colleagues brandishing it about too much, you need to sit down and think about what the whole point of it is, and what you actually want people to do.
