Friday, 3 June 2011

Steve Freeman - Smells

Went to Skills Matter last night for another of their excellent series of free evening talks. This one was Steve Freeman talking about Fractal TDD: Using tests to drive system design.

His premise was that the standard TDD cycle of Write Failing Test -> Make It Pass -> Refactor could be applied at a higher level, and he set out the benefits of doing so.

He started off by showing a unit test from the wild - pages long, full of references to other classes and mocks, even including a mock God object. This was TDD getting out of hand, and someone should have noticed and said "this doesn't feel or look right".
Which led on to the meat of his talk - Test Smells.

If code follows good design principles then it should be easy to test; if there are any test smells then it's likely there are design problems. He suggested adding an extra stage to the TDD loop: "is it hard to write a test?"

He then went through some of the common Test Smells:

  • Test Duplicates Code - the code of the test seems identical to the code that it is testing
  • Too Many Assertions
  • Faking The Wrong Objects - for example, don't mock a third-party API; cover it with integration tests so your TDD tests can concentrate on the design
  • Test Setup Requires Magic - the usual example of this is a clock (there's a rough sketch of the usual fix just after this list)
  • Not Testing Logging - if logging is important enough to be part of the production code then it's worth testing (see the second sketch below)
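
To make the clock smell concrete, here's a minimal sketch of my own (not code from the talk) of the usual fix: pass the time source in as a collaborator so the test needs no magic setup. The Clock, Trial and FixedClock names are invented for illustration, and the test uses a bare Java assert to stay dependency-free.

    interface Clock {
        long now();   // milliseconds since the epoch
    }

    class Trial {
        private final Clock clock;
        private final long expiresAt;

        Trial(Clock clock, long expiresAt) {   // the time source is injected, not hidden
            this.clock = clock;
            this.expiresAt = expiresAt;
        }

        boolean hasExpired() {
            return clock.now() > expiresAt;
        }
    }

    class FixedClock implements Clock {        // trivial fake for tests
        private final long time;
        FixedClock(long time) { this.time = time; }
        public long now() { return time; }
    }

    class TrialTest {
        void expiredTrialIsReportedAsExpired() {
            Trial trial = new Trial(new FixedClock(2000L), 1000L);
            assert trial.hasExpired();         // deterministic - no sleeping, no magic setup
        }
    }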
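
And for the logging smell, a second sketch (again my own, with a hypothetical AuctionMonitor interface): promote the logging that matters to a first-class collaborator that tests can observe, rather than trying to assert on log output.

    interface AuctionMonitor {                 // a notification the domain cares about
        void bidRejected(String auctionId, String reason);
    }

    class Auction {
        private final AuctionMonitor monitor;
        Auction(AuctionMonitor monitor) { this.monitor = monitor; }

        void bid(String auctionId, int amount) {
            if (amount <= 0) {
                monitor.bidRejected(auctionId, "amount must be positive");
                return;
            }
            // ... record the bid
        }
    }

    // In production AuctionMonitor writes to the log; in a unit test it can be a
    // mock or a simple recording fake, so the "logging" behaviour is tested like
    // any other collaboration.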


Steve then said that to test a system we need to:

  • Know what the system is doing
  • Know when it has stopped doing it
  • Know when it has gone wrong
  • Know the details of why it has gone wrong


He then made the connection that the things we need to test the system are also the things that make the system easier to support. So if you do TDD then not only do you get code that's easy to modify, but you also get a system that's easy to support.

He made use of the Ports and Adapters concept from Alistair Cockburn, which I hadn't come across before but am now busy reading up on. Steve showed how good design means tests don't have to be confined to the system's edges at these ports but can go in deeper and test the model (my rough sketch of the idea is below).
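
For my own benefit while reading up on it, here's a back-of-envelope sketch of the idea - not Steve's example, and the names (PaymentPort, BillingModel) are invented. The domain model talks to the outside world only through a port it owns; adapters plug the real technology into that port, and tests can drive the model directly through the same port.

    interface PaymentPort {                        // the port, owned by the domain
        void charge(String accountId, int pence);
    }

    class BillingModel {                           // the domain model under test
        private final PaymentPort payments;
        BillingModel(PaymentPort payments) { this.payments = payments; }

        void settle(String accountId, int pence) {
            if (pence > 0) {
                payments.charge(accountId, pence); // the model never sees the real gateway
            }
        }
    }

    // The adapter for the real payment provider sits at the edge and is covered by
    // integration tests; unit tests exercise BillingModel through the port with a
    // simple fake, so they reach into the model rather than stopping at the boundary.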

All in all an interesting talk, some new concepts for me to think over, and I really, really must finish his book.

One thing I did think of though (and only after the event) was his examples of code that was hard to test.
If TDD were being done right then wouldn't the tests be written first, rather than writing the code and then finding it was hard to write tests for?

3 comments:

Alan Richardson said...

Thanks for the write up. I benefited from reading Mr Freeman's book. It, along with Kent Beck's "Implementation Patterns" and Bloch's "Effective Java", influenced my Java programming most recently. All books on my re-read list.

Phil said...

Thanks for the comment, Alan - and for the recommendation of more books. Amazon here I come...

Matthew said...

"One thing I did think of though ( and only after the event ) was his examples of code that was hard to test."

Right. That seemed strange to me too. My guess is he is talking about adding tests on top of either legacy systems or systems with boundaries, e.g. /you/ are testing a component that needs to interact with a component written by someone else 2 iterations ago ... and he 'no longer works here' ...

Sadly, I can't get over the pond that often for these things, so thank you for the write up!