This time was no exception - the book I was reading was What The Dog Saw and from the chapter Blowup I learned about risk homeostasis.
The theory behind this is that under certain circumstances, changes that appear to make a system or an organisation safer in fact don't. The reason is that humans have a tendency to compensate for lower risk in one area by taking greater risks in another.
Gladwell gives some examples of this:
- Part of a fleet of taxicabs in Munich was equipped with ABS. Did the group with ABS have fewer accidents? No, they had more! They drove faster, made sharper turns, and braked harder.
- More pedestrians are killed at marked crosswalks than at unmarked ones, because pedestrians compensate for the apparent safety of a marked crosswalk by being less vigilant about traffic.
Risk homeostasis also works the opposite way.
In Sweden, drivers changed from the left-hand side of the road to the right. This did not increase the accident rate; in fact, traffic fatalities dropped by 17% as people compensated for the unfamiliarity of the new driving pattern by driving more carefully.
All of which made me wonder whether this could apply to testing and software development.
- Do devs take more risks if they know there's a testing stage at the end to catch their mistakes?
- If TDD is introduced to an organisation, does it cause an increase in defects because devs feel safer, in the same way the cab drivers did with their ABS?
- Does switching to a new programming language decrease defects because devs are more careful with code in a language that's new to them?
I was going to look into this further and get the book Target Risk that Gladwell references in his article - but not at a price of £163.95!