Sunday 27 February 2011

Guided by tests - 5 years 0 failures

Went to Skills Matter last week for a talk by Steve Freeman and Andrew Jackman - Five Years of Change, No Outages.

A very interesting experience report on implementing a system for an investment bank.

It was the bank's 3rd or 4th attempt at getting a working system - Steve and the team delivered it in 9 months ( Project Manager, 2-3 BAs, 4-5 devs, 1 DBA ) and it has been running ( with changes ) for the last 5 years, with a third-generation team now working on it.

It was implemented by doing XP by the book. Within 2 weeks they had a Walking Skeleton and continued to deliver every 2 weeks.

The team did not sit around debating quality - they just Got On With It.

The PM gave the team great support - no doubt due to the 3 or 4 previous failures. The team used pair programming ( the interview process for the team included a pairing exercise ) and the PM could see the sense of this from past experience: a contractor had been moved on, his code turned out to be unusable, and the rewrite cost the project a few months' delay.
The team were co-located - again, there were no arguments from the PM, who backed their approach.

FIT tests were used - the devs had experience of using them and knew where the problems were. Initially the BAs were not used to going to the level of detail required for these tests, but they soon became converts. They even used the FIT tests to explore the system and would bring people down from other teams to show off their tests.

No code was written until they had FIT tests with examples. This took time, as the requirements were not clear, which had the added benefit of shaking out the real requirements.

The FIT tests were also used to explain the system to third parties that were using it.

Steve and Andrew showed actual examples of these tests, from both 2005 and 2010, and you could see how sophisticated the tests had become.

Steve emphasised that the primary purpose of the tests was communication.
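For anyone who hasn't come across FIT, a test is essentially a table of example inputs and expected outputs, wired up to the system under test by a small fixture class. As a rough illustration ( my own sketch, not one of the tests from the talk - the domain, names and numbers are all invented ), a Java column fixture might look like this:

import fit.ColumnFixture;

// Each row of the FIT table fills in the public fields, then the framework
// calls fee() and compares the result against the expected value in that row.
public class TradeFeeFixture extends ColumnFixture {
    public double notional;   // input column "notional"
    public String currency;   // input column "currency"

    // Output column "fee()" - a real fixture would delegate to the
    // production fee calculator rather than computing a value inline.
    public double fee() {
        double rate = "GBP".equals(currency) ? 0.001 : 0.002;
        return notional * rate;
    }
}

The BAs and devs then only have to agree on the table of examples, which is exactly why these tests work so well for communication.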

Retrospectives were still being done every 2 weeks. There was 'relentless progress', as there was no stopping to deal with crashes or firefighting, and the regular releases were done on a Friday before the team went off to lunch ( a sign of the confidence in the system ).

The devs were on front-line support 24x7, which was a great incentive to make sure the system was rock solid.

The only drawback seemed to be that once the system was up and running, people assumed it must have been an easy problem to solve and forgot the previous failed attempts.

An excellent experience report, and afterwards Steve stayed around to answer questions and sign my copy of his book, Growing Object-Oriented Software, Guided by Tests. It was good to find I wasn't the only person there with the book - there was someone else behind me with a copy too.

Steve did remark that my copy looked new, so maybe it's time I read it again and turned down the corners of a few pages...

Monday 21 February 2011

WTA #7 - Down to the river


Took part in WTA #7 ( Weekend Testing Americas chapter ), hosted by Michael Larsen and Albert Gareev.

Michael has already done a good write-up on his TESTHEAD blog, so I'll just give some of what I took away from the session.

( if you want to try out the app then try here - or the puzzle can be found here )

The basic mission was to see if the program would help find the smartest candidates.

Almost immediately it did seem to find a not-so-smart candidate...

"Clicking the picture only does copying the file to Excel nothing else and I have 20 odds in my downloaded folder"

There was a debate on 'smart' - IQ smart, street smart ?
I went off to Google and found the solution to the puzzle - does that count as smartness ? The Google factor cannot be discounted - even if a candidate taking this test had no access to Google, it's also possible that they would have googled 'interview questions for Company X' beforehand and known that they would be getting a puzzle. Again, does this make them street smart ?

One of the great things about the Weekend Testing sessions is that when a bunch of testers get together, things can go off on all sorts of tangents that one person alone might not have thought of.

If illegal combinations were put on the raft then a bubble would show it was wrong - an example is shown at the top of this post.
Now the Japanese, with their anime culture, might see nothing wrong with this - but as Justin Byers pointed out:

"A father punching his daughter... even though it is cartoon violence, is this the kind of program you want to associate with your business? "

The app itself wasn't really tested but we did find holes in the puzzle.
There did not seem to be a rule about the prisoner being left alone - in fact, to solve the puzzle he was left on his own, so why wouldn't he just run away ?

If we were being brought in to test to see if the app was a good test of smartness then shouldn't we first be tested to see if we were smart ?

We found a way to get in a dig about certifications.

"Also, the customer can simply see if there's a "Family Raft Crossing Certification" on the CV/Resume. No need to use the app!"

The basic premise behind the mission was challenged, rather than just finding bugs in the app. This was a good thing, and it was summed up in this great phrase ( which I think I'll be re-using ):

"Like finding that the logo on one side of the sinking Titanic is the wrong color."


The session itself did seem to indicate that giving this entire mission to a tester ( rather than just the app ) might make a good audition for someone hiring a tester.

"For me, if I were the hiring manager, I think I would be more impressed if they were to have asked some of these questions during their session."


Great session, cannot recommend these sessions highly enough.

Thanks Albert and Michael !!

Sunday 20 February 2011

TWIST and Shout


Decided to make more of my commuting time on the train and download some testing podcasts. The TESTHEAD blog is always a good source of them, so off I went.

The podcasts are actually hosted on the STP site. Having downloaded a few of the recent ones, I noticed there were others featuring the likes of Cem Kaner and Mark Crowther.

However, clicking the 'download podcast' link gave me the page shown at the top of this post.

Your Membership Level: Basic
Membership Level Required: Basic

Seems to match - where's my download ??

The small print on TESTHEAD's blog made it clear:

Each TWiST podcast is free for 30 days, but you have to be a basic member to access it. After 30 days, you have to have a Pro Membership to access it.

So there's the problem: the error message doesn't tell me that the required membership level is Pro.

Well, that was one problem...

On the list of podcasts, if you choose one of the older links - eg TWiST #16 with Catherine Powell or TWiST #15 with Michael Larsen - you go straight to an error page that does correctly say the membership level required is Pro.
However, that correctness is spoiled by another error message.

You must be logged in. Please log in or register for an complimentary or annual subsciption package.

The top right of the page shows my name and profile details, so it knows I'm logged in.
Whoops.

That's one of the risks of running a website for testers - they're going to be all over any mistake they spot...

Sunday 13 February 2011

WT53 - What am I bid for this tester ?

Saturday morning and I fired up my laptop whilst watching Soccer AM and noticed a tweet from @weekendtesting that WT53 was about to start with @jbtestpilot aka Jon Bach.

Couldn't turn down the chance for some testing with the QA director of Ebay so I joined the session and had a blast.

Jon had set the session up as 4 mini-missions, all of which involved using the Ebay site.

The first mission I attempted was to find the search term that gave the greatest number of results. My first thought was to try wildcards but no, the site insisted on a couple of characters before any wildcard. So which letters would start, or be used in, the most words ? Maybe 'mo' to match 'mobiles', 'motors', 'mothers'...

I started to learn how the site worked: typing one letter made the dropdown start to offer suggestions, so I tried different letters and tried to work out whether the categories offered might lead to lots of hits.

'ca*' gave me 18 million results, which for a while was one of the top scores; 'co*' gave 21 million. With more time, maybe a quick automated script to loop round the valid 2-character combinations...
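If I had written that script, it might have looked something like this ( purely a sketch - the looping is real, but actually fetching each results page and scraping the hit count is left as a placeholder, since I'd be guessing at the site's URLs and markup ):

import java.util.ArrayList;
import java.util.List;

// Sketch: generate every two-letter prefix followed by a wildcard,
// e.g. "aa*", "ab*" ... "zz*".
public class WildcardTerms {
    public static void main(String[] args) {
        List<String> terms = new ArrayList<>();
        for (char a = 'a'; a <= 'z'; a++) {
            for (char b = 'a'; b <= 'z'; b++) {
                terms.add("" + a + b + "*");
            }
        }
        // For each term you would then request the search page, parse the
        // "N results" figure and keep track of the highest count seen so far.
        System.out.println(terms.size() + " candidate terms, starting with " + terms.get(0));
    }
}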

The second mission was to find the most expensive item. It could have been very simple if the search-by-price option didn't insist on having a category. Other people in the session had heard of a yacht being sold by Roman Abramovich for 168 million, but it was no longer listed so it did not count.

How to find the most expensive item currently on there ? I thought of items that are usually expensive - diamonds, yachts, cars. Some of these led to items on sale for 21 million, and also a surprise: I had been thinking of physical items, but there were domain names for sale for 21 million. That was a good testing lesson.

Then I thought about houses and real estate. Sure enough, there were some on there: first a villa in Spain for 28 million and then a 5-star hotel in Sicily for 38 million.

One mission I did not attempt was to find the most bizarre item for sale. I didn't attempt it for 2 reasons:

1) One man's perversion is another man's pleasure, so what I might consider bizarre might seem mainstream to someone else

2) Once I started looking, I knew I'd get sucked in and be there all day

The fourth mission was to do an Ebay whack and find a search term that returned only 1 result. Initially I thought this was easy - find an item and type in its entire description - but Jon told me it had to be 2 words only. It still wasn't too difficult: find a search term that didn't return many results and work from those results.

For example, when searching for hotels during my 'most expensive' mission, results came back for hotel souvenirs. So a search such as "69 sheraton" gave me 1 result back and an Ebay whack.

Afterwards came the debrief. Jon explained the theory behind this - Open Book Testing - and that he was using it to get new testers up to speed quickly. Shrini and I both thought it would also be a useful addition to tester interviews.

As you can tell by reading this post, it really is a useful guide to the thinking process that goes on when someone is trying to run some tests.

More details of the Open Book Testing approach can be found in this paper by Jon here.

It was a fun session - I now seem to be developing a small addiction to surfing Ebay to see what is on there - and it was yet another success for Weekend Testing.

Friday 4 February 2011

The Glaze Heuristic


I wish I could take credit for this one, but I found it when reading this blog post from Dan North ( which is itself worth reading ).

As an aside, this raises some interesting questions. What if you are writing scenarios in a domain that no-one seems to care about? (You can tell by watching their eyes glaze when you talk about it.) A lot of what we traditionally call non-functional requirements can fall into this category. For instance, most non-technical people aren’t interested in networking terms like latency, throughput or packet loss, but they might perk up when you start talk about sluggish response times or requests going missing. You can use the glaze test as a heuristic to know if you are talking to the wrong person – or using the wrong language

Looking forward to using this one at the next meeting...

Thursday 3 February 2011

lpr build this

More reading through Beautiful Teams gave me another good story.

This one came from Mike Cohn, when he was asked what practices a team could adopt to improve the quality of its code.
Mike says that the first thing he would want a team to do is adopt continuous integration.

There's now a plethora of tools out there - Bamboo, CruiseControl, Hudson ( pardon me - Jenkins ), TeamCity, TFS, yadda yadda yadda.

This hasn't always been the case, and Mike relates a tale from 1992 of how they used a Novell print queue to do the builds for them.


We actually wrote a build server that monitored a Novell print queue. Novell print queues were wonderful. They could hold anything. We put compile jobs into the Novell print queue, and we had a build server that would just monitor it and kick off compiles whenever it noticed anything getting inserted in there.

It was a completely cheesy, crappy way to do this network communication, but just on that application it had tremendous benefits
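The same idea translates to any kind of drop point you can monitor. A toy modern equivalent ( my own sketch, with an invented directory and build command, polling a folder rather than a Novell queue ) might look like this:

import java.io.File;
import java.util.HashSet;
import java.util.Set;

// Poll a "queue" directory and kick off a build whenever a new job file
// appears - the directory and the build command are illustrative only.
public class QueueWatcher {
    public static void main(String[] args) throws Exception {
        File queue = new File("/tmp/build-queue");
        Set<String> seen = new HashSet<>();
        while (true) {
            File[] jobs = queue.listFiles();
            if (jobs != null) {
                for (File job : jobs) {
                    if (seen.add(job.getName())) {
                        System.out.println("New job: " + job.getName() + " - starting build");
                        new ProcessBuilder("make", "all").inheritIO().start().waitFor();
                    }
                }
            }
            Thread.sleep(5000);   // check the queue every five seconds
        }
    }
}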



You might argue with Mike about whether CI is the #1 item on the list, but you couldn't argue with that team's commitment to quality !