Sunday 24 January 2010

A Saturday Afternoon At The Opera

Another weekend and another session of the European Chapter of Weekend Testers

Our mission this week - write some scenarios for Soap Opera testing of Bing Maps.

An interesting challenge which shows why the Weekend Testing movement is such a great resource for a tester. I've read about soap opera testing but never had the chance to put it to work and a session like this was a chance to get a taste for it.
But it was only a taste - between the mission being explained, some thoughts about the problem, interruptions from the flashing Skype icon ( must learn to ignore this ) and then reminders from Markus that we had to send our reports in, the time was gone.

I also lost time by deviating from the mission to go off and try out a couple of my scenarios, and lost more when I hit a 'script running slowly' error, which meant reloading the app and attempting to reproduce the problem.

I managed to write 2 scenarios and found that writing good ones requires a lot of thought. The good thing about doing this with other testers is that I can now go and see what they came up with and learn from them.

Time up, and it was on to the second half of the session, which is just as valuable as the first ( maybe more so ) - this is where the session is discussed and the discussions fly off on all sorts of tangents.

One discussion was about the fact that some of us went 'off mission', actually used the app and got distracted by it. Guilty as charged but, in my defence, your honour, I spend all week with people insisting I keep to my mission so their metrics look good, so I relish my freedom at the weekend...

We discussed what to do about this as a test manager. Do you want to know if your testers are going off mission? Do you encourage it? Maybe you should build in 20% time a la Google to let testers follow their instincts, and maybe you should have some metrics to see whether more problems are found when testers go off mission than when they stay on it ( sorry, suffering from metric overload on my current project ).

Another tangent was whether I was a SuperHero or not... Being a good tester, I've got to know the app I'm working on backwards. I know the business side and the techy side, and know almost every word in every requirement doc and tech spec. So when testers new to the project write their tests or think they have found defects, they run them by me first to see if they make sense. Does this make me a Hero ? I think not - but it does mean the project suffers from the Bus Factor, in that there would be a problem if I were to be hit by one.

There was also a discussion on conferences and the shortsightedness of companies unwilling to send their testers to them. A timely topic, as Matt Heusser had just posted a guide to Conferences On The Cheap - or you can organise your own, as Tony Bruce did ( Tony was on the session, and it was good to have him on board ).

So I am totally sold on the Weekend Testing concept.
It's a great way to try out and learn new approaches ( can anyone recommend any books/blogs on soap opera testing ? ) and to have great discussions with fellow testers.

3 comments:

Unknown said...

I really hope that you're not going to be hit by a bus. Lisa Crispin and Janet Gregory mentioned Soap Opera Testing in their book "Agile Testing". Some weeks ago Lisa mentioned on Twitter that the write-up about it is finally online. When preparing the mission for this week, I remembered the article and proposed that we could end up with a possible scenario like "Markus wants to visit all weekend testers in Europe". From there we ended up with the mission about defining narratives for testing. Great that you liked it - I was a bit unsure about the responses to such a mission.

Hope to see you next weekend!

Simon Morley said...

Interesting read!

Interesting question about test management, test missions and mission control...

If the time from mission launch to reporting was one hour then I wouldn't expect much off-pisting!

If that time scale was 2-3 weeks then I'd expect some - depending on the mission, of course. (If the mission was purely confirmatory then I wouldn't expect deviation.)

If the mission was "test this" - with a range of requirements to verify and performance/characteristics to investigate - then I'd expect some feedback loop into "mission control".

But this feedback loop can make some stakeholders uneasy as they see the "test scope" (however the stakeholder "defines/understands" that) as changing. The majority of stakeholders want a report on their baseline, or a baseline on the testing effort - any testers off-pisting here can make such a stakeholder jumpy...

One way to fix this is communication and the way results and findings are presented.

"Yeah, we've executed x% of our designed/specified tests, but have found several problems in feature Y and so we're going to devote some extra time/days there - this will be an initial investigation and by day Z we'll a better idea how much extra effort is needed. We've checked this area with the system designers/requirement owners and it seems worthwhile...."


Oh, we have discussions about the angry bus driver at our place too... I actually think these buses are driven by black swans!

Unknown said...

Phil,
good discussion on Saturday, good fun again and good writeup as usual.

Regarding your comment about a good tester needing to know the application inside out: I only agree somewhat. An excellent tester, just to use another word, should be able to test the application anyway. I do acknowledge that it makes it easier if you have the domain knowledge, as otherwise some test cases won't occur to you.
What happens if you have an application that you can't know inside out because it's just too large? Take SAP, for example. Some of the software that I'm working on is simply too large to be an expert in.
My point is that the "I make a better tester if I know the product" is simply not always possible, so what are the alternatives?

Simon,
I usually give 1-2 hours for a mission, taking a lead from James Bach's session-based testing. I don't have a problem if testers deviate ( too much ) from the mission, but they need to be able to put into words why they did it. Being able to communicate that is a trait of a good tester in my book.
Also, I'm only one instant message away and very much contactable, so if testers find themselves deviating and are unsure, they can contact me. We've done more scripted testing in the last weeks though...