Thursday, 28 March 2013

5 Questions With Rob Lambert

When I was working out who to ask, one person had to be Rob Lambert. I've known him online for a while: first through his Testing Revolution blog, then his Social Tester site, and through his work on the Software Testing Club, before finally getting to meet him at Tester Gatherings.

( He's recruiting - when I was looking for a job I was very tempted but could not turn down Atomic Object and a move to Michigan. If you're reading this and looking for a move - check out his posting )

1) What happened to "The Testing Revolution" ? Was anyone sent to the guillotine or did you decide that being social was nicer than being revolutionary ?

The Testing Revolution was quietly squashed when I became more involved in the day to day running and creating of content at the Software Testing Club. I think with the Testing Club involvement I found what I had been searching for; a group of people doing something differently and challenging the norms. 

Since stepping back from behind-the-scenes involvement with the Software Testing Club I've been starting to get all revolutionary again. Watch this space.

Being social and sociable does seem to lead to interesting outcomes, cool projects and connections with interesting people so I'll stay on this route rather than declaring outright revolution. The Testing Revolution was a great name though!

2) Finding great testers to join you seems to be hard. Are you just really picky or is there a lack of talent out there ? Why do you think this is ?

Great question. It's no secret that we are finding it hard to find outstanding talent and I think there are a number of factors at play.

Firstly, we have the bar set very high. Our recruitment and interview process is pretty tough as we're looking for the best people to join us to support our growth and deliver amazing products. The application process itself puts the emphasis on filtering out those who just want a job from those who want a career and of course, want to work at NewVoiceMedia. 

Secondly, we operate in a way that means the testing we do is mostly exploratory and there really aren't that many testers available who know much about exploratory testing, why it can be very powerful and how to improve their own practice of it. I suspect there are a great many testers who are excellent exploratory testers, they maybe just don't label it, notice it or acknowledge it as a skill.

Thirdly, we are in an area of the United Kingdom where there are five major cities with hundreds of companies all competing for a dwindling number of appropriate candidates. At a recent recruitment fair I got to meet several of these companies and they too are struggling to find candidates.

Fourthly, there appear to be very few testers coming through to the job market with a well-rounded set of skills above and beyond testing (i.e. communication skills, intuition, commercial awareness etc). There are no schools, colleges or universities in the UK that teach testing as a career option. There are very few businesses and companies that promote testing as a career to rival development; hence testing is often treated with less respect, leaving testers in these companies feeling undervalued, disheartened and unwilling to improve their skills and prospects. This has an impact on the industry at a national level; there simply aren't that many people improving their skills.

As such, there seem to be fewer people coming into testing. There are also fewer companies and people creating hotspots of amazing talent that then disperse within the community, spreading the skills and interest in testing to other companies (and hence people).

I don't appear to be alone in my struggle to recruit testers; many people I know in the community are looking for testers but cannot find them. 

However, it's easy for me to sit here typing away about the problems, but what this community really needs is a solution. That's much harder to find, although there are pockets of training, mentoring and career building going on. The challenge is growing this training and mentoring to a larger scale and with more momentum without falling into the trap of standardisation and certification.

I'm working hard in my spare time on this problem and will hopefully have some ideas to bring to the community in the next few months.

3) Do you do any hands-on testing yourself ? If not, do you miss it and what is your time spent doing ?

I don't do a great deal of hands on testing at work anymore. I do indeed miss it and I do occasionally start an exploratory session and get cracking. 

My time at work is mostly spent doing test strategy, product documentation, line management (of testers, developers and scrum masters), agile coaching, research into testing techniques/approaches and team planning (budgets, recruitment etc).

Saying that, in my spare time outside of work I do voluntary exploratory testing for ICT4D (information and communication technologies for development) organizations who often lack the budget for testing, yet are building life changing (and in some cases - life saving) products. This helps to keep my skills sharp and I'm doing some good for the ICT4D community.

{Information and Communication Technologies for Development (ICT4D) refers to the use of Information and Communication Technologies (ICTs) in the fields of socioeconomic development, international development and human rights.} - Wikipedia

4) Your company moved from 8 month releases to weekly ( congrats ! )  - what's the next challenge for you ?

The next challenge for us is to grow the development team whilst maintaining the agility, frequent releases and continued focus on delivering great products to our customers. The company is growing quickly and with that growth will come some challenges around deployment, communication and process. 

5) What books are you reading at the moment and why ?

I'm a voracious reader and typically get through a couple of books a week; sometimes more if I'm researching for a book or article. I also tend to have a number of books on the go at once.

At the moment I'm reading:

The Power of Small - It's a great book about focussing on the small things in life like truly listening to your co-workers, paying attention to small problems and being present in your day to day life. 

An Idiot Abroad by Karl Pilkington - The TV series was hilarious and so is the book. This is my light hearted reading to stop my head exploding.

How to Get A Grip by Matthew Kimberley - This is a hilarious book; it's basically a no-holds-barred self-help book. Genius.

Man Watching by Desmond Morris - This is a classic book on human behavior which I originally studied at University. I'm re-reading it for an upcoming book I'm writing. Lots of good insights into how people communicate verbally and non-verbally.

Team Geek by Brian W. Fitzpatrick and Ben Collins-Sussman - Really good book on being a geek and having to work with other people. I've learned so much from this book so far in terms of both my own behavior and that of others in the team.

Positivity by Barbara Fredrickson - This may sound like a namby-pamby self-help book, but I observed a few years back that I'd become negative about a lot of things; it's a trait I often see in testers. I think we can be skeptical and critical whilst still being positive - hence I'm reading around the subject.

Confidence Plan by Sarah Litvinoff - A great book for those who want to boost their confidence in a number of areas. One area where I could do with a confidence boost is my presentation skills. This book is good at putting a lot of the daily challenges and the big things (like presentations) into perspective.

Scaling Lean and Agile Development by Craig Larman and Bas Vodde  - Just started reading this. This book ties in with what I believe our challenges will be moving forward; scaling a lean development team.

For those who have a keen interest, I'm adding a lot of what I'm reading to my Shelfari Bookshelf.

Wednesday, 27 March 2013

The Stagnant Gorilla

I watched an interesting talk from Alan Page that he did at QASIG about Test Innovation.

In the talk he made the point that a tester isn't going to sit in the bathtub like Archimedes and have a Eureka moment and come up with a brand new test idea ( though having a bath or shower is a good place to think ). Or sit under an apple tree like Newton who suddenly ( or not ) discovered gravity when watching an apple fall.

It happens gradually with exposure to other experiences, ideas and thoughts.

Case in point being his presentation itself.
It happened a few weeks ago in Seattle but I was able to sit on my sofa in Grand Rapids, MI and watch and listen.

I can get up in the morning, surf Hacker News and find a post with the great title Chrome wakes me up in the middle of the night, with monsters - a bug report where, after a Windows update in the middle of the night, Chrome re-opens its tabs and starts replaying a scary movie. This gave me ideas for tests, so I sent out the link as a Tweet and saw it getting re-tweeted. Lots more testers out there get to see this and add it to their lists of test ideas; it ticks away in the back of their brains and possibly sparks a new test idea.

All sounds marvellous, all these testers putting their heads together and moving the test world onwards and upwards...

But then Alan posted about his frustrations with the damned dancing gorilla

I also read a blog post about problems with test automation; it sounded very similar to the Test Automation Snake Oil article from James Bach ( the author agreed and added it to the end of his post ). The Snake Oil article is from 1999 - and based on an article from 1996.

And then, coincidentally, I read a blog post on The Real Future of Software Testing, which lists some of the advances in software testing over the years. It seemed like a pretty sparse list.

So what to think ?
The Internet opens up a world of learning and a chance to exchange ideas and learn from other testers - but if we're just sending each other dancing gorilla videos are we moving on? Do you find the same ideas being rehashed and argued over again and again ? What are you finding that's new out there ?

Monday, 25 March 2013

5 Questions With Iain McCowatt

Iain McCowatt was kind enough to spare the time to answer my questions and explain his views on automation, zombieism and why he gets called Liam...

1) What's your testing background and why is testing your passion?

My background is hard to pin down in that it's been quite varied. Once I even tested in order to reverse engineer a product rather than to assess its quality. That was fun! Now, apart from a couple of years with a software vendor, I’ve tended to test in Enterprise IT.

Most recently I’ve been working with large banks. I’m something of a serial obsessive; I love to get heavily into something, to learn it inside and out…and then move on to something new. Until I fell into testing, I thought that was a personality defect, now I see it as a blessing. Testing, at its heart, is all about learning and discovery. It never fails to provide something new to learn, something new to obsess about. 

2) You did a great series of posts on "idiot scripts" - does this mean you were once a zombie tester? Why is it that people associate testing with scripts and do you think that will ever change?

A zombie? Not likely. In many ways I was fortunate in my development as a tester. For most of my early years I had a mandate to test and free rein over how to do so. It was only in recent years, on moving to the management of outsourced testing services, that I encountered something different: a belief that one can manage testing like a sausage factory, with measurable, repeatable processes and drones turning the handle of the machine. That’s not how I see testing, and you know what? It doesn’t work.

And yes, I see signs of hope. Increasingly I’ve been meeting clients who have simply had enough of the broken promises, who want real testing, who value passion, creativity and intelligence. Change is coming.

3) You've also done a number of interesting posts on automation - do you find that there is also "idiot automation" and that people aren't thinking about their approach to it?

That’s a nice way of putting it. There are three major idiocies that really bug me when it comes to automation. 

First is why we automate. Lots of organizations behave as if test automation is a universal good, an end in itself. So they end up managing automation as if it is somehow separate from testing. I’ve met teams of automation engineers who never talk with the testers they should be serving, testers who are discouraged from contributing to the automation conversation. That’s nonsense: automation serves no purpose whatsoever unless it helps testers to solve testing problems.

Next are the unnecessary constraints that we often put on what we automate and how we do so. There seems to be a tendency for people to over-generalize solutions. They’ll solve a problem in one situation, and then assume that the answer will be useful in another situation: even if the problem is completely different. So we end up with dumb-ass assertions that “all automation must be justified by R.O.I.”, “tests must be fully automated”, or “automated tests must be idempotent”. These all come at a cost: blindness to the real opportunities to automate, and weak, compromised, automation. When people solve the wrong damned problem it shouldn’t come as any surprise that they fail.

Then there is the complete misunderstanding as to the costs of automation. This isn’t just about tangible financial costs, but what you lose by automating. Tools may be able to do certain tasks better than people can, but their focus is narrow. By attempting to remove the intelligence, intuition and even unpredictability of people from the testing equation you choose to ignore all manner of potential problems. Don’t get me wrong: I’m in no way anti-automation, in fact much of my work involves figuring out ways to use tools to help testers, but I believe that we need to get better at understanding exactly what tools do, and don’t do, for us.

OK, rant over. No doubt I’ll blog some more on this subject in the future.

4) It can be confusing for a newbie to testing - the scripts vs exploratory arguments, do you need to know how to code, QA vs QC vs Tester etc - what advice would you give to a newbie?

In a word: learn. Learn how to learn. Learn to love learning. Be ready, willing and able to learn whatever you need to learn in order to best serve your project as a tester. When I say learn, I don’t mean class-learning or book-learning, I mean learning by doing: taking ideas about testing and making them your own, changing, breaking and remaking them until they’re in your bones. And when you think you’ve learned enough, be ready to throw it away and try out a different way of thinking about the problem you’re working on.

5) What books are you reading at the moment and why are you reading them?

At the moment I'm reading What Computers Can't Do (Dreyfus). That's informing some of my current work in test automation. I'm also interested in the role of intuition in testing, which has led me to Working Minds (Klein, Crandall, Hoffman) which is a great reference for a set of techniques that can help you to figure out the heuristics that experts use to solve problems.

Bonus Question) How many people miss out the other 'i' in your name and does it annoy you?

Ha! That's the Scottish Gaelic spelling; the Cyclops variety is an English derivation. Once upon a time misspellings bothered me, but I'm used to it now. It's amusing how three vowels in a row can throw people into a dyslexic spin.
I've had just about everything: Iam, Lain, Liam...

Sunday, 24 March 2013

The Acorn Grew

A couple of months back I posted about a tentative Michigan Testers Meetup

Well, it actually happened - 30 testers arrived in Lansing, MI for an afternoon - and evening - of talking testing. When we came up with the idea we didn't know how much interest, if any, there would be; it was a pleasant surprise to find 42 people signed up, and most of those actually turned up.

Somehow I'd volunteered myself into doing a lightning talk and was first up - and to add to the nerves, the presenters were also being miked up. I'm not used to that, and it had been a while since I'd spoken in public.
Seemed to go OK, though I have another post lined up about a mistake I made in it...

Next up was Erik Davis, who had driven all the way up from Columbus, Ohio. His voice had gone during the week and he didn't think my suggestion of doing his presentation through the art of mime was a good one.
He improvised though and gave one of the funniest presentations I've seen - he had written up his talk and used a text-to-speech program ( with an impeccable British accent ) to read it out. The talk was about how Erik has wanted to set up his own local Tester Meetup and what he's done to get it off the ground. It's happening, so good luck to him !

Jess Lancaster, a fellow organiser, gave a very enthusiastic talk about tester skills and how often "technical skills = programming" rather than the full gamut of things that being 'technical' could cover. Loved his delivery and energy; any tester listening to him would want to leave the meeting and start working on improving their skills.

The event was kindly hosted by TechSmith - great facilities - and two of their testers, Wade Stevens and Clint Hoagland, gave talks on what they are up to. Wade loves using video for recording bugs and gave a good talk on why and how he uses it, while Clint talked about how the company is moving from Waterfall to Agile, the problems they face and some of the solutions they are coming up with.

Along with questions from the audience this was a good session; as you can see from the outline, the range of topics covered was wide.

Snack break next, and a problem found with inconsistent requirements. Trays of Subway sandwiches, but the plates that were put out were the TV-dinner type with 3 compartments, all of which were too small for a sandwich. Defect ?

Next up was Matt Heusser with a talk on automation and some of the pitfalls of the approach companies often take, where they set it up to do a 'click click click check value click click check value' routine which they think mimics the approach a human tester takes. But Selenium does not have an "oh look, these buttons overlap" function, or a "let's see what happens if we edit the second item in a shopping cart and save it, then re-edit it and add a third item - oh look, the program is in a mess" function.

To demonstrate his ideas he got the room to play Battleships, people were divided into teams, one half could be 'exploratory' and react to the feedback from their moves, the other team had to script out the plays they were going to use before play started. I'd taken part in this before at a GR Testers Meetup so took on the role of interested observer. Quite amusing to watch a room full of adults calling out "E4...hit"  "D7...miss"  "A6.. you sunk my cruiser". No real surprise when the exploratory groups won.
More discussion led by Matt of when and how automation can be used.
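For anyone who wants to try the comparison at home, here's a toy simulation of the exercise - a 1-D board and made-up sizes (the real game is 2-D), so purely a sketch, but it shows why reacting to feedback wins:

```python
import random

SIZE = 20       # a 1-D board keeps the sketch tiny; real Battleships is 2-D
SHIP_LEN = 3

def play_scripted(ship, rng):
    """Fire a pre-planned sequence of shots, ignoring all feedback."""
    plan = list(range(SIZE))
    rng.shuffle(plan)          # the whole script is fixed before play starts
    hits = 0
    for shots, cell in enumerate(plan, start=1):
        if cell in ship:
            hits += 1
            if hits == SHIP_LEN:
                return shots
    return SIZE

def play_adaptive(ship, rng):
    """Fire randomly until a hit, then probe the neighbouring cells."""
    untried = set(range(SIZE))
    follow_ups = []
    hits = shots = 0
    while hits < SHIP_LEN:
        cell = follow_ups.pop() if follow_ups else rng.choice(sorted(untried))
        if cell not in untried:
            continue                         # already tried earlier
        untried.remove(cell)
        shots += 1
        if cell in ship:
            hits += 1
            for n in (cell - 1, cell + 1):   # react to the feedback
                if n in untried:
                    follow_ups.append(n)
    return shots

rng = random.Random(42)
trials = 2000
scripted = adaptive = 0
for _ in range(trials):
    start = rng.randrange(SIZE - SHIP_LEN + 1)
    ship = set(range(start, start + SHIP_LEN))
    scripted += play_scripted(ship, rng)
    adaptive += play_adaptive(ship, rng)

print(f"scripted avg shots: {scripted / trials:.1f}")
print(f"adaptive avg shots: {adaptive / trials:.1f}")
```

Over many games the adaptive player sinks the ship in noticeably fewer shots than the scripted one - the same point the room discovered the noisy way.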

And then it was over.
Raffle tickets were drawn for some prizes, other testing events announced - and a plan made to hold another Michigan Tester Meetup in 3 months' time.

Some of us then adjourned to a local bar to carry on the discussions, including how many test cases it would need to cover the range of chicken wing options and sauces...
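Tongue in cheek, but it's a real combinatorial test design question. A quick sketch ( the menu options below are entirely made up ) shows how fast the numbers grow if you try to cover every combination:

```python
from itertools import product

# Hypothetical wing menu -- the options here are invented for illustration.
sizes  = ["6", "12", "18", "24"]
sauces = ["mild", "hot", "bbq", "garlic", "honey"]
styles = ["bone-in", "boneless"]
sides  = ["fries", "celery", "none"]

# Exhaustive coverage means one test per combination of options:
combos = list(product(sizes, sauces, styles, sides))
print(len(combos))  # 4 * 5 * 2 * 3 = 120 test cases
```

Add a couple more menu choices and the count multiplies again - which is why techniques like pairwise selection exist.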

Discussion of ideas - check
Discussion of other peoples experiences and problems - check
Enthusiasm level raised - check

- and for one tester who suffers from social anxiety but who pushed herself to make it happen - it happened. Kudos to Hilary and a great example of what can happen if you put yourself out there and start engaging with the test community - online and off.

Thanks also to Matt and Jess for helping organise, TechSmith for hosting - and for Michigan testers for turning up on a Saturday afternoon.

Wednesday, 20 March 2013

5 Questions With Ilari Henrik Aegerter

Next up is Ilari Henrik Aegerter, a tester from Switzerland who has moved on from testing cuckoo clocks and chocolate to be Manager of Quality Engineering in Europe for eBay.

1) Congrats on your recent move to eBay - as a tester, what was it about them that attracted you? 

Before I moved to eBay, I worked for a company called Phonak AG. They are the world’s leading producer of hearing aids. I worked there for seven and a half years and basically built up the testing team from scratch. It is a wonderful company and I have only positive thoughts about them. But after a certain number of years, a position in a company may become a bit too comfortable and new challenges are missing.

I have known my current manager – Michael Palotas – for a while; we were both members of the conference board of Swiss Testing Day, and during our collaboration on organizing the conference we discovered that our views on testing often matched. Also, on a personal level we always got along well.

Towards the end of 2011, Michael had an open position and he asked me if I was interested. At first I wasn’t, really, but decided nevertheless to have a closer look at it. During the following months and after many discussions I got a better insight into what eBay is all about, and – even more importantly – got the impression that within his team there was a real chance to build up something beautiful.

One thing which makes eBay very attractive for a tester is that it is the biggest online marketplace on the planet, and that comes with interesting testing challenges. Also, working with a distributed team – my team is located in Zurich, Berlin, Paris and London – makes the work interesting.

But most important: I have a degree of liberty to organize my team's testing the way I find suitable, and this level of trust in what I do is outstanding. Currently I could not imagine a better place to work.

2) What is your testing background and have you ever been a testing zombie?

Well, I think my story is quite similar to the story of many in testing. I fell into it by coincidence. In my late twenties I decided to go back to the university and studied General Linguistics and Sociology. As a student I did not have major expenses but of course still needed to earn some money.

On the university job board, Phonak AG was looking for somebody to set up operating systems on PCs for three days. I thought: I can certainly do that. That was in 2004, and the three days grew to seven and a half years. For the first 3 years I worked as a tester/test engineer, and then in 2007 I had the chance to become the line manager of 13 testers. In the same year I attended StarEast, where I had my first exposure to James Bach and his tutorial “Exploratory Testing Explained”.

At that time we were following a heavy, detailed test case writing process and the idea of approaching testing problems by the use of exploratory testing convinced me immediately.
However – and now comes the zombie part – I still pursued a program with my testers to have them all certified. I argued for it in a genuine zombie manner: “prove professionalism”, “a common language about testing”, “structured approach” and other horrible non-thinking utterances. Pure business gibberish-talk. 

The most radical change in my views on testing happened on 4 October 2011 at half past seven in the evening. I attended the StarWest conference in Anaheim, and since we wanted to have James Bach as a keynote speaker for the 2012 edition of Swiss Testing Day, I had dinner with him.
He did the Mysterious Spheres exercise with me. It was very tough and I was tremendously challenged by him. I almost fainted, but apparently I navigated through the exercise in a way that pleased James. Ever since we have been in close contact and James still is one of my most important inspirational people in testing.

3) Your recent blog post about Becoming A World Class Tester was great - do you consider yourself a World Class tester? What are your strongest - and weakest - skills?

Thanks, and I want to share the credit with the numerous reviewers of the article. Without the incredibly valuable input of the people in the community the message of the post would have been much weaker.

The title of the post is /becoming/ a world-class tester, and that is certainly something I pursue. I know many people who are far better testers than I am. (Actually, when I am hiring, these are the people I want to hire.) The cool thing about learning is that you never reach an end. It is a constant journey and I am a happy traveller on that path.

I have been told several times that my strongest skill is the ability to inspire others to become better at what they do. Puzzle solving appears to be another area I am fairly good at.

Although I also studied Software Engineering for two and a half years, I consider myself quite weak at the more technical side of testing. I would not hire myself as an engineer for test automation.

4) You're very active in the online community, what do you get out of it ?

The community supports my thinking. I enjoy having discussions with people in the context-driven community because there is a high concentration of smart people. Smart people are challenging and this challenge helps me to keep up learning something new every day.

I think one should carefully select the people one spends time with. The people around you set the limits of what you may become.

5) What books are you reading at the moment and why ?

Let’s Test 2013 is coming soon and I am in preparation mode for my tutorial and my session about observation and description. I am currently reading “Perception and Cognition: Essays in the Philosophy of Psychology” by Gary Hatfield. It is a thorough examination of how we perceive the world.

As a fond lover of comics, I always have some on my reading list too. Right now I am reading “Market Day” by James Sturm. It is the story of a Jewish rug trader and his journey to the market, where he tries to sell his lovingly crafted rugs but is turned away by everybody.

Generally I read for two reasons. First, I enjoy it because reading is seeing somebody else thinking, and second, there is a certain smell to new books that I find attractive. That is also the reason why I don’t own a Kindle. It just does not smell nice.

Friday, 15 March 2013

5 Questions With Keith Klain

Time to resurrect my "5 Questions With.." series and dig deeper into the thoughts and backgrounds of some of the testers active online today.

First up is Keith Klain of Barclays:

1) You seem to have become very active online recently with your Twitter account and blog - why did you decide to do this? What have you got out of being online and what are your impressions of the online tester community ?

About 10 years or so ago, I used to be more involved in the “public” testing industry as a consultant attending and speaking at conferences, etc. But around 2001-02, I became very disillusioned with the whole testing industry. Maturity models and certifications were really coming into their own, and though I couldn’t articulate it at the time, it all felt shallow and distracting – almost anti-intellectual.

As well, the testing conference circuit is unbelievably boring with the same people saying the same things over and over and over again, so I receded from public life, stopped attending conferences and just focused on building my own teams.

Around the same time I became a closet disciple of James Bach and the context driven community after reading Lessons Learned in Software Testing. I used my time as the Head of QA for Equities IT at UBS Investment Bank to try new things like visual test strategies, etc. and made LOADS of mistakes there. 

After taking the job at Barclays, I realized very quickly that I was going to have tremendous senior management support and a real shot at building a testing team the way I’ve always wanted to run one. We worked very closely with James Bach and Michael Bolton in defining our training regime to focus on testing skills and applying CDT principles in a big way.

My boss at the time actively encouraged me to get out there and talk about what we were doing as a recruitment tool, as we were struggling to find people with an open mind. All that, as well as some mild prodding from Michael and James to talk publicly about the success we were having, got me re-engaged with the testing community.

I think it’s due to the questioning nature of our business and the people it attracts, but I love the online testers and the testing community. I’m a big advocate for testers in general, and think it’s important to have counter examples as so much in the testing industry is harmful to testers. Getting the GTC story out there as an example of how things can change for the better (although it’s not perfect here) has become part of my advocacy.

2) How did you start off your testing career and how has your thinking on testing changed since then? Were you ever a testing zombie?

I think the start of my testing “career” was when I joined a company called Spherion, which had a Software Quality Management practice that specialized in testing. They had written a methodology, training, and a support network you could tap into for advice and mentoring. Their approach was basically the V-model and very rigid, with lots of documentation filled with wonderful stuff like “phase containment” and test case counting.

Working my way up through the ranks from test analyst, to automation engineer, to test manager, to practice director, I had to learn all that stuff well enough to go into the business side of running a testing practice. That’s very helpful now, as I know the arguments for factory-style commoditized testing inside and out – I’ve used them all!

I would never call myself a zombie, as that implies a mindlessness that I’ve never suffered from, but I definitely had a period of “un-enlightenment” about the mission of testing.

The biggest shift in my approach to testing and managing testers is realizing that we are in the knowledge business, not manufacturing. I think that is one of the most common (and harmful) mistakes that testers and people in IT make when it comes to testing. Managing people who use their brains to creatively solve problems takes a complete paradigm shift in how you communicate with and motivate them.

The mistakes I’ve made in the past were not giving people enough autonomy to get their work done and not removing fear from the organizational structure. Fear is like an odorless, colorless gas that seeps under the door, and before you know it, everyone is asleep.

In all honesty, I’ve found that being more transparent with people on strategy, operations, finances, etc. has actually made my attrition rates go down! That runs directly counter to the prevailing HR policy of telling people what YOU think they need to know to try to manage them better. My policy is to tell them everything and let them manage their own expectations.

3) What was your biggest challenge in making the changes at Barclays? Did you get any pushback from the testers that were there?

Education is one of the biggest challenges, due to stereotypes and ingrained bias developed from decades of bad metrics programs, flawed maturity models, and low-value testing. Testers have to take responsibility for their own contribution to the problem as well, as we can reinforce a lot of those perceptions by how we conduct ourselves and inherently limit our value.

I believe that if you want to drive change in an organization and get congruent action from culturally and regionally diverse teams, you have to focus on what you are contributing to the problem first, articulate your values and principles to give people a lens to view their work, then develop strategies that are aligned to the business you support.

Funnily enough, when we cancelled all the metrics and maturity programs, some of the loudest protests were from the testers! And that’s because they didn’t know how to measure themselves outside of purely quantitative means; they couldn’t articulate their business value. Most of the folks left fairly early on but some stuck with it and are contributing in a really meaningful way now. 

My team has been absolutely fantastic in trusting me in making these cultural shifts, and it’s been the best job I’ll probably ever have. I am extremely fortunate to have the team around me that I have now and any success or recognition is down to their hard work and dedication.

4) What book(s) are you currently reading - and why are you reading them?

Right now I am reading two books: “The Invisible Gorilla” by Christopher Chabris and Daniel Simons, and “Thinking, Fast and Slow” by Daniel Kahneman. “The Invisible Gorilla” is about inattentional blindness and how it impacts what we believe we know about memory and observing things - both great topics for testers. 

Michael Bolton is an incredible reference for books that can inform the way you test, and he’s probably given me a dozen great suggestions that I’m working my way through. I also put a bunch of stuff I consider "required reading" on my website.

5) You say in this blog post that the greatest challenge is finding good people - why do you think it's hard to find good testers? What makes a good tester to you - and what advice would you give to a new tester who wants to become a great one?

It’s hard to find good people for several reasons, but primarily, we are looking for people that are not coming at a task with a prescribed outcome in mind. The CDT community is relatively small and finding them at all is hard, and then try adding in people that are in the right country at the right time, and it’s nearly impossible! 

A good tester to me is humble, curious, honest, and knows how to construct an argument in the classical sense. My advice to anyone wanting to be a great tester is question everything, read A LOT, and get involved in the CDT community. Even if you don’t subscribe to everything that the CDT community believes in, it is a great place to debate, sharpen your arguments and learn. It can be a bit intimidating at first, given its reputation for rigorous debate, but I have never seen a group of people more genuinely concerned for the betterment of testers.


Many thanks to Keith for taking the time to answer the questions in such depth.
Look out for more in this series of posts - if you want to be part of this or want to suggest someone then contact me or leave a comment.

Tuesday, 12 March 2013

A Factory of Bugs

The latest GR Testers meeting was at The Factory and the topic was 'defect types' - or to give it a grander title, Bug Taxonomies. One of our newer members had asked about different defect types at the previous meeting, so we used this as the topic for this meeting.

Introductions over, the topic for the evening was announced and I kicked things off by asking those present who knew what a bug taxonomy was and if they ever made use of it.
No raised hands and a few blank faces.

I used my copy of A Practitioner's Guide to Software Test Design to show some examples and to explain about the different taxonomies that have already been documented such as:
- the 500 listed in Testing Computer Software
- 60 high-level ones from Vijayaraghavan that are specific to a shopping cart
- ones specific to OO by Robert Binder
and I referenced Binder and his two approaches to testing:
1) "non-specific fault model" - use requirements and specifications to guide the creation of test cases
2) "specific fault model" - use a taxonomy of defects to guide the creation of test cases

We then discussed how to use bug taxonomies - can they help answer the question:

"if there is a bug in this part of the system how would you find it ?"

- if you are aware of the potential defects then does this knowledge help you find them more efficiently ? Would you miss them if you did not know about them ? If you're unaware of memory leaks then you're not going to write a test to see if your app has them.
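To make that last point concrete, here is a minimal sketch in Python of how awareness of a defect type turns into a test you would otherwise never write. The `leaky` function and the growth threshold are invented for illustration; the technique - snapshotting traced memory with the standard library's tracemalloc before and after repeated calls - is real.

```python
import tracemalloc

cache = []

def leaky(data):
    # hypothetical function under test: keeps a reference it shouldn't,
    # so memory retained by the program grows with every call
    cache.append(data)
    return sum(data)

tracemalloc.start()
leaky(list(range(1000)))                     # warm-up call
base, _ = tracemalloc.get_traced_memory()    # baseline after one call
for _ in range(100):
    leaky(list(range(1000)))
now, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = now - base
# steady growth across identical calls is the signature of a leak;
# a tester unaware of memory leaks would never think to check this
print("possible leak:", growth > 100_000)
```

If you had never heard of memory leaks, nothing in a requirements document would prompt a check like this - which is exactly the argument for knowing the taxonomy.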

Conversation took one of the many detours that make the chats at these events so interesting as usability was discussed: how usability bugs could get short shrift ( because the product has shipped and the PM got a tick for doing so ), how usability can be great but the back-end is rubbish as it's built on a 30 year old system, or how usability problems get ignored because "the customers are using the program wrong".
Seems there is enough material here for a topic on its own.

Next detour was to discuss what makes a defect a defect - a trap cunningly laid by Pete Walen to see if people were aware of safety language and the possible legal ramifications of actually using the word 'defect'.
It was interesting to me to hear different people using different words for bugs and some being quite adamant ( sometimes because it is official company policy ) that 'bug' and 'defect' were different things, as was 'issue'.

There was a bit of confusion about using bug taxonomies as couldn't a bug be in several ? If there was a deadlock bug then wouldn't this also count as a usability issue if the customer couldn't update their records ? And weren't all bugs in effect usability issues ?

This was resolved by explaining that by using taxonomies you would run tests to see if that form of defect existed in the system. To test for a deadlock issue you wouldn't be running a usability test, there would be tests specifically for a deadlock. Then, maybe if deadlocks could happen and could not be fixed there would be a usability test to see if the error for a deadlock was handled correctly.
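As a sketch of what a deadlock-specific test might look like ( as opposed to a usability test ), here is a small Python example - the lock names, timeout, and structure are all invented for illustration. Two threads take two locks in opposite order; the barrier makes the lock-ordering collision deterministic, and the bounded acquire turns what would be a hung test run into a reportable failure.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)
results = {}

def worker(name, first, second):
    with first:
        barrier.wait()                      # both threads now hold their first lock
        # a bounded wait turns a potential hang into a reportable failure
        got_second = second.acquire(timeout=0.5)
        results[name] = got_second
        if got_second:
            second.release()
        barrier.wait()                      # hold the first lock until both attempts finish

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a))
for t in (t1, t2):
    t.start()
for t in (t1, t2):
    t.join()

# Neither thread could take its second lock: a classic lock-ordering
# deadlock, surfaced by the test instead of hanging the run
print(results)
```

Nothing about this resembles a usability test - which was the point of the resolution above.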

The evening was wrapped up by asking around the table to see if anyone had encountered 3 types of bug not usually found in a standard bug taxonomy - Heisenbugs, Mandelbugs and Schrödinbugs

Next meeting is on automated vs manual testing

Meeting notes can be found here - thanks to Pete Walen

Sunday, 10 March 2013

The Tribe That Worshipped 3

Reading some of the recent blog posts about sociology and anthropology being useful to a tester got me thinking about what would happen if an anthropologist did study testers...

They might come across the Testing Tribe where the number 3 seems to have great significance.

First sign of this is that the company they work for has 3 letters in its name.
The last letter also happens to be C - notice that 'C' rhymes with 'three' - coincidence ?

The people in the Testing Tribe of 3 seem to have arranged themselves in a hierarchy of 3 - Test Manager, Test Lead and Testers. The anthropologist was unsure of what value the Manager and Leads brought apart from writing reports to each other and everyone involved in the project but that's for another study.

The Test Phases also used The Power of 3 - there was always a System Test Phase, an Integration Phase and a UAT phase. These phases had to be kept totally distinct and separate.

Within each phase, the magical properties of 3 were used - the tests were always run in cycles of 3 no matter what the results of a particular cycle were.

The Test Managers used their hierarchical position to make sure each day respected The Power of 3 by making sure there were 3 conference calls each day - morning, noon and evening. Some non-believers wanted to know what the point of a morning call was as it gave the same information as the evening call but they were sent into exile ( aka turning manual test scripts into QTP scripts )

The end of each test phase was marked by observance of the Rule of Three. The Exit Criteria demanded there should be only 3 Severity 3 bugs outstanding. There had been some early controversy where some people had said that the rule was actually "no more than 3" but they too had been sent into exile ( aka transcribing the conference calls into Powerpoint )

This meant that the final hours of a testing phase were when The Ceremony of Fudging Numbers was performed and testers were put under excruciating pressure ( I will spare the readers the gory details but hint that it involved renumbering test steps in QC ) to make the numbers match.

Maybe I should stay off the Founders Beer when writing blog posts, but it would be interesting to see what an anthropologist would make of it if they studied testers at work....

Friday, 8 March 2013

Evil in Your Living Room

Several years ago I looked evil in the face - but I did not know it at the time.

I was just starting to dip my toes into the swirling waters known as Testing. I'd found a great site that had lots of informative posts to learn from, Testing Reflections, and the guy running the site, Antony Marcano, gave a friendly welcome to a newbie.
A few months later on there was a testing conference in London and Mr Marcano was one of the speakers so I persuaded my boss to send me and off I went.

After listening to his talk I plucked up my courage and introduced myself to him and he invited me along to the usual apres-conference drinks and networking at the local pub. More courage plucking and I went along. Antony introduced me to a friend of his who talked enthusiastically at rapid speed, and I nodded my head and tried to keep up with what he was saying.

I learned afterwards that I had had my first brush with evil - for the guy in the pub was Alan Richardson aka The Evil Tester

Since then I've had a couple more encounters with evil, most notably at last year's Test Bash with his Evil Testers Guide To Eeevil

I also used his Selenium Simplified book to get an intro to Selenium ( and to ensure his evil reputation he uses Java, and I ended up installing Eclipse ). This was one of the things that rekindled my interest in code and being more technical.

And the 't' word ( technical ) brings things full circle.

This week I signed up and started going through his free course on Udemy - Technical Web Testing 101. Several years after I first met him, he's still talking enthusiastically and getting you to think about your approach to testing and how it could be improved. Just listening to the videos got me fired up, revealed some holes in my knowledge and approach - and left me wanting to watch the rest of the videos so I could start learning.

It's not often that an evil person is held up as a role model - but the Evil Tester is one of the exceptions.

Wednesday, 6 March 2013

Pity The Testing Noob

There are a few blogs that will list blog posts the author has read that week - or day. One thing I find lacking from these is what the author took from those posts: did they learn anything new, what will they do differently, why should I bother going to read the post, what might I get out of it ?

So I thought I'd try and do just that with posts I've read this week - and maybe turn this into a regular series. But as I was doing this I wondered what a newbie to testing might make of these posts and how they could end up very confused...

Imagine you're a newbie to testing, maybe you've read a book on the basics, had a surf around Wikipedia and you have the idea that there are test strategies, plans, scripts, test cycles and defects.
Determined to get better you start surfing The Interwebz and start reading the blogs....

First thing you might find out is that The Test Plan Is Dead ( poor old test plans; not long ago there was the 10 Minute Test Plan, now they are pushing up the daisies )
Reading that post, the noob tester finds out about Confidence Maps, this seems exciting stuff, he should go and start implementing some of this !!!

Not so fast, another blog post appears which talks about The Confidence Game and making confidence the mission of your testing is a mistake !!
( and that NOT giving your opinion is the hardest testing skill to master. Noob tester makes a note to start practicing NOT giving his opinion, which seems simple as his test manager seems to ignore him anyway apart from asking how many scripts he has written/run )

Luckily for the noob tester the testing mission is for his test managers to work out, he can concentrate on doing his testing and reporting his bugs.
Or not. He finds out he's not using Hexawise or doing combinatorial testing, which means he's at best half as productive as he could be.
He hangs his head in shame.
And his scripts never mention running a Shoe Test, maybe that's a special input that Hexawise has, Shoe/Sock/Barefoot ?
He makes a note to make sure he wears fresh socks every day and to wear his best shoes in case he ever has to run this test.

The test lifecycle he is following also has no mention or distinction of when and how much creative and critical thinking to do. No worries, he's now learned his lesson and will make sure to include this ( once John Stevenson has detailed what the Best Practice for this is and precisely how much of each to do in each phase ) in his next Test Plan.
Damn, forgot that Test Plans were dead.

Ah well, stick to logging bugs - oh noes his defect tracker does not have an "unclassified" field for the severity, should he ask his manager to add one ?

Undaunted, the valiant noob tester decides to work on his skills ( other ones besides the 'not giving his opinion' one already noted )
But which ones ?
Technical skills of course otherwise he'll end up as dead as the Test Plan - but taking care to find a balance as Manual testers rock and are the heart and soul of testing.

So to balance the technical web testing course he's signed up for, maybe some sociology, along with some whisky, cigars and a history lesson in using soft skills with Hiram Ulysses Grant, US Civil War General

After all this the noob tester still considers himself lucky - what about his colleague doing UAT ?
Seems that UAT is the hardest testing to do

Monday, 4 March 2013

Binary Linked Hashes - which one would YOU choose ?

Reading comments on Hacker News about the performance of linked lists and hash tables got me going down another rabbit hole...

Take this example from Stack Overflow about Binary Trees vs. Linked Lists vs. Hash Tables
From the answers, it seems there is a standard trade-off:
  • Binary Trees
    • medium complexity to implement (assuming you can't get them from a library)
    • inserts are O(logN)
    • lookups are O(logN)
  • Linked lists (unsorted)
    • low complexity to implement
    • inserts are O(1)
    • lookups are O(N)
  • Hash tables
    • high complexity to implement
    • inserts are O(1) on average
    • lookups are O(1) on average
and of course you know what O(logN) and O(N) and O(1) mean...
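To make the trade-off concrete, here is a small Python sketch ( the sizes and numbers of runs are just for illustration ): a linear scan of a list stands in for the unsorted linked list's O(N) lookup, a set for the hash table's O(1), and binary search with bisect over a sorted list plays the role of the balanced binary tree's O(logN).

```python
import bisect
import timeit

n = 100_000
data = list(range(n))      # already sorted, so it can double for the bisect case
as_set = set(data)         # hash table: O(1) average lookup
target = n - 1             # worst case for a linear scan

def scan_lookup():         # unsorted linked-list behaviour: O(N)
    return target in data

def hash_lookup():         # hash table: O(1) average
    return target in as_set

def tree_lookup():         # binary search stands in for a balanced tree: O(logN)
    i = bisect.bisect_left(data, target)
    return i < len(data) and data[i] == target

for name, fn in [("linear scan", scan_lookup),
                 ("hash lookup", hash_lookup),
                 ("binary search", tree_lookup)]:
    print(f"{name:13s} {timeit.timeit(fn, number=100):.4f}s")
```

On any machine the linear scan should come out orders of magnitude slower than the other two for a worst-case target - which is the whole Stack Overflow answer in three timings.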

So when you're pairing with the dev and going through the code ( because all testers need to know code ) and you find out he's using a linked list, you can point out the error of his ways...

or maybe not. 
Writing this post reminded me about Alan Page and his blog posts and article about the 5 Orders of Ignorance:

"The 2nd order of ignorance is lack of awareness. You have 2OI when you are unaware of what you don’t know."

But having read this post you've levelled up and know of things that you don't know ( you're welcome ) and decide to tackle it by doing a Google search and finding a book such as the (in)famous SICP ( Structure and Interpretation of Computer Programs ) or Introduction to Algorithms ( especially the Third edition which covers van Emde Boas trees !! )

Or maybe not.
Is this a rabbit hole worth going down ?
Any readers of this blog familiar with algorithms and data structures ? Have you found them useful in your work ?