Wednesday, 24 September 2008

Comments Please

I was reading about Chrome and how Google might have got themselves into trouble by reverse-engineering Windows.

The giveaway seems to have been a comment in the code:
// Completely undocumented from Microsoft. You can find this information by
// disassembling Vista's SP1 kernel32.dll with your favorite disassembler

Usually code commenting is A Good Thing - is this a case where it was actually A Bad Thing?

It did remind me of a tester I interviewed who said he did code reviews. When I pressed for further details, he said that he didn't actually read or understand the code; he just looked through it to make sure there were comments in it...

But at least there were probably comments to be found. One of my first jobs when I joined the s/w industry was to go through a huge pile of Assembler code and document how it worked, as the code had been written by a third party and the contract didn't state that there had to be any code documentation.

And finally, be careful with comments or you could waste 357 years.

Tuesday, 23 September 2008

Rewarding Quality

Read a blog about The cost of (not) testing software

It was not a new idea; by now everyone must know the theory that the later in the lifecycle a bug is discovered, the costlier it is to fix.

What did get me thinking was one of the comments on the post, where the question was raised about companies not rewarding and recognising employees who find and fix defects at an early stage.

I'm sure all testers know the about-to-ship squeeze, with late-night and weekend bug blitzes and fixes and managers buying in pizza for the testers.
If a defect does slip out to a customer then there's the chance to be a hero and either find a workaround or come up with a bug fix and patch.

Does anyone get rewarded for finding the defects early?
Do all companies wait until the end to do the big push to get the defects found and cleared?

Monday, 22 September 2008

Don't be a Negative Tester

I was reading a blog post about the concept of a Net Negative Producing Programmer or NNPP

An NNPP is a programmer who

"inserts enough spoilage to exceed the value of their production"

It made me wonder if there were NNPTs, Net Negative Producing Testers, and then I read a discussion on the Software Testing Club site about low-quality bug reports with the great title of

If I were a developer...... I'd hate you too

Having been on the wrong end of such bug reports when I was a developer, I knew just what he meant - and then I thought about the amount of time wasted trying to find out what the bug was and how to reproduce it.

It wasn't only poor bug reports that wasted time. The majority of the bug reports were low-hanging fruit: easy to find (and fix) but trivial in nature - tabbing order, buttons misaligned etc. There was no bug triage in place to sort them into priority order, so it meant either reading through the entire bug list to find the severe ones - or, most commonly, just starting with the first on the list and working through it.

The result - lots of time spent fixing minor bugs and the customer finding the severe ones.
This then led to management wondering what the point of testing was if they were unable to find the bugs that mattered - why not send it straight out to the customers?
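Even a lightweight triage pass - sorting the open bug list by severity before anyone picks up a report - would have surfaced the bugs that mattered first. A minimal sketch of the idea (the severity labels and sample reports are made up for illustration):

```python
# Minimal bug triage sketch: order open bug reports so the most severe
# are looked at first, instead of working through the list in file order.

SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2, "trivial": 3}

def triage(bug_reports):
    """Return bug reports ordered from most to least severe."""
    return sorted(bug_reports, key=lambda bug: SEVERITY_RANK[bug["severity"]])

bugs = [
    {"id": 101, "severity": "trivial", "summary": "Buttons misaligned"},
    {"id": 102, "severity": "critical", "summary": "Data loss on save"},
    {"id": 103, "severity": "minor", "summary": "Tabbing order wrong"},
]

for bug in triage(bugs):
    print(bug["id"], bug["severity"], bug["summary"])
```

With even this much in place, the first report a developer reads is the data-loss one, not the misaligned buttons.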

Make sure the developers hate you for the right reasons, and make sure your efforts are helping the project, not hindering it.

Friday, 19 September 2008

Test Like A Pirate

To mark Talk Like A Pirate Day I could drink some grog, make some developers walk the plank and say arrrrrhh a lot...

Or I could remind people about The Pirate Heuristic - "when you run out of ideas - steal someone else's"

Then I found that I couldn't make developers walk the plank as some of them do Pirate Testing themselves

Didn't find any "testers are like pirates because..." analogies to add to my testing cliches list.
Any ideas?

Monday, 15 September 2008

Impatient to learn

On the SCRUM mailing list I read about a new book that was in progress -
" Succeeding with Agile: Getting Started and Getting Good " by Mike Cohn

Went to the book's website and was disappointed to read that

“The publisher has announced a publication date of June 4, 2018”

That’s 10 years away; that doesn’t seem very agile to me...

I’ve also been disappointed with another book - the latest one from Jerry Weinberg

Perfect Software And Other Illusions About Testing

I have had it on pre-order with Amazon for ages. It seems people who were at the CAST conference were able to get their hands on a copy, but I'm still waiting and waiting and waiting...

Hopefully Lisa Crispin's next book on Agile testing won't run into any difficulties.

Good thing there are plenty of blogs to read.

Friday, 12 September 2008

Testing Cliches

Surfing Google News to see what the latest developments on the testing front were, I found this article about a static analysis tool.

It sounded like an interesting, useful and cool tool but sadly the article said things like

"Klocwork [Insight's] static analysis takes the runtime burden away from engineering and QA,"


"However, if engineers are able to see and fix their own code, they are able to
preclude that defect from ever being seen by QA or customers. "

Surely the use of such a tool is part of a QA process and means that defects picked up by the tool won't be seen by QC?

Yes, the old QA is not QC argument...

One I had many times at my last company, trying to educate people as to why saying they were giving the program to QA "to be QA'd" was wrong, and why I wanted directory names on the test server called "FOR_QA" to be renamed to "FOR_QC".
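To make the QA/QC distinction concrete: a static check inspects the source without running it, so any defect it catches never reaches the testers at all. A toy sketch (nothing to do with Klocwork itself - just a few lines using Python's standard ast module) that flags "== None" comparisons, which should normally be written "is None":

```python
import ast

# Toy static check: walk the syntax tree of some source code and flag
# "== None" comparisons. A real tool like Klocwork does far more, but the
# principle is the same: the defect is caught before the code is ever run,
# so QC (and the customer) never see it.

def find_none_comparisons(source):
    """Return the line numbers of any '== None' comparisons."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            if any(isinstance(op, ast.Eq) for op in node.ops):
                operands = [node.left] + node.comparators
                if any(isinstance(o, ast.Constant) and o.value is None
                       for o in operands):
                    findings.append(node.lineno)
    return findings

sample = """result = lookup()
if result == None:
    handle_missing()
"""

print(find_none_comparisons(sample))  # flags line 2
```

The sample code is never executed - only parsed - which is exactly what makes this a QA activity rather than a QC one.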

And now that I've blogged about it I can join the list of QA/QC bloggers such as


John McConda and Antony Marcano (and again) on testingReflections

The Braidy Tester

Alan Page

Steve Rowe

For my next post I need to think which "testing is like..." analogy I can use as that is also a common theme of a testing blog.

Detectives, surgeons, kung-fu, playing pool, driving a car, introducing a guest in your home, dishwashing, a box of chocolates, flossing teeth, growing turnips, toilet paper, hide and seek, marriage and Magic: The Gathering have already been done, so I'd better get thinking hard if I'm going to come up with a new one.

And finally, I gave this post the title 'testing cliches', which is a good example of the problems of the English language and how it can be ambiguous.
Does "testing cliches" mean a list of cliches that apply to testing?
Or does it mean that cliches are being tested to see if they are true?

Tuesday, 9 September 2008

Inside the brain of an agile tester

Monday evening I went to a free evening talk, Understanding QA/Testing On Agile Projects, given by Antony Marcano, who runs the testingReflections website
(which, incidentally, was one of the first testing websites I found when I was making the switch from dev to test).

It was an interesting talk, dealing with a lot of the misconceptions of and objections to Agile, how and why to write and use User Stories, the use - or not - of a bug tracking system in an agile project, the skills an agile tester should have, etc.

There were two great things about the talk:

1 - Antony has been involved in XP/Agile for 8 years and so his talk was full of real life cases about how it can be made to work

2 - The passion he shows for the subject. Even after 8 years doing it he is still so enthusiastic about it and admitted that he is still learning

The talk and Q&A session lasted an hour and a half, but it could easily have gone on for several hours.

After the talk some people moved on to the pub, where I was able to meet and talk to another of the attendees - Gojko Adzic.

A pleasure to meet him and great to listen to him and Antony enthusiastically debating ways of running automated tests that could be written in a customer friendly fashion.

The talk was given the wrong title though - it was called "In The Brain Of Antony Marcano", it should have been "In the HEART of Antony Marcano"

(The talk was being videoed and should be available soon.)

Wednesday, 3 September 2008

Minor Defects, Major Thoughts

Back to blogging after a two-week break to go and get married in the US. The procedure to get a UK visa for my wife led to some thoughts.

You can apply for your visa online.

Initial impressions weren't too good when we had to choose the current state of residence of my partner and found Massachusetts spelt as "Massachusettes"

My testing senses then tingled some more when I saw screens like the one below

As there is no option on this screen to enter any text, why bother to have the instruction that all questions must be answered in English?

No serious bugs though so we were able to complete the application and got the visa

Once we were back in England the next step was to get a National Insurance number, this is done at your nearest Job Centre and there is a website to help find the nearest one.
All you had to do, according to the on-screen instructions, was enter your town/city and press the 'Find' button.
So I entered 'Bracknell' and was told that "Bracknell was not a valid postcode".
I entered my postcode instead and voila, the location of my nearest centre.

All of which led to this sequence of thoughts:

My initial thought on finding these problems was "Who tested this???"

Thinking about it more, it was possible that the defects had been found but a decision had been made not to fix them. They didn't stop the programs from working. It would be nice if government websites had higher quality standards, though.

For the postcode/town problem, what was the real defect? The spec could have been that a customer enters a postcode, and it was the label telling you to enter a town that was wrong.

Or maybe the tester was so used to entering a postcode when prompted for a location that they didn't pay attention to the label.

What if I was a uTest tester and reported the postcode defect as the application not recognising "Bracknell"? Would my bug be rejected, and would I earn no money, because the actual defect was that the label asked for a town? Unless I had access to the requirements, how would I know which part was correct?
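The ambiguity boils down to two candidate fixes for one symptom: either the validation should accept a town name, or the label should ask for a postcode. A hypothetical sketch of the finder's check (the function name, the messages and the postcode pattern are all my invention, not the real site's code):

```python
import re

# Hypothetical sketch of the Job Centre finder defect: the on-screen label
# asks for a town/city, but the validation only accepts a UK postcode.
# Which part is "the bug" depends on what the spec actually says.

LABEL = "Enter your town/city and press 'Find'"

# Crude UK postcode shape, e.g. "RG12 1AA" (illustrative, not the real rules).
POSTCODE_PATTERN = re.compile(r"^[A-Z]{1,2}\d{1,2}[A-Z]?\s*\d[A-Z]{2}$")

def find_nearest_centre(location):
    """Validate the input the way the site apparently did."""
    if not POSTCODE_PATTERN.match(location.upper().strip()):
        return f"'{location}' was not a valid postcode"
    return "Nearest centre found"

# A tester following the label types a town name...
print(find_nearest_centre("Bracknell"))   # rejected, echoing the experience above
# ...but a postcode works.
print(find_nearest_centre("RG12 1AA"))
```

Without the requirements, a bug report could legitimately blame either LABEL or POSTCODE_PATTERN - which is exactly the reporting dilemma.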

Having heard James Whittaker's talk on the future of testing (which he has now blogged about) and how crowdsourcing could work, maybe I was proof of the concept - I was part of the crowd and I had found some bugs.

Except that I wasn't going to take the time to report them.

In the future, would all programs have a 'report this bug' button?
Would The Crowd bother to press it?

Who would the bug report go to? The website owner? But what if it was a browser bug or an operating system bug - should they be sent all the bug reports as well?

And what happens when the bug report arrives? The tester of the future gets a virtualised copy of my system and a log of my actions - but they could still end up looking at the screen and wondering exactly what the problem was and what bug I was reporting.

Or maybe I just think too much and shouldn't be such a grumpy old man when I find mistakes....