A recent posting on the Software Testing Club about memorable bugs you had found got me thinking...
I posted to the discussion about how it was a tester missing a bug that led to me becoming a tester, and then I thought about other bugs that have been memorable to me.
The 15 second bug
Having established a reputation as someone who could break a program very easily, one day one of the programmers, feeling quite confident about his code, challenged me to see how fast I could break his latest release.
He was somewhat ashen-faced when 15 seconds later I had a crash to show him. It was nothing special - one of the old tricks from the tester's bag, leaving an input field blank - and I knew this particular programmer had a history of not handling that ( he never seemed to learn ).
Why was this memorable? Because it established my credibility with the programmers ( I did it in front of a few of them, just like a magician ). They wanted to know how I did it so fast, and some of them began to learn how to do some testing themselves - it was the start of getting them test infected.
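For anyone who hasn't met that trick, here is a minimal sketch of the kind of failure a blank field can trigger. The function is hypothetical - it is not the code from the story - and simply shows what happens when input is trusted to contain a number.

```python
# Hypothetical order-entry handler, invented purely for illustration.
# It trusts the quantity field to contain a number, so a blank field
# raises an unhandled ValueError - exactly the sort of crash the
# "leave it blank" trick turns up in seconds.
def order_total(quantity_field: str, unit_price: float) -> float:
    quantity = int(quantity_field)  # blows up on "" with ValueError
    return quantity * unit_price


if __name__ == "__main__":
    print(order_total("3", 9.99))   # fine: 29.97
    print(order_total("", 9.99))    # blank field -> ValueError
```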
The Competition Bug
This was when I got my first inkling that I was a tester. I was in a team of programmers working on a new system and one of the programmers found a bug in my part of the code. Much teasing and pointing of fingers. So of course I had to be childish and retaliate and try to find a bug in HIS code. Easy. Then I found another. And another. And another and soon he was begging for mercy.
And we were wondering why the test team weren't finding these bugs, so I ended up spending more and more time testing rather than programming.
Sadly, at the end of the project I was moved back to programming, but it was something I remembered a few years later when I was considering a career change.
( Good thing I wrote code with bugs in it !! )
So, bugs don't always have negative consequences
Thursday, 30 October 2008
Sunday, 26 October 2008
CSI Testing
Finding the first defect on a program can be a bittersweet moment.
Finding one means that you have some evidence to show that it's worth having a tester on your team - that's one less defect a customer is going to find ( assuming it does get fixed ) - so rah rah RAH to the tester team.
The downside is that it means there is a defect in the product; once again it has been shown that humans + software writing = mistakes.
In a recent blog entry, James Whittaker was trying to explain to his son which part of the work he did, and was unable to give him a satisfactory answer.
Which led me to thinking about one of the common 'testing is like...' analogies - that being a tester is like being a detective, such as this one.
Do detectives/police have the same bittersweet moment when they get called to a crime scene? Do they involve themselves so much in the process of the work - collecting evidence etc. - that they don't see the bigger picture and wish there were no crimes for them to be called out to? Do they wish they could tackle the root causes of the crime rather than the aftermath?
Or are they happy to go off and "create a GUI interface in Visual Basic, see if I can track an IP address."
Friday, 24 October 2008
Grammar Heuristic
One of my recent blog posts was picked up by QA Hates You, who made a comment about quality:
Additionally, this would include an interface that lacks grammar and spelling issues; any time I see those, I just assume the developers are as code-illiterate as they are English-illiterate and that logic defects aren’t far below the surface
This does seem to be a common reaction - if the UI looks bad then assume the underlying code is also as bad ( or worse ).
I posed the question on Twitter ( if you see a bad UI do you assume the code is bad ) and got a couple of responses:
Jason Barile
I would certainly question the quality of code and perhaps the priorities of the coders/testers
james_christie
I cringe. I fear for the level of overall quality if no-one noticed it. It suggests sloppiness and a lack of pride in the work.
It can be a good heuristic to use but it's not always valid
From my dark days as a programmer, there are two situations where it doesn't tell the full story.
For a lot of programmers it's all about the code - grudgingly they will fit a UI onto the top of their code so that mere mortals can use it, but it's not their top priority. At one company there was always the promise that a professional UI designer would be brought in to take care of the UI, but that never happened. So we'd sling together some rough prototype, ask for some feedback ( which never arrived ) and then the code would ship.
It could indicate that the company's management didn't take quality seriously enough to pay for a UI designer ( and tester ), but as a measure of code quality it wasn't a fair indication.
Alternatively, there is also the case that some companies rely on smoke and mirrors and will put a large amount of effort into making the UI look slick and polished ( especially when there is an upcoming trade show to demonstrate at ) and pay little attention to the real functionality behind it
And maybe the UI can put a slight bias on testing efforts - if the UI is sloppy then there must be bugs to find; if it's slick then maybe, just maybe, you won't try as hard. With the increasing number of programmers using unit testing, the correlation between poor UI and poor code is not as fixed as it maybe used to be.
Anyway, it was about time I had a blog post with 'heuristic' in the title
Thursday, 23 October 2008
Testing Books Market Dries Up
Monday, 20 October 2008
Finding My Way
A few weeks ago I was fortunate enough to visit Venice. The guidebooks warned that it was easy to get lost in Venice so I tried to prepare myself with maps, directions, itineraries.
Didn't help - within minutes of being dropped off by the water taxi we were lost.
Wandering round, we'd often end up back where we started or totally lost again.
Impossible to walk in a straight line as there are so many turns and twists, canals and bridges, and long Italian place names, so unless you have a life-size map or are at one of the main sights it's really tricky to find yourself on a map. And the narrow alleys and buildings mean there are no landmarks to spot to help.
So what has this got to do with testing?
A number of analogies came to mind
It could be used to argue against the waterfall approach - all the careful planning I did just didn't help very much when I was faced with the reality of Venetian streets.
It could be used as a good example of the dangers of ad-hoc testing: wandering off without any plan or direction meant going round in circles. Adopting a fully scripted approach of following directions exactly would not only have been incredibly tedious ( checking where you were every 5 yards ) but would have meant missing out on some great discoveries when we did wander off the route we were meant to take.
We found that the best approach was to establish a general idea of where we were going and check our progress every so often.
One thing I have found when looking back at my trip is how it reflected my Myers-Briggs personality type. I'm pretty much an ISTJ - I like things organised - so initially Venice was a shock to the system.
As are software projects utilising the CHAOS methodology, where there is no order - but knowing how I react to them helps me cope and get to work on making them less chaotic.
Wednesday, 15 October 2008
LinkedIn Twits
I help Rosie Sherry run the Software Testing Club, which also has an associated LinkedIn Group. I have manager powers over there and so can Approve or Decline requests to join.
Which led me to finding the bug shown below - there were 10 requests to join; I approved 4, which should leave 6.
Not 10
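Purely as an illustration ( I have no idea how LinkedIn actually implements this ), here is a sketch of the kind of unit test that would catch a counter like that; the class and names are invented for the example.

```python
import unittest


class PendingRequests:
    """Hypothetical stand-in for a group's join-request queue."""

    def __init__(self, requests):
        self._requests = list(requests)

    def approve(self, member):
        self._requests.remove(member)

    @property
    def count(self):
        # A buggy version might cache the original length instead of
        # recounting - which is what the screenshot suggested happened.
        return len(self._requests)


class PendingRequestsTest(unittest.TestCase):
    def test_count_drops_after_approvals(self):
        queue = PendingRequests([f"member{i}" for i in range(10)])
        for member in ["member0", "member1", "member2", "member3"]:
            queue.approve(member)
        self.assertEqual(queue.count, 6)  # 10 requests, 4 approved


if __name__ == "__main__":
    unittest.main()
```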
I then found a bug using Twitter: hitting the Update button was taking me to the page of a user called "update", as shown below. Sadly I didn't have the time to investigate it in any depth, but I was having one of those days where everything I touched broke.
Nice to know I haven't lost my touch
Monday, 13 October 2008
50 years old test
Great blog post by Casey Charlton about TDD and unit tests - Testing Is Not Technically Hard, It Is Hard Because It Requires Clear Thought and Understanding - which led to a long discussion on the TDD group about how to teach people to write a good test. Although the discussion was centred around TDD and unit tests, it was a common testing discussion: very easy to write tests, not so easy to write good tests.
It is more and more common to read blogs from developers talking about testing and so I'll try and make it along to the next DeveloperDeveloperDeveloper! Day so I can listen to talks from people like Ben Hall talking about Microsoft Pex - The future of unit testing?
Sadly I was never exposed to any of this in my development days and took the classic code-release-fix approach and was always surprised when bugs were found in my code.
The situation outlined in this blog post - Are your applications ‘legacy code’ before they even hit production? - was all too familiar.
But if you don’t understand what was wrong with the last project you worked on, you’ll be doomed to repeat all of its mistakes. Even with the best of intentions, new legacy code is written, and without knowing it, you’ve created another maintenance nightmare just like the one before it.
Though after reading Jerry Weinberg's Perfect Software and Other Illusions About Testing, maybe I was in the era when devs didn't do testing. Jerry says:
That's why I'm a strong advocate of the test-first philosophy, whereby developers write their tests to include expected results before they write a line of code. It's what we did fifty years ago, but the practice was gradually lost when industry trends separated testing from development.
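As a rough sketch of that test-first rhythm ( the discount function is invented for the example, not anything from Jerry's book ): the expected results are written down as a test before the production code exists, and the test fails until the code is written.

```python
import unittest


# Step 1: write the test, with its expected results, before any
# production code exists. Run it now and it fails - which is the point.
class DiscountTest(unittest.TestCase):
    def test_ten_percent_off_orders_over_100(self):
        self.assertEqual(discounted_price(200.0), 180.0)

    def test_no_discount_on_small_orders(self):
        self.assertEqual(discounted_price(50.0), 50.0)


# Step 2: write just enough code to make those expected results pass.
def discounted_price(total: float) -> float:
    return total * 0.9 if total > 100 else total


if __name__ == "__main__":
    unittest.main()
```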
Wednesday, 8 October 2008
The Grail Test
One small comment from yours truly about the Holy Grail seems to have led to a few discussions and at least one blog post
It was a phrase I happened to come across during some surfing; my first reaction was to dismiss it as marketing bumf, but then it led me to think about whether there is a Holy Grail of testing.
A quick Google search reveals lots of discussions about the Holy Grail of S/W Development, ranging across real-time feedback, simplicity, Software Factories, Software Reuse, “getting to zero” defects and security vulnerabilities, and the ultimate - a level 5 score in SEI evaluations.
So if the software developers can't agree on what their Holy Grail is, how can we testers test to see if they have it?
Thursday, 2 October 2008
Quality 2.0
"What is Quality?" is yet another of those testing cliches that all experienced testers have argued about ( or read arguments on ) so I am not about to offer up my definition.
I was, however, reading a blog post titled In A Web 2.0 World, Quality Is Irrelevant
The author was not writing about Twitter uptime or Facebook apps crashing; he was writing about traditional journalists adapting to the new Web 2.0 world with a different definition of quality:
Still, I'm not in full rosy concurrence with the idea that we should kick quality completely to the curb. For one, it's not that quality doesn't matter -- it's that the definition of what constitutes quality is changing. The old idea that quality is defined by editing an article six ways from Sunday so that it's denatured of all passion and advocacy, and so that that it has every freakin' semicolon and middle initial in the correct place -- that's what's dead
Testers, too, can struggle with different definitions of quality.
A release with a known defect can be the equivalent of a missing full stop in a story - hard to let go and say that it doesn't matter, no matter how many times you repeat that "testers are not the gatekeepers".