Tuesday, 20 April 2010

Quis custodiet ipsos custodes?

"Quis custodiet ipsos custodes?" or "Who will guard the guardians?" or in my case as a software tester - who tests the testers?

As a system tester I am the final link between software being 'in development' and it being live for real users to use. We are often seen as the guardians of the users, ensuring that before it is made generally available the software is as bug free as we can possibly make it.

But whose job is it to test the system testers? Who are we accountable to? Or, to paraphrase, who "invites me for tea, biccies and a little chat" when things go wrong?

During my day job at IBM I provide the development organisation with high-level plans of what testing we plan to inflict upon their software. This plan is reviewed both by them and by my test colleagues. During test execution we report how well we are doing against the plan.

So our intentions are reviewed and approved by the development stakeholders. However, testing is not an exhaustive activity. We can never reach 100% completion and state that the product is bug-free!

The people who get hit are the customers of the product. Using some configuration, use case, or pattern that we didn't think of first, a customer finds a bug. Personally, I hate it when a bug is found in a product that I was testing. I will ask myself: why didn't I find that? Why didn't I try that configuration?

Apart from the moral pain it may cause the tester, for the customer it may mean delays to their work schedule as they work around the problem or wait for a fix. But there is another organisation that will also be impacted.

Most (if not all) software houses will have a support organisation whose job it is to fix bugs that customers have found in the product. They bear the brunt of a customer complaint when the software does not behave as expected. This group of people, more than test or development, understands what customers are doing when the software fails. They understand the usage patterns a customer is following when they notice a bug in the software.

It is the service team that 'pays the price', along with the customer, when things go wrong. Perhaps this position means they should be allowed to guard the guardians: to scrutinise test plans closely and ensure that the tester has thought of the common usage patterns a customer may follow.

I'm not suggesting that this approach will get us to 100% bug-free software, but perhaps it will mean that customers do not hit bugs when they are doing something that 'is normal for them', and that has to be a good thing.
