- I was asked the question below (lightly edited for anonymity, clarity, and length) today and found it intriguing, so I thought I'd post it here.
- The Question:
This is an attempt to understand how (and why) users, practitioners, and professionals perceive the difference between a good software product and a bad one, specifically among released software products.
- My Response:
- I can think of several ways to address that question, but the two that jump out at me are:
- How does an end user differentiate between a good and a bad software product?
- How does a stakeholder differentiate between a good and a bad software product?
- I believe that an end user primarily makes their assessment based on their ability to use the software product to accomplish the task they wish to accomplish with it. This in itself is interesting, because from an end user's perspective, the software product that does the best job of helping them accomplish their task may be priced too highly for them to afford, or be too hard to learn, or not work on their system -- thus making it "bad" in their eyes.
- On the other hand, a stakeholder may view a software product as good if it generates sufficient revenue or publicity, even if that software product is generally considered to be of "low quality".
- Let me use a software product that is no longer being actively sold (it may not even be available anymore) as an example. Before IBM purchased Rational, Rational had a load generation tool that (at least at one point in time) was called Performance Studio. Performance Studio was (probably still is) my favorite load generator of all time. It's the tool I learned on. I knew how to make that tool do *almost* anything one could want from a load generator. I had (still have, actually) the most amazing peer support group around that tool I could imagine.
- That said, over years and years of using that tool, I came to find out the following:
- It was prohibitively expensive.
- It was *very* hard for most people to learn.
- If you didn't reboot the machine after every test, you were likely to lose all of the data from the next test, because the software would crash *after* the conclusion of that 2nd test while writing the results files to disk.
- I had built up a massive library of extensions for the tool, extensions that newer tools didn't need because they already handled the situations for which my support network and I had written them.
- Technical support was TERRIBLE.
- The product frequently did not work the way the documentation said it did.
- I could go on, but I'll stop there. The point is, even with all of these obvious "bad" traits, I *loved* that software product. For me, it was (and still would be) a *good* software product. For much of the rest of the industry that needed the functionality the product provided, it was considered *bad*.
- I guess, all that is to say:
- "Good is in the eyes of the stakeholder."
Chief Technologist, PerfTestPlus, Inc.
Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing
"If you can see it in your mind...
you will find it in your life."