Scott Barber's Top 10 Things About Testing That Should Die:
10. Egocentricity
Face reality, testers: neither the product nor the business revolves around you. Think about it: no business, no product, no developers => no need for testers. In fact, if developers wrote perfect code, there'd be no need for you at all. You are a service provider, and your primary clients are the managers, developers, and/or executives. Your secondary clients are product users and investors. So stop whining and stomping your feet when your clients don't make the decisions you'd like with the information you provide. It's not your call. If you want it to be your call, get on track to become a project manager, product manager, or executive; otherwise, get right with the fact that you provide a service (hopefully a valuable one) and get back to providing it.
9. Unreasonable Demands
Testers, none of the following people work for you: managers, executives, or the product team. You provide a service to them (see point 10 above). Stop trying to tell them what they must do before you're willing to start doing what you were hired to do. It is completely unreasonable to demand "complete and accurate requirements or specifications." Seriously, it's not going to happen -- and even when someone claims the requirements are complete, how many times have you seen a software product held back from release because a single, minor "requirement" wasn't met? It is equally unreasonable to demand stable releases before you start testing. Honestly, what the team needs is for you to help them figure out what needs to be fixed, not for you to tell them to fix it before you're even willing to try to help. It's OK to let folks know what you can't reasonably do without X, but after that it's time to get to work doing whatever you can to help the project. If that's testing, great. If that's creating end-user documentation, fine. If that's taking a break and doing your quarterly expense report because you're blocked, that's fine too.
8. Process Weenie-ness
Even though many testers carry the label of QA, very, very few are actually responsible and accountable for enforcing processes across the team. The reality is that processes were put in place to manage or fix something that wasn't working, not to block progress. Just do what makes sense to make the project as successful as possible, as quickly, cheaply, and painlessly as you can. For example:
- If your team likes you to report bugs verbally as you find them, and they get fixed directly, is there really a reason to write up some complex report in the defect tracking system? Defect tracking systems are just customized workflow engines anyway. If the work can be completed before you can even get it entered into the system, there's no work to track. If you mine the defect tracker for patterns, then just enter the information needed for mining -- which I'm willing to bet doesn't include detailed replication steps (see the sketch after this list).
- If the "process" says that you're supposed to write detailed, step-by-step test cases that a brain-damaged zombie could follow before you ever even see a prototype or a mock-up, and you know they won't get used or updated, don't waste your time. Instead, find out what problem that process was put in place to solve and help the appropriate folks find a better solution.
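To make the "mine the tracker" point concrete, here's a minimal sketch in Python, assuming a hypothetical CSV export (defects.csv) with a "component" column -- the names are illustrative, not any real tracker's schema. The point is that pattern mining needs only a few coarse fields, not detailed replication steps:

    # Tally defects by component to see where bugs cluster.
    # File name and "component" field are hypothetical stand-ins for
    # whatever your defect tracker actually exports.
    import csv
    from collections import Counter

    with open("defects.csv", newline="") as f:
        counts = Counter(row["component"] for row in csv.DictReader(f))

    for component, n in counts.most_common():
        print(f"{component}: {n}")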
7. Isolation
Unless you are doing IV & V (if you don't know what I mean, you aren't -- trust me), I simply cannot understand the value proposition of the big, opaque, soundproof wall I frequently see between developers and testers. I can't even come up with a realistic risk it might mitigate that couldn't be mitigated better some other way without sacrificing the value of collaboration. The bottom line is that we are all paid by the same company to help it build and deliver a viable product as quickly and cheaply as possible, with a degree of quality that is acceptable to the buyer/end-user at the price the company plans to charge. Until someone can explain to me how to do that better with isolated rather than collaborative teams, I can only presume that the isolation is either the result of some turf war at the management (or higher) level, a naive historical artifact, or unadulterated stupidity.
*Note* Even if you are doing IV & V, or there is some valid reason to have a group doing isolated, independent, "unbiased by collaboration or inside knowledge" testing, that's no excuse for not having another group of testers collaborating with the rest of the team.
6. Excessive Faith in "Dumb" Automation and Checking
Sometimes it is important to check things. Automation can be a fast and useful way to check things. Automation can be an amazing time-saving device for brain-engaged testers. And, of course, automation is absolutely critical to some specialized types of testing, like load and stress testing. That said, unless your product is so simple that checks are sufficient, or your automation includes some super-advanced artificial intelligence that can reliably mimic human judgement, don't kid yourself into believing that just because a bunch of automated checks turn the magic cell green, the product is flawless and needs no further inspection, validation, or testing by a brain-engaged human being. Do yourself, your project teammates, and your company a favor: don't put excessive faith in your automation, and do your best not to let anyone else do it either.
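As a minimal sketch of what a "dumb" check does and doesn't tell you (the URL is just a placeholder): this check turns green whenever the server answers 200, and says nothing about whether a human would judge the page correct, usable, or even the right product.

    # A "dumb" automated check: green means only "the server said 200".
    import urllib.request

    def test_homepage_is_up():
        resp = urllib.request.urlopen("https://example.com/")
        # Passes even if the body is blank, a stack trace, or the wrong
        # content entirely -- judging that takes a brain-engaged human.
        assert resp.status == 200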
5. Misguided Metrics
Counting test cases as if they were standardized units of measure, as an indication of coverage. Reporting the percentage of tests passed vs. failed as if all tests were both binary and of equal value, as an indicator of quality. Tracking the number of bugs found over time as if it indicated the number of defects remaining in the system, rather than that the test team has run out of new test ideas. I could go on, but I won't (this post is excessively long already). These are all misguided metrics with far more potential to cause harm than to help folks make good decisions.
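Here's a toy illustration of the pass-percentage problem, with made-up numbers: two builds report the identical pass rate while differing wildly in shippability, because the metric treats every test as binary and of equal value.

    # Two builds, identical "pass rates", very different quality.
    builds = {
        "A": {"passed": 95, "failures": ["UI copy typo"] * 5},
        "B": {"passed": 95, "failures": ["checkout crash"] * 5},
    }

    for name, b in builds.items():
        rate = b["passed"] / (b["passed"] + len(b["failures"]))
        print(f"Build {name}: {rate:.0%} passed, failing on: {set(b['failures'])}")
    # Both report "95% passed" -- only one is anywhere near shippable.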
If you don't know what questions your metrics are being used to answer, or you have doubts about whether or not the metrics that are getting reported indicate what people think they do, pause for a moment and ask yourself the following questions:
- What are the questions the recipients of these metrics are trying to answer?
- Would you use these metrics to answer those questions?
- If not, why not?
- What metrics would you use?
- Can you provide those metrics?
- If not, what metrics can you provide that you'd find more useful?
4. Testing Phase (aka Belief That Only Testers Can Test)
I can't figure this one out at all. Breaking testing out into its own phase leads to isolation (see point 7 above) and implies that testing should be reserved for that phase. How does that work, exactly? Every time analysts ask questions, that's testing. Every time developers compile or execute code to see if it's working right, that's testing. Every member of the team can, and probably does, test -- and anyone who cares about product and project success wants them to. All I can guess is that folks advocating for testing phases either have some special definition of testing that makes the idea of a test phase seem, at least in a particular context, to be not stupid, or are trying to leverage some kind of power or control for political reasons. In either case, I say "ick".
What is the perceived value of a testing phase anyway? An instance of the application that isn't changing during testing? You don't need a phase for that! Just spin up a virtual environment somewhere, promote the build you want to test, and test it until you've found enough bugs that further testing of that build is less valuable than moving to a newer one. Don't tell me it's too expensive either... even 3rd-party solutions offer a pretty reasonable virtual server in the cloud for $0.08/hr. For 10 testers each testing 2,000 hrs per year, that's a whopping $1,600 total for each of them to have their very own test environment! That's a small price to pay for not having to sit around waiting for the next test phase after you find a showstopper!
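For the skeptical, the back-of-the-envelope math, using the figures above:

    # Annual cost of a dedicated cloud test environment per tester.
    rate_per_hour = 0.08      # USD per server-hour (3rd-party cloud pricing)
    testers = 10
    hours_per_tester = 2000   # testing hours per tester per year

    total = rate_per_hour * testers * hours_per_tester
    print(f"${total:,.0f}/year")  # -> $1,600/year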
And what do we lose by phasing testing (aside from collaboration)? We ensure the developer has moved on to something else by the time we identify things s/he wants or needs to fix, by which point s/he has completely lost the thread of what s/he was thinking when s/he coded that section in the first place. So s/he ends up injecting more bugs than if we had found them sooner, or if we had paired with him/her (gasp) and done at least some testing before s/he merged the code into the main branch.
Bottom line: stop being selfish about testing. You want everyone doing as much testing as is valuable and reasonable. You want to be helping them make their testing better. Think about it: the more testing other folks do, and the better they do it, the more chances you'll have to do the really cool and valuable testing that you want to do but never seem to have time for!
*Note* I'm not advocating against legal or regulatory mandated "final validation" phases. Those are simply beyond the scope of this commentary.
3. Misleading and Unbacked Certifications
As my father, an 8th-grade metal shop teacher turned middle school guidance counselor (now retired), told me the day he received a nifty plaque in the mail that said something like "Supervisory Certificate, K-12":
"Cool, according to this, I am now certified to teach any class, at any grade level, in any public school in the state. Of course, I'm far from qualified to teach most of them but with this I'm allowed to anyway"Of course this became a running joke between he and I which got shortened over time to simply "Certified != Qualified".
As far as I am concerned, any so-called "certification" issued by an organization not subject to regulatory oversight that either makes no claims about what you are qualified to do as a holder of the certification, or makes claims but offers no guarantees and takes no legal or fiduciary responsibility if the "certified" individual fails to live up to those claims, is not a certification. It may be a certificate of course completion, hoop jumping, or fee paying, but it's no certification. Testerland is full of these. I've written on this topic before, several times, so I'll step down from this soapbox and move on to the next one.
2. Ignorance of Business Risk and Value
This is really a corollary to point 10 above. Testers seem to have a knack for deciding what they think matters without any direct knowledge of what the business and/or product owners (e.g., the ones whose budget your salary comes from) really care about.
- Maybe it's time to market. In the software business, you often need to be either first to market or best in class to maximize the business value of a product. Trying to do both at the same time will simply result in doing neither effectively.
- Maybe it's cost minimization. Oftentimes, there is only so much money available to get a product to market, and not getting the product to market means no more money.
- Maybe it's getting good press or avoiding bad press. A bug that causes bad press can be far more expensive than a bug that generates support calls: bad press keeps people from buying the product in the first place, while support calls merely reduce the profit margin on products already sold.
- Maybe it's something that appears entirely irrational to you.
If this concept seems strange to you, or you don't feel you understand how focusing on identifying business risks, assessing business risk controls and mitigation measures, and testing for business value would change what you are doing, I recommend getting some training in business and/or product management. I assure you, doing so will make many things that currently confuse or frustrate you about your testing job much clearer.
1. The Under-Informed Leading the Under-Trained to do the Irrelevant
This is how I see the current state of the practice of software testing -- not the leaders, and not the broken, but the "norm". Admittedly, I expect that it's mostly the leaders and the soon-to-be leaders who have read this far, so don't assume I'm talking about you. Having said that, I think it's very rare indeed that this statement isn't at least partly true. If you're not sure what I mean by it, let me explain:
- Managers and executives of companies that employ testers regularly direct testers to do things they don't really understand but that are rumored to be "best practices" -- and then, of course, blame the testers when those "best practices" don't provide sufficient value.
- There is no "basic training" for testers, and there is no "manager basic training" that covers testing and test management as it relates to value to the business.
Concluding Statements
The act of testing will never be dead as long as code is being written -- testing is simply part of how software is developed. Who conducts that testing, when, and how is what is really being discussed when folks talk about whether or not "test is dead." I don't believe that testers who provide understandable value to their companies will ever have to worry about their jobs going away. I do believe, however, that testers who aren't able to distinguish their value proposition to the business from the value proposition of other people or groups that can also conduct some degree of testing, or who aren't able to deliver on that value proposition, could well find themselves getting pruned from their company's payroll like dead wood.
--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me
Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing
"If you can see it in your mind...
you will find it in your life."
11 comments:
There are a lot of good points here (at least 10 :). But I do have a comment on 4. Testing Phase. I agree with much of what you say, but a problem we have had to deal with is when a tester starts testing a feature and reports a bunch of issues, only to have the developer say "yeah, I'm not quite done with that feature." Worst case, the developer declares these bug reports invalid and the tester has to redo everything they already did. Hence management requires a "release to test" flag, sort of like a test phase. This can be improved with proper communication. If the developer says, "I'm done with the import, but still have to work on the reporting," for example, then the tester can get a legitimate start. Looser "rules" require better communication.
Thanks Sean!
I have encountered this same issue -- which is actually why I advocate against a test phase. However, you are correct that simply eliminating the test phase does not solve this problem. Close collaboration (or, as you put it, "proper communication") is necessary to avoid this situation.
I'd argue, based on my experience, that *if* the developers and testers are working closely together, this wouldn't occur, because:
- The dev would have testers assessing what they've done *prior* to merging it into the main source branch.
- The tester would be doing their "initial" round of testing paired w/ the dev (or at least in very close collaboration), so that the issues are "reported" immediately and verbally -- and only logged in the tracking system if those issues will not be dealt with immediately.
Again, this concept demands a mature degree of teamwork where members of the team do not need micromanagement (which I think ought to be a goal of all teams, but I recognize that not all teams will be successful).
I think that might be a "unifying point" here. I get that not every team will start, or even successfully mature, to be able to implement these concepts effectively. But that should, in no way, be taken as an excuse for the industry to lower the bar & accept mediocrity (or micromanagement) as the desired end-state of their team's evolution.
Via Twitter, a comment:
It might be interesting to describe the anti-pattern.
I wonder what I would see, hear, feel when I visited for a week - an organization with all ten behaviors?
Fantastic post, Scott. I agree with the comments on Twitter that this could easily be a great conference session or a really great keynote.
I've struggled with almost all of these elements over the years, but 4 and 7 really seem to follow wherever I may travel. They're first on my hit list at any place I find myself working :)
There is definitely something between elements 10 and 9 which strikes a bit of a chord.
"You are a service provider and your primary clients are the managers" ...
I've worked both as a consultant to a project and as a permie within a project group. It's certainly a lot easier to be dispassionate as a consultant.
Especially when projects are being steered through technical decisions by project members, like marketing, who are the least technically skilled to make those decisions, it's hard to "sit back".
As a professional, you are supposed to use your experience to try to keep a project from steering itself onto the rocks. That's not egocentric; it's an actual commitment to the customer.
Sometimes your job in supporting the business owner is to protect the business owner from the business owner -- especially when they are given to making too many changes, arbitrary decisions (with big impacts on the project), etc.
I enjoyed this article. Thanks!
TestSheep,
Yes, it is your job to make the right people aware of things you feel are important (as well as things you feel *should* be important to them). The difference between that and egocentricity is the understanding that it is *their* job, not yours, to make the decision about what to do with that information.
They have (or at least *should* have) information that you don't, which means that sometimes they will make perfectly correct and appropriate decisions that seem completely nonsensical and idiotic based on the information you have.
The best we can do is ensure that folks have and understand the best information available upon which to base their decisions and then accept the decisions they make.
If they regularly make decisions that are unacceptable by your standards, and those decisions are supported by their peers and superiors, that is a good indicator that you are not employed by a company that is a good fit for you culturally & that it may be time to look for a new employer. "Whining and gnashing of teeth" will do nothing other than lead to folks listening to you less, not more... at least in my experience.
Liked the last one (point 1) :)
Love this post.
You mentioned that testers can communicate bugs verbally and the dev team can fix them quickly in the testing environment.
But what if developers keep making changes in the testing environment? Changes made in a hurry might break other working areas.
Good points, and agreed, Scott, but it kind of concerns me that a post like this is needed in the first place.
I guess it's good to lay them out.
Is it bad that I laughed and yelled "so true..." at the screen as I read this? Thank you! :)