Showing posts with label Opinion.

Tuesday, April 3, 2012

Agile Sprint Sanctity Valued at Over $13M?!?

During STPCon last week (which, BTW, was fabulous, but more on that in another post, 'cause I've got to get this off my chest), I was a panelist for The Hard Stuff: Questions About Agile. During the course of the discussion, someone asked a question that I heard as the following:
"... but what should I do about our sprints getting messed up when [executive] comes in and tells us to stop what we're doing and add [feature X] before the end of the following week so s/he can finalize the $13 Million deal with [new client Y, but only if the feature X is implemented by then]..."

Tuesday, March 27, 2012

Software Quality Assurance Engineer... Happiest job?!?

If you haven't seen this article, you'll want to read it:

http://finance.yahoo.com/blogs/secrets-toyour-success/happiest-jobs-america-173044519.html

About halfway down, it says:
The happiest job of all isn't kindergarten teacher or dentist. It's software quality assurance engineer. Professionals with this job title are typically involved in the entire software development process to ensure the quality of the final product. This can include processes such as requirements gathering and documentation, source code control, code review, change management, configuration management, release management, and the actual testing of the software, explains Matt Miller, chief technology officer at CareerBliss.
With an index score of 4.24, software quality assurance engineers said they are more than satisfied with the people they work with and the company they work for. They're also fairly content with their daily tasks and bosses.

These professionals "typically make between $85,000 and $100,000 a year in salary and are the gatekeepers for releasing high quality software products," Miller says. Organizations generally will not allow software to be released until it has been fully tested and approved by their software quality assurance group, he adds.
So I have a bunch of comments:
  1. I guess I don't know what a "Software Quality Assurance Engineer" is -- or this Matt Miller guy doesn't. 
  2. *If* anyone "ensures the quality of the final product" in software, it's a PM or higher.
  3. I don't think I've met anyone with that title who smiled and told me how much they love their job.
  4. I'm certain I've never met someone with that title that makes that much money. 
  5. I think I'd rather shoot myself in the head than have those tasks... even at such a generous salary.
I could go on, but I'll stop. I want to see the survey questions, & I want to know the demographics of the people surveyed, & I want to see the titles actually reported by respondents that got rolled up under "Software Quality Assurance Engineer." I'd also like to have a word or 73 with this Matt Miller dude... CTO to CTO, 'cause let's face it, we all know that testers wouldn't be caught dead bragging about how *happy* their job makes them, or how *satisfying* it is. Testers tend to love the act of testing, but not their jobs, or their bosses, or their companies -- and if this ain't referring to testers, I wanna know why these process people are apparently so happy about being forced to do the actual testing on top of their "real" job.


Feel free to share your thoughts, but this strikes me as "not *even* wrong" to a degree that I can't seem to even reverse-engineer a single measurement dysfunction that could account for all the ways in which this article strikes me as "just not right".

 
--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
Director, Computer Measurement Group
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Friday, March 23, 2012

Trust is a Cornerstone to Delivering Business Value

In my last post about Metrics I introduced the notion of trust as it relates to Business Value by stating:
"Failing to trust 'the Business' does NOT add Business Value"
I'd like to generalize that statement further to say "A lack of trust that individuals or groups involved in the project are primarily focused on helping the business succeed undermines business value".

Now, I can only imagine the reaction many testers are having while reading this. For instance "If I trust the developer when they say 'This is fine, you don't need to test it', we'll have major bugs make it to production." And anyone thinking that would be absolutely right -- because that is not the *kind* of trust I'm talking about.

When I say trust, I don't mean "Trust others to tell you how to do your job" or "Trust others to do what you believe is correct/best" or even "Trust others to be successful in accomplishing what they have been assigned to accomplish on time, on mission, on quality, and on budget."

When I say trust, I mean "Trust others to approach their role with integrity" and "Trust that others are doing the best they can to make the decisions or take the actions appropriate to their role and responsibilities based on the information they have" and "Trust that if you haven't been assigned to do or to be the decision maker about something, that task or decision is better handled by someone else -- whether or not *you* have the information necessary to make sense out of why."

Friday, March 16, 2012

Business Value of Testing: Find Bugs ≠ Mission

My introduction to software testing was as a performance tester. Before the completion of my first project, I had a firm understanding of the primary mission of performance testing. That understanding has changed very little to this day, though I have improved how I communicate that mission. Currently, I phrase the (generic) primary mission of performance testing as:
"To find how many, how fast, how much and for how long; to assist business w/ related technical and/or business decisions and/or; to support related improvement activities as prioritized by the business."
That certainly doesn't mean that all performance testing efforts include every aspect of that mission, but I'm hard-pressed to imagine something that I'd call performance testing that includes none of them. It is also true that I have experienced all manner of secondary missions as part of a performance testing effort (missions related to fail-over and denial-of-service attack security, for example). Those combinations, permutations, variations and additions are all where context comes into play.

However, when I was first asked to help out with some functional testing, I quickly realized that I didn't really know what was expected of me, so I asked. As you might imagine, folks looked at me like I'd grown a second head... a green one... with scales and horns. After about the 4th time I asked the question, got a funny look, watched as it became clear I was serious, and received some variant of the answer:
"Find bugs"

Monday, March 12, 2012

Processing may take up to 60 seconds?!?

Seriously?!? This was a simple credit card transaction for a storage unit! Admittedly, it only took about 20 seconds, but that was still long enough for me to push the button, read the text, and exclaim "You've *GOT* to be *KIDDING* me!!"; for my 12 y/o son to ask "What?"; for me to respond "60 seconds to process a payment online"; for him to reply "That's stupid"; and for me to launch the snipping tool & grab a capture before it processed my payment.

Grrr.... Hey, I've got an idea: why don't they give me a unit free for the next 10 years in exchange for 25 hrs of performance testing/tuning (which would *still* be less than my typical bill rate) so that other folks don't have to deal with this crap?

Friday, March 9, 2012

Context-Driven Testing Crossroads: Addendum

I guess I wasn't as done talking about this as I thought. Earlier today, I posted the following comment (except with a few extra typos that I chose to fix below) on Tim Western's blog in response to his post Is the Context Driven School of Testing - Dead?:
"A point that I think many miss is that this is not just about individual testers.

50 years ago (more or less) testING began fighting a rather arduous battle to establish an identity separate from developMENT. This, eventually, led to testERS establishing an identity separate from developERS.

Thursday, March 1, 2012

With the Context-Driven School "closed" what's next?

This is Part II in a series of entries related to the following quote from the "about page" of context-driven-testing.com hosted by Cem Kaner:

"...However, over the past 11 years, the founders have gone our separate ways. We have developed distinctly different visions. If there ever was one context-driven school, there is not one now..."
If you haven't done so already, I recommend starting with Part I: Is Testing Dead? Dunno, but the Context-Driven School Is


Much like when one completes an educational program at one institution and ponders whether or not to enroll in another program (and if so, which one), or to enter the workforce and continue their learning along the professional development or self-education path, I think it's fair for those who have come to self-identify as members of the Context-Driven School to be asking themselves similar questions.

And much like completing an educational program does not equate to losing the lessons learned (as opposed to the lessons taught) in the program, the Context-Driven Principles and the lessons many of us have learned by studying in (or, for that matter, rebelling against) the Context-Driven School remain despite Cem's announcement that (in my words) the school is now closed.

Tuesday, February 28, 2012

Is Testing Dead? Dunno, but the Context-Driven School Is

Well, I'm sure this is a bit of a shocker for many of you, but the following quote comes from the "about page" of context-driven-testing.com hosted by Cem Kaner:

"...However, over the past 11 years, the founders have gone our separate ways. We have developed distinctly different visions. If there ever was one context-driven school, there is not one now..."
This is Part I of a series of entries on this topic. Links to subsequent parts will be added to the bottom of this entry as they are posted.


Of course, this doesn't negate or erase the Context-Driven Principles, and Cem has committed to keeping the original content on the landing page of the revised site:
"...When you land on this site, you see the context-driven-testing.com landing page (the Principles) as it was when we originally published it. I’ll keep it that way (with the same set of Principles), because several people have found it useful..."
To my way of thinking, the *most* important point made by Cem on the About Page is the following:
..."This notion of evolution comes with a built-in assumption: If my thinking will evolve to something else in the future, it must be wrong today. Progress on my path to better understanding and practice of testing (and of anything else that I’m serious about) includes discovering what needs to be changed in my thinking, and changing it.
This is an important aspect of science. We don’t run experiments to confirm what we already know. We run experiments to prove that what we think we already know is wrong. And to help us develop something better..."
This is the point I'd like folks to focus on.

Sunday, December 25, 2011

Curse of the Performance Tester?

Seriously?!? After wrapping gifts until nearly 5am (I was behind, even by my standards, due mostly to work travel, client commitments & preparing to close the corporate books for 2011) and getting up before 8am to celebrate Christmas with my boys, I finally stole a few minutes, when I noticed they'd both fallen asleep on the couch, to play with *my* new toy (i.e. install Skyrim), only to be foiled by...

Monday, December 5, 2011

10 Must-Know Tips for Performance Test Automation

More than other automation, bad performance test automation leads to:
  • Undetectably incorrect results
  • "Good" release decisions, based on bad data
  • Surprising, catastrophic failures in production
  • Incorrect hardware purchases
  • Extended down-time
  • Significant media coverage and brand erosion
More than other automation, performance test automation demands:
  • Clear objectives (not pass/fail requirements)
  • Valid application usage models (see the sketch after this list)
  • Detailed knowledge of the system and the business
  • External test monitoring
  • Cross-team collaboration
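To make "valid application usage models" concrete, here is a minimal sketch in Python of a weighted transaction mix. The transaction names and percentages are illustrative assumptions only; in a real effort they would be derived from production logs, analytics, or business forecasts, not invented.

```python
# Hypothetical usage model: share of virtual-user activity per transaction.
# These names and weights are illustrative assumptions, not measured values.
import random
from collections import Counter

usage_model = {
    "browse_catalog": 50,
    "search": 25,
    "add_to_cart": 15,
    "checkout": 10,
}

def next_transaction() -> str:
    """Pick a virtual user's next activity in proportion to the model weights."""
    names = list(usage_model)
    weights = list(usage_model.values())
    return random.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Sanity check: the simulated mix should roughly match the model.
    print(Counter(next_transaction() for _ in range(10_000)))
```

Encoding the model explicitly like this means the business can review the mix and compare it against production data, instead of the workload assumptions hiding inside script logic.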
Unfortunately, bad performance test automation is:
  • Very easy to create,
  • Difficult to detect, and
  • More difficult to correct.
The following 10 tips, based on my own experiences, will help you avoid creating bad performance test automation in the first place.

Tip #10: Data Design
  • *Lots* of test data is essential (at least 3 sets per user to be simulated – 10 is not uncommon); a minimal data-generation sketch follows this list
  • Test data should be unique and minimally overlapping (updating the same row in a database 1,000 times has a different performance profile than updating 1,000 different rows)
  • Consider changed/consumed data (without careful planning, a search will return different results and an item to be purchased may be out of stock)
  • Don’t share your data environment (see above)
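To make the data-design tip concrete, here is a minimal sketch in Python that generates lots of unique, minimally overlapping test data for a data-driven load tool. The file name, fields, and counts are illustrative assumptions, not any particular tool's requirements.

```python
# Generate unique, minimally overlapping test data for simulated users.
# All names, fields, and counts here are illustrative assumptions.
import csv
import uuid

VIRTUAL_USERS = 500   # users to be simulated
SETS_PER_USER = 3     # at least 3 data sets per user (10 is not uncommon)

def make_record(user_id: int, set_id: int) -> dict:
    """Build one data set; no two virtual users ever share a row."""
    unique = uuid.uuid4().hex[:8]
    return {
        "username": f"vu{user_id:04d}_{set_id}",
        "email": f"vu{user_id:04d}.{set_id}.{unique}@example.test",
        "account_id": str(user_id * SETS_PER_USER + set_id),
    }

with open("perf_test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["username", "email", "account_id"])
    writer.writeheader()
    for user in range(VIRTUAL_USERS):
        for data_set in range(SETS_PER_USER):
            writer.writerow(make_record(user, data_set))
```

Generating distinct rows up front is what keeps the database access pattern realistic: 1,500 different rows behave very differently under load than the same row hit 1,500 times.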

Tuesday, November 8, 2011

On the Alleged Death of Testing

Out of respect for your time, I'll give you the bottom line up front, as a simulated interview that I privately hoped for but that never came. After the mock interview is a supporting narrative for those of you more interested in my thinking on the matter.

Q: There's been a lot of talk recently about testing being dead, so my first question is: is testing dead?
A: No.

Q: Some of those talking about the alleged death of testing are saying that it's not that testing as a whole is dead, but that testing as it is commonly understood today is dead. Is it?
A: No.

Q: Ok, so is testing as it is commonly understood today dying?
A: Not that I can see.

Q: Then why all the talk about testing "as we know it" being dead?
A: IMHO? Wishful thinking.

Thursday, October 27, 2011

WOPR 17, my takeaways


The Workshop On Performance and Reliability (WOPR) 17 was held Oct 20-22, 2011 on the theme of “Finding Bottlenecks”. This was a historic event in the sense that no other peer workshop inspired by LAWST has convened this many times. Of course, as a co-founder of WOPR, I’m (somewhat unreasonably) proud of this accomplishment. But the fact that over the last 9 years so many folks have been so inspired by the community and value of WOPR that they have been willing to volunteer their time to plan and organize these events, that their companies have been willing to donate meeting space (and often food & goodies), and that participants have frequently been willing to pay their own way (sometimes taking vacation time) to attend makes 17 events, one every 6 months since WOPR 1, a significant achievement – whether or not my “founder’s pride” is justified.  :)

As is the WOPR tradition, 20-25 folks, selected or invited by the “content owner” (a.k.a. the person or team who chose the theme to be explored this time), brought their personal experiences related to “Finding Bottlenecks” to share and explore with one another. Also as is the tradition, certain patterns and commonalities emerged as these experiences were described and discussed. Everyone has their own take, there are no official findings, and I’m not even going to pretend that I can attribute all the contributing experiences and/or conversations to my takeaways below.

  • Finding bottlenecks can be technically challenging; examples include:
    • Analyzing the test & the data is far from straightforward
    • The “most useful” tools to narrow down the bottleneck may not be available – forcing us to be technically “creative” to work around those roadblocks
  • Finding bottlenecks can be *very* socio-politically challenging; examples include:
    • Lack of trust (e.g. “That’s not a bottleneck, that’s the tool!”)
    • Denial (e.g. “It’s not possible that’s related to my code!”)
    • Lack of cross-team collaboration (e.g. “No, you can’t install that monitor on *our* system!”)
  • Sometimes human bottlenecks need to be resolved before technical bottlenecks can be found (e.g. the perf team being redirected, resources being re-allocated, excessive micromanagement, etc.)
Some other relevant and interesting topics came up (such as the frequent discrepancy between tester/technical goals & business goals), but since these weren’t “on theme” we didn’t discuss them deeply enough for me to draw any conclusions other than “the points and positions that did come up were consistent with what I would have anticipated if I’d thought about it in advance” – which, for me, is a nice confirmation.

My point in sharing these thoughts on finding bottlenecks is so that all the folks out there who feel like theirs is the only organization that is thwarted by socio-political challenges even more than technical ones can realize that they really aren’t alone.

The findings of WOPR17 are the result of the collective effort of the workshop participants: AJ Alhait, Scott Barber, Goranka Bjedov, Jeremy Brown, Dan Downing, Craig Fuget, Dawn Haynes, Doug Hoffman, Paul Holland, Pam Holt, Ed King, Richard Leeke, Yury Makedonov, Emily Maslyn, Greg McNelly, John Meza, Blaine Morgan, Mimi Niemiller, Eric Proegler, Raymond Rivest, Bob Sklar, Roland Stens, and Nishi Uppal.
 
--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Monday, October 24, 2011

Best Ice Cream Practice

A Twitter conversation from Friday, Oct 21...

@TesterAB Anna Baik: What's best practice for icecream? I don't know what flavour of icecream I should eat, and I'm afraid to get it wrong.
  • @skillinen Sylvia Killinen: @TesterAB Test ALL the ice cream, that way you'll know which one best satisfies conformance. :)
  • @adampknight Adam Knight: @TesterAB it's vanilla, if you're not eating vanilla you are doing it wrong. I'd suggest getting yourself CVM certified as soon as you can
    • @adampknight  Adam Knight: @sbarber @TesterAB we should be specific. I'll clarify in my "10 ways to check if you are truly vanilla" blog post #BestIceCreamPractice
    • @sbarber Scott Barber: @adampknight @TesterAB Certified Valuation Manager ™? No, no, no, that's only appropriate for *children's* icecream! #BestIceCreamPractice
  • @testingqa Guy Mason: @TesterAB Best to go for that which you most prefer at that point in time?
    • @TesterAB Anna Baik: @testingqa No no no. There must be one flavour of icecream that is best for everybody to eat at all points in time.
    • @TesterAB Anna Baik: @sbarber @testingqa Yes! None of this wishy-washy nonsense, I only want to eat the BEST flavour of icecream. #BestIceCreamPractice
    • @sbarber Scott Barber: @TesterAB @testingqa So chocolate, pistachio, lemon sorbet, raspberry swirl, topped with caramel & orange soda, right? #BestIceCreamPractice
    • @TesterAB Anna Baik: @sbarber @testingqa #BestIceCreamPractice Finally, someone who'll give me an answer! ...wait. How do I know you're qualified?
    • @sbarber Scott Barber: @TesterAB @testingqa #BestIceCreamPractice I founded a non-profit to establish BICP qualification stds and issued myself a certification.
    • @TesterAB Anna Baik: @sbarber @testingqa #BestIceCreamPractice Sounds reassuring, I knew there'd be an Official Body somewhere to tell me what icecream to eat
    • @sbarber Scott Barber: @TesterAB @testingqa #BestIceCreamPractice The invoice for my services are in the mail. $400/hr + $25,000 for the BICP flavor report.
    • @TesterAB Anna Baik: @sbarber @testingqa Eeek! Don't I even get something to show to people to prove I now know #BestIceCreamPractice?
    • @sbarber Scott Barber: @TesterAB @testingqa #BestIceCreamPractice when check clears we mail you a Certified BICP Practitioner Certificate (suitable for framing)
Questions?  No? Didn't think so.  :)

 
--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Tuesday, October 18, 2011

Please, no new "certifications"

I just saw an advertisement on LinkedIn for this: Building a Certification Testing Program - Cutting through the hype to see how it really works, and I couldn't stop myself from adding the following comment:
Please make it stop. We don't need more "certification" programs -- not unless you are going to be the first organization that allows itself to be held legally and financially accountable when people you "certify" can't do what you "certified" they can.

Otherwise, conduct all the training you want. Assess student performance if you want. Only "pass" students who "pass" the assessment if you want.

Just do us all a favor and *STOP* calling it certification until you are willing to do things like:
  • reimburse hiring expenses to employers who hire folks you certified as being able to X who can't X
  • implement periodic re-assessment to enforce some bar of continued knowledge/skill/ability over time
  • implement some way to revoke certifications of folks who fail to demonstrate knowledge/skill/ability in the workforce
The list goes on, but I know it's pointless. The certification machine will continue no matter how loudly or how frequently I point out the ways in which it is frequently (at least arguably) unethical and fraudulent - at least in "testerland."
Seriously, this drives me insane. Others can take stands about content, assessment methods, etc. -- I have my opinions on those things, but honestly that part of the topic bores me. People decide what university to attend, what to major in, what electives to take, etc. for their degree programs... they can decide whether or not the content of some professional development program (with or without a "certification" component) is worth their effort. What I want to see is "certifying bodies" being held accountable for complying with the claims they make about the individuals they "certify."

I mean, seriously, have any of you seen any data that you'd consider either statistically significant, empirical (vs. anecdotal), or free enough from obvious experimental design flaws to support the claims we see from "certifying bodies"? If you have, please share the data with me and I'll list it inline -- unless, of course, it's flawed, in which case I'd be happy to point out how and why the data doesn't support the conclusion.

Otherwise, please, please, please don't engage in creating more of these things.  Please.

--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Monday, October 17, 2011

Having lunch with a giant...

I "officially" started my career in software performance in Feb of 2000, only much later to realize I'd started down that path years prior.  In the fall of 2001 (10 years ago), I felt I was stagnating in my self-guided education and went on a hunt for books, articles, training, and/or people to learn from.  I found some peers (and eventually co-founded WOPR with Ross Collard to maximize peer learning) and I found 3 "giants" on whose shoulders I've stood since then (meaning, all of my work was and has remained consistent, complimentary, and/or extended from their work in the field).  Those "giants" are Connie Smith, Ph.D. (Software Performance Engineering), Daniel Menasce, Ph,D. (Capacity and Scalability Planning) and Alberto Savoia (Performance Testing).

Last fall, I had the honor of being on a panel with Connie and spending some time talking to Daniel during the CMG conference in Orlando.  I'd never spoken or corresponded with them before that, but it was nice to meet them and we had some great conversations.

Over the years, however, I have corresponded regularly with Alberto Savoia.  As it turns out, he was moving on from software performance to what he would now call his next "it" just as I was becoming known in the industry, so we didn't converse regularly, but we did follow each other's careers.  During that time, I drew a lot of inspiration from Alberto: not just from the work he'd done in the software performance space, but also from his other accomplishments in technology, the kind and complimentary recommendations he gave me, and his graciously agreeing to write a foreword for Performance Testing Guidance for Web Applications when I asked.

So earlier this year when I had the chance, I dropped everything to review and comment on his new "it", Pretotyping. He said the review was helpful and that some of what I'd commented on would be included in the next version.

Today, I finally met Alberto face to face.  We had lunch.  We talked about projects & passions old and new; we recalled history and speculated about the future.  He gave me a signed copy of Pretotype It, and I gave him a signed copy of Web Load Testing for Dummies, both of which had been prepared in advance.  And while Alberto has accomplished far more in his technology career than I have, somehow I didn't feel like I was having lunch with the "giant" on whose shoulders most of the work I am known for stands; I felt like I was having lunch with an old friend I hadn't seen in too long.

To some of you, I suspect this seems a silly thing for me to be making a big deal about.  But for a guy who left a small town twenty-some-odd years ago, never imagining that I'd meet anyone "famous", let alone become a "celebrity" of sorts in my (admittedly very small) field, it means a lot that someone I've often credited as a luminary would not only take the time to have lunch with me, but also share thoughts and ideas with me like friends do.

So, thanks Alberto.  Thanks for the years of inspiration & thanks for the confirmation of friendship.  It means a lot to me, and know that you've provided me with an example I intend to follow with anyone I may inspire during my career and later have the opportunity to meet.


--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Thursday, October 13, 2011

Top 10 Automation Tips from STP Online Summit

I had the pleasure of hosting the second Online Summit delivered by Software Test Professionals: Achieving Business Value With Test Automation.  The online summit format consists of 3 sessions on each of 3 consecutive days.  The sessions for this summit were:
One of my duties as host was to try to summarize the most valuable nuggets of information from across all of the presentations into a "top tips" list.  This is what I came up with:

Scott's Top 10 Automation Tips from:


Tuesday, October 4, 2011

Is Junosphere the world's first cloud testing environment? Not really - IT in Context

I don't get too irked by companies coining new phrases to make subtle marketing distinctions in services, but when they do it so they can make first/best claims, it flips my bozo bit. Seriously, if your service is so bland or weak that you need to invent a new term so you can claim it's the "first/best thing called blah" without being called out for fraud, maybe you should just improve your service.

--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Friday, September 30, 2011

Agile backlash series...

From SearchSoftwareQuality.com:

Agile backlash series: Exploring Agile development problems and solutions


I think Jan Stafford did a great job on this series.  I don't agree with every opinion from everyone interviewed, but I wouldn't expect to.  I think it's fair, honest, insightful, and (best of all) focuses on experiences, challenges, and ideas about overcoming challenges instead of theory, marketing fluff, and excessive exaggeration.  :)

Of course, I'm always happy when someone is willing to publish quotes of mine like the following excerpts from Why Agile should not marginalize software testers:

"SSQ: You come in frequently to integrate testing into Agile development. What kind of problems do you see organizations having when integrating testing?

Scott Barber: The first thing that I hear about is, ‘What do we need testers for if we’re doing Agile? Isn’t everyone in Agile a generalist?’

Thursday, September 29, 2011

Making Every Test Count

This is from a while back, but I wouldn't call it dated.  It's a webinar; it runs for 48 min.  I like it, for whatever that's worth.  ;)

Abstract:

Do you ever find yourself wondering what the point is of executing this test... again!?!  Have you ever felt like the purpose of a test is to ensure there is a check mark in a particular check box?  Are you ever asked to get *more* information in even less time with even fewer resources than the last test project you worked on?

In this presentation, Scott Barber will introduce you to a variety of tips and techniques you can apply to virtually any testing you do as you strive to make every test you execute add value to the project.


--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Friday, September 2, 2011

Thoughts on Agile & Agile Testing

This past weekend, I finally made time to start reading Agile Testing: A Practical Guide For Testers And Agile Teams, Lisa Crispin & Janet Gregory, Addison-Wesley (2009).  I made it through the first two chapters before life called me away.  After I put the book down and started going about a mundane series of errands, I realized that I was feeling disappointed and that the disappointment had started growing just a few pages into the book.  Not because of what the book had to say; what it said was pretty good – not exactly how I would have expressed a few things, but such is the plight of a writer reading what someone else has written on a topic they also care and write about.  What was disappointing me was the fact that the stuff in those chapters needed to be said at all.

You see, as Lisa and Janet were describing what Agile Testing and Testing on Agile Teams was all about, and explaining how it is “different” from “traditional testing”, my first thought was: