Thursday, October 27, 2011

WOPR 17, my takeaways


The Workshop On Performance and Reliability (WOPR) 17 was held Oct 20-22, 2011 on the theme of “Finding Bottlenecks”.  This was a historic event in the sense that no other peer workshop inspired by LAWST has convened this many times.  Of course, as a co-founder of WOPR, I’m (somewhat unreasonably) proud of this accomplishment.  But the real achievement is that over the last 9 years so many folks have been so inspired by the community and value of WOPR that they have volunteered their time to plan and organize these events, their companies have donated meeting space (and often food & goodies), and participants have frequently paid their own way (sometimes taking vacation time) to attend.  Seventeen events, one every 6 months since WOPR 1, is significant – whether or not my “founder’s pride” is justified.  :)

As is the tradition of WOPR, 20-25 folks, selected or invited by the “content owner” (a.k.a. the person or team who chose the theme to be explored this time), brought their personal experiences related to “Finding Bottlenecks” to share and explore with one another.  Also as is the tradition, certain patterns and commonalities emerged as these experiences were described and discussed. Everyone has their own take; there are no official findings, and I’m not even going to pretend that I can attribute all the contributing experiences and/or conversations to my takeaways below.

  • Finding bottlenecks can be technically challenging. Examples include:
    • Analyzing the test & the data is far from straightforward.
    • The “most useful” tools to narrow down the bottleneck may not be available – forcing us to be technically “creative” to work around those roadblocks (see the sketch after this list).
  • Finding bottlenecks can be *very* socio-politically challenging. Examples include:
    • Lack of trust (e.g. “That’s not a bottleneck, that’s the tool!”)
    • Denial (e.g. “It’s not possible that’s related to my code!”)
    • Lack of cross-team collaboration (e.g. “No, you can’t install that monitor on *our* system!”)
  • Sometimes human bottlenecks need to be resolved before technical bottlenecks can be found (e.g. the perf team being redirected, resources being re-allocated, excessive micromanagement, etc.).
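
To make that “technically creative” point a bit more concrete, here’s a minimal sketch (my own illustration, not something presented at the workshop) of the kind of workaround we were talking about: when a proper monitoring or profiling tool can’t be installed, even a Python standard-library script can start to narrow down where the time is going at the front door. The URL and sample count here are, of course, hypothetical.

import time
import statistics
import urllib.request

URL = "http://example.com/api/health"  # hypothetical endpoint
SAMPLES = 50

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()  # include transfer time, not just time-to-first-byte
    except OSError:
        continue  # time only the successful samples
    timings.append(time.perf_counter() - start)

if timings:
    timings.sort()
    p50 = statistics.median(timings)
    p95 = timings[int(0.95 * (len(timings) - 1))]
    print(f"samples={len(timings)}  p50={p50*1000:.1f} ms  p95={p95*1000:.1f} ms")

It’s crude, but a p95 that’s wildly higher than the p50 at the front door is often enough evidence to earn the conversation about installing real monitors deeper in the stack.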
Some other relevant and interesting topics came up (such as the frequent discrepancy between tester/technical goals & business goals), but since these weren’t “on theme” we didn’t discuss them deeply enough for me to draw any conclusions other than “the points and positions that did come up were consistent with what I would have anticipated if I’d thought about it in advance”, which, for me, is a nice confirmation.

My point in sharing these thoughts on finding bottlenecks is to reassure all the folks out there who feel like theirs is the only organization where socio-political challenges are even more of an obstacle than technical ones: you really aren’t alone.

The findings of WOPR 17 are the result of the collective effort of the workshop participants: AJ Alhait, Scott Barber, Goranka Bjedov, Jeremy Brown, Dan Downing, Craig Fuget, Dawn Haynes, Doug Hoffman, Paul Holland, Pam Holt, Ed King, Richard Leeke, Yury Makedonov, Emily Maslyn, Greg McNelly, John Meza, Blaine Morgan, Mimi Niemiller, Eric Proegler, Raymond Rivest, Bob Sklar, Roland Stens, and Nishi Uppal.
 
--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."
