Last week, I hosted STP's
Online Performance Summit,
a live, interactive webinar spanning 3 half-days and 9 sessions. As far as I know,
this was the first multi-presenter, multi-day, live webinar by testers
for testers. The feedback from attendees and presenters that I have seen
has all been very positive, and personally, I think it went very well.
On top of that, I had a whole lot of fun playing "radio talk show host".
The event sold out early at 100 attendees, leaving more folks who wanted to
attend unable to do so. Since this was an experiment of sorts in terms
of format and delivery, we made a commitment to the smallest and least
expensive level of service from the webinar technology provider, and by
the time we realized we had more interest than "seats", it was simply
too late to make the necessary service changes to accommodate more
folks. We won't be making that mistake again for our next online summit
to be held October 11-13 on the topic of "Achieving Business Value with
Test Automation". Keep your eyes on the
STP website for more information about that and other future summits.
With all of that context, now to the point of this post. During Eric Proegler's session (
Strategies for Performance Testing Integrated Sub-Systems),
a conversation emerged in which it became apparent that many
performance testers conduct some kind of testing that involves real
users interacting with the system under test while a
performance/load/stress test is running, for the purposes of:
- Linking the numbers generated through performance tests to the degree of satisfaction of actual human users.
- Identifying items that human users classify as performance issues that do not appear to be issues based on the numbers alone.
- Convincing stakeholders that the only metric we can collect that
can be conclusively linked to user satisfaction with production
performance is the percentage of users who are satisfied with performance
under production conditions.
The next thing that became apparent was that everyone who engaged in the conversation called this something different. So we
didn't
do what one would justifiably expect a bunch of testers to do (i.e.,
have an ugly argument about whose term came first or is more correct, one that
continues until no decision is made and all goodwill is lost). Instead,
we held a contest to name the practice. We invited the speakers and
attendees to submit their ideas, from which we'd select a name for the
practice. The stakes were that whoever submitted the winning entry
would receive a signed copy of Jerry Weinberg's book
Perfect Software, and that the speakers and attendees would use and promote the term.
The speakers and attendees submitted nearly 50 ideas. The speakers
voted that list down to their top 4, and then the attendees voted for
their favorite. In a very close vote, the winning submission, from
Philip Nguyen, was User Experience Under Load (congratulations, Philip!).