Tuesday, October 23, 2007

From The Web: "Noncertified IT pros earn more..."

Stop the presses! Can it be true? The industry wants effective, qualified, multi-dimensional people who are capable of understanding business drivers & risk mitigation and applying that in a sapient way to their job as opposed to folks who paid someone to teach them how to pass a multiple-choice exam?!? Amazing!
 
Noncertified IT pros earn more than certified counterparts: survey
 
--
Scott Barber
Chief Technologist, PerfTestPlus, Inc.
About.me

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

"If you can see it in your mind...
     you will find it in your life."

Thursday, October 18, 2007

From the Mailbox: Software Development: Art or Science?

Here’s a question that I didn’t realize I had much to say about until I read my own response.
 
The Question:
Software Development: Is it an art or a science? An age-old question, I know, but what do you think and why?
My Response:

Tuesday, October 16, 2007

From the Mailbox: What makes software "good" or "bad"?

I was asked the question below (lightly edited for anonymity, clarity, and length) today and found it intriguing, so I thought I'd post it here.
 
The Question:
This is an attempt to understand how (and why) users, practitioners, and professionals perceive the difference between a good software product and a bad software product, specifically released software products.
My Response:

Monday, September 17, 2007

Gentlemen, Start Your Engines!!!

My most recent column, inspired by a surprise trip to the Brickyard 400, has just been posted on TechTarget. In it, I discuss the distinction between "delivery" and "done" when it comes to testing the performance of software systems.
Countless hours of development are now in the past. Testing indicates that everything is ready for the big day. The whole team is on hand, and the world is watching. It's the moment of truth; time to find out if all of the hard work is going to pay off. Anticipation builds until the command is given…
"Gentlemen, start your engines!"
The cars come to life. They take a few pace laps and at last, the green flag drops. In fewer than 90 seconds the cars are back on the front stretch approaching speeds of 200 mph -- the pinnacle of stock car performance.
This summer I worked on a project in Indianapolis. Usually when I travel to remote client sites I fly home on the weekends, but there was one weekend that I chose to stay, for two reasons. First, the flights for that weekend were insanely expensive, and second, I have some friends in Indianapolis whom I'm always happy to have an excuse to visit. As luck would have it, the flights were expensive because that was the weekend of the Brickyard 400, and one of the friends I wanted to spend time with had a spare ticket, which I shamelessly accepted when he offered.
During the pomp and circumstance leading up to the start of the race I realized what a fabulous example the race was of one of my most-quoted sound bites related to performance testing: "Don't confuse delivery with done."
 
See the column for the rest of the story.
 

Saturday, August 25, 2007

Model Workloads for Performance Testing: FIBLOTS

This is the third installment of a currently unknown number of posts about heuristics and mnemonics I find valuable when teaching and conducting performance testing.
 
Other posts about performance testing heuristics and mnemonics are:
 
 
For years, I have championed the use of production logs to create workload models for performance testing. During the same period, I've been researching and experimenting with methods to quickly create "good enough" workload models without empirical data, models that still increase the value of the performance tests. I recently realized that these two ideas are actually complementary, not mutually exclusive, and that with or without empirical usage data from production logs, I do the same thing. I:
 
FIBLOTS.
While the play on words makes this mnemonic particularly memorable, I'm not saying that I just make it up. Rather, the acronym represents the following guideword heuristics, which have served me well over the years in deciding what to include in my workload models.
 
  • Frequent: Common application usage.
  • Intensive: Resource-hogging activities.
  • Business Critical: Activities that matter even if they are both rare and low-risk.
  • Legal: Stuff that could get you sued or keep you from getting paid.
  • Obvious: Stuff that is likely to earn you bad press if it fails.
  • Technically Risky: New technologies, old technologies, places where the system has failed before, previously under-tested areas.
  • Stakeholder Mandated: Don't argue with the boss (too much).
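For readers who like to see heuristics operationalized, the guidewords above can be sketched as a simple inclusion filter: flag each candidate scenario against the FIBLOTS criteria and include it in the workload model if it matches at least one. This is a minimal, hypothetical sketch; the scenario names, flags, and the match-any rule are my illustrative assumptions, not part of the original mnemonic.

```python
# Hypothetical sketch: deciding what belongs in a workload model by
# checking candidate scenarios against the FIBLOTS guidewords.
# All scenario names and flags below are illustrative assumptions.

FIBLOTS = [
    "frequent",             # common application usage
    "intensive",            # resource-hogging activities
    "business_critical",    # matters even if rare and low-risk
    "legal",                # could get you sued or not paid
    "obvious",              # failure would earn bad press
    "technically_risky",    # new/old tech, prior failures, under-tested areas
    "stakeholder_mandated", # the boss said so
]

def include_in_workload(flags):
    """Include a scenario if it matches at least one FIBLOTS guideword."""
    return any(flags.get(guideword, False) for guideword in FIBLOTS)

# Illustrative candidates: which scenarios make the cut?
candidates = {
    "login":          {"frequent": True},
    "monthly_report": {"intensive": True, "business_critical": True},
    "about_page":     {},  # matches no guideword: leave it out
}

workload = [name for name, flags in candidates.items()
            if include_in_workload(flags)]
print(workload)  # ['login', 'monthly_report']
```

In practice the flags would come from production logs where available, and from interviews with the business, legal, and technical stakeholders where they are not, which is exactly the point of the mnemonic.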