More than other automation, bad performance test automation leads to:
- Undetectably incorrect results
- Seemingly good release decisions based on bad data
- Surprising, catastrophic failures in production
- Incorrect hardware purchases
- Extended downtime
- Significant media coverage and brand erosion
More than other automation, performance test automation demands:
- Clear objectives (not pass/fail requirements)
- Valid application usage models
- Detailed knowledge of the system and the business
- External test monitoring
- Cross-team collaboration
Unfortunately, bad performance test automation is:
- Very easy to create
- Difficult to detect
- Even more difficult to correct
The following 10 tips, based on my own experience, will help you avoid creating bad performance test automation in the first place.
Tip #10: Data Design
- *Lots* of test data is essential (at least three data sets per simulated user; ten is not uncommon)
- Test data should be unique and minimally overlapping (updating the same database row 1,000 times has a very different performance profile than updating 1,000 different rows)
- Account for changed or consumed data (without careful planning, a search will return different results, and an item to be purchased may be out of stock)
- Don’t share your data environment (see above)
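The first two points can be sketched in code. This is a minimal illustration, assuming a Python-based test harness; the function and field names are hypothetical, not part of any particular load-testing tool:

```python
import itertools

def generate_test_data(num_users, sets_per_user=3):
    """Build unique, non-overlapping data sets for each simulated user.

    Each virtual user gets its own pool of records, so no two users ever
    update the same row -- avoiding the contention profile that shared
    data would create. (Names here are illustrative only.)
    """
    counter = itertools.count(1)  # one global sequence guarantees uniqueness
    data = {}
    for user in range(1, num_users + 1):
        data[f"vuser_{user:04d}"] = [
            {"account_id": next(counter), "search_term": f"term-{next(counter)}"}
            for _ in range(sets_per_user)
        ]
    return data

pool = generate_test_data(num_users=1000, sets_per_user=3)

# Sanity check: every account_id belongs to exactly one virtual user,
# so the test never hammers the same row from multiple users.
ids = [rec["account_id"] for recs in pool.values() for rec in recs]
assert len(ids) == len(set(ids))
```

Generating the data up front like this also makes it easy to reset or regenerate the pool between runs, which helps with the consumed-data problem above.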