Hi all, I've been playing with flood a bit. One thing I liked in ab that I found missing in flood was the statistical summary it prints at the end.
While analyze-relative does the average stuff, I wanted the standard deviation and the percentile ramp. As I didn't see anything that did that, I whipped one up that I called flood_stat_report. While it could use a bit of internal improvement, I thought I would run it up the flagpole and see what others thought. If anyone else finds it useful, perhaps we could stuff it into the tree so the next me won't have to rewrite it :-). I will attach it to this email.

A question about distributed load generation: has anyone thought much about how to look at the load once it stabilizes? It seems to me that you want to measure the load at steady state, and that the window after all of the flood clients have started and before any of them begin to finish is the best measure of the system's responses. One approach would be to combine all of the client outputs, lop off the beginning and end results, and then look hard at the middle bits. There are a few ways to get at the middle bits: look at all the files being combined and select only the common time ranges, or perhaps run for x + 2 minutes and lop the first and last minute off. Has anyone tooled anything like this?

My program will do the skip time + window thing, but it kind of assumes the input data is sorted (it uses the first entry to pick the starting absolute time, which might not be very accurate if I just cat-ed a number of results files together).

Dave.
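P.S. In case it helps make the skip time + window idea concrete, here is a rough sketch (not the attached program). It assumes a made-up input format of one "<absolute_start_us> <response_us>" line per request rather than flood's actual output, and it makes a first pass to find the earliest timestamp, so it shouldn't matter whether the cat-ed files are sorted:

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        long long t, resp, earliest = -1;
        long long skip_us   = 60LL * 1000000;   /* lop off the first minute */
        long long window_us = 60LL * 1000000;   /* then keep one minute of steady state */
        FILE *fp;

        if (argc < 2 || !(fp = fopen(argv[1], "r")))
            return 1;

        /* first pass: earliest absolute time across all of the cat-ed results */
        while (fscanf(fp, "%lld %lld", &t, &resp) == 2)
            if (earliest < 0 || t < earliest)
                earliest = t;

        /* second pass: keep only the middle bits */
        rewind(fp);
        while (fscanf(fp, "%lld %lld", &t, &resp) == 2)
            if (t >= earliest + skip_us && t < earliest + skip_us + window_us)
                printf("%lld %lld\n", t, resp);

        fclose(fp);
        return 0;
    }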
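P.P.S. For anyone wondering what I mean by the standard deviation and percentile ramp, this toy sketch (with made-up sample times, not the attached code) shows roughly the shape of summary I'm after:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    static int cmp_long(const void *a, const void *b)
    {
        long la = *(const long *)a, lb = *(const long *)b;
        return (la > lb) - (la < lb);
    }

    static void report(long *times, size_t n)
    {
        double sum = 0.0, sumsq = 0.0, mean, stddev;
        int pcts[] = { 50, 66, 75, 80, 90, 95, 98, 99, 100 };
        size_t i;

        for (i = 0; i < n; i++)
            sum += times[i];
        mean = sum / n;
        for (i = 0; i < n; i++)
            sumsq += (times[i] - mean) * (times[i] - mean);
        stddev = n > 1 ? sqrt(sumsq / (n - 1)) : 0.0;

        /* the ab-style percentile ramp wants the data sorted */
        qsort(times, n, sizeof(long), cmp_long);

        printf("requests: %lu  mean: %.1f us  stddev: %.1f us\n",
               (unsigned long)n, mean, stddev);
        for (i = 0; i < sizeof(pcts) / sizeof(pcts[0]); i++) {
            size_t idx = ((size_t)pcts[i] * n + 99) / 100;   /* ceil(n * p / 100) */
            printf("%3d%%  %ld us\n", pcts[i], times[idx ? idx - 1 : 0]);
        }
    }

    int main(void)
    {
        /* made-up response times in microseconds, just to show the output shape */
        long sample[] = { 120, 95, 300, 110, 250, 130, 105, 400, 115, 98 };
        report(sample, sizeof(sample) / sizeof(sample[0]));
        return 0;
    }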
flood_stat_report.c