Joshua Gatcomb wrote:
> 1.  Would people prefer missing data for benchmarks where they won't work, or a manually entered high number to draw attention to them?
Make the harness time out at ten minutes, and record a completion time of 11 minutes for runs that don't finish in time. Otherwise the missing data points make the averages misleading: on many graphs of, say, IP performance, a router that drops 99 of 100 packets but delivers that one packet in only 1 ns shows up as simply having a 1 ns average response time, which is amazingly misleading.

> 2.  Should we be checking that the output of the benchmarks (right or wrong) is consistent?
Possibly. I'd be more interested in running the test suite alongside the benchmarks, and plotting a line of %success along with the response time.
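One way to pair the two numbers, sketched with made-up data; the `results` table, the helper, and the benchmark names are all illustrative:

```python
def percent_success(passed, total):
    """Share of test cases that passed, as a percentage."""
    return 100.0 * passed / total if total else 0.0

# Hypothetical per-benchmark records: (seconds, tests passed, tests run).
results = {
    "fib":  (12.3, 48, 50),
    "sort": (660.0, 0, 50),   # sentinel time: timed out, all tests failing
}

for name, (secs, passed, total) in results.items():
    print(f"{name:8s} {secs:8.1f}s  {percent_success(passed, total):5.1f}% pass")
```

Plotting both series together would make the router-style failure mode above impossible to miss: a fast time next to a low pass rate is obviously suspect.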

        -=- James Mastros,
        theorbtwo
