Joshua Gatcomb wrote:
1. Would people prefer missing data for benchmarks
where they won't work, or a manually entered high
number to draw attention to them?
Make the harness time out at ten minutes, and enter a completion time of
11 minutes for those that don't finish in time? (For many graphs of
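The timeout-plus-sentinel policy proposed above is easy to sketch. This is a minimal illustration, not the actual Parrot benchmark harness; the function name `run_benchmark` and the use of Python's `subprocess` are assumptions for the sake of example.

```python
# Sketch of the proposed policy: cap each benchmark at ten minutes and
# record a sentinel of 11 minutes for any run that has to be killed.
# (Hypothetical harness code, not Parrot's; names are illustrative.)
import subprocess
import time

TIMEOUT = 10 * 60    # kill a benchmark after ten minutes of wall time
SENTINEL = 11 * 60   # reported "completion time" for runs that hang

def run_benchmark(cmd, timeout=TIMEOUT, sentinel=SENTINEL):
    """Run cmd; return elapsed seconds, or the sentinel if it times out."""
    start = time.monotonic()
    try:
        subprocess.run(cmd, timeout=timeout, check=False,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    except subprocess.TimeoutExpired:
        return sentinel
    return time.monotonic() - start
```

The sentinel keeps a hung benchmark visible as an obvious spike on the graphs, rather than a silent gap in the data.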
Matt Diephouse <[EMAIL PROTECTED]> wrote:
> Maybe the benchmarks should be part of the test suite? They're valid
> code, so they should work at all times: if they don't, something's
> broken. Seems like a good opportunity for testing to me.
Yep.
Patches welcome. But please make sure that they do
On Thu, 4 Nov 2004 08:57:28 -0800 (PST), Joshua Gatcomb
<[EMAIL PROTECTED]> wrote:
> What I have found interesting, though, is when
> individual benchmarks don't work. For instance, from
> 10/20 to 10/22, gc_generations and gc_header_reuse
> would just hang (still running after 10 minutes).
> Last
All:
In collecting the historical data for the benchmark
statistics and graphs, I discovered that there were a
few days where I had to play the CVS time game to get
a working parrot for that day. I expected this.
What I have found interesting, though, is when
individual benchmarks don't work. For