flawed in one way or another.
I suggest that we add a benchmark/ subdirectory and create a
canonical suite of benchmarks that exercise things well (and
hopefully fully). Then we can all post relative times for runs on
this benchmark suite, and we will know exactly what is being tested
and how valid it is.
Gordon Henriksen <[EMAIL PROTECTED]> wrote:
> Would be nice if there were a convenient way to run the lot of them
> and collect the timing information, though.
Yep. That would be really great. That is: have per-platform numbers
over time (correlated to patches) about the performance of the current
code.
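
For what it's worth, here is a minimal sketch of what such a runner
could look like. Everything in it is an assumption rather than
anything the tree actually has: the benchmark/ layout with one
executable per benchmark, the bench-times.csv log, and all the names
in the script are hypothetical.

#!/usr/bin/env python3
"""Sketch: time every executable under benchmark/ and append the
results, tagged with the platform, to a CSV log."""

import csv
import os
import platform
import subprocess
import time
from pathlib import Path

BENCH_DIR = Path("benchmark")      # assumed layout: one executable per benchmark
LOG_FILE = Path("bench-times.csv") # hypothetical per-machine timing log

def run_one(exe: Path) -> float:
    """Run a single benchmark and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run([str(exe)], check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

def main() -> None:
    rows = []
    for exe in sorted(BENCH_DIR.iterdir()):
        if exe.is_file() and os.access(exe, os.X_OK):
            elapsed = run_one(exe)
            print(f"{exe.name}: {elapsed:.3f}s")
            rows.append([time.strftime("%Y-%m-%d %H:%M:%S"),
                         platform.platform(), exe.name, f"{elapsed:.6f}"])
    # Append to the log so successive runs accumulate a history that
    # can be correlated with patches later.
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "platform", "benchmark", "seconds"])
        writer.writerows(rows)

if __name__ == "__main__":
    main()

Something this simple would already let people diff the CSV across
checkouts; anything fancier (statistics, multiple runs per benchmark)
could be layered on top.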
> I suggest that we add a benchmark/ subdirectory and create a canonical
> suite of benchmarks that exercise things well (and hopefully fully).
> Then we can all post relative times for runs on this benchmark suite,
> and we will know exactly what is being tested and how valid it is.
Like, for example, examples/benchmarks?
It's quite difficult to create benchmarks that test *everything*. But
any time someone posts a good benchmark, it really should be added to
the suite.
Matt