I think it would be good if we could track Guile's performance better,
and how it changes over time.  But...

1. We don't currently have many benchmarks.  There are just 3 in the
   repo, and they're all pretty trivial.

2. I have no experience with, and no immediate feel for, how to
   manage the output from benchmarks.  I'm thinking of things like

   - needing to run benchmarks multiple times, in order to average out
     effects like the code being cached in memory (see the sketch
     after this list)

   - different people running the benchmarks in different environments
     (architectures, chip speeds etc.), and whether the results from
     those environments can be sensibly merged or collated

   - how best to present results, and show how they change over time
     (i.e. with Guile releases).
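
To make the first of those points concrete, here's roughly the kind
of harness I have in mind -- just a sketch, with a name and timer
choice of my own invention, not anything that exists in the repo yet:

  ;; Run THUNK once to warm up caches, then N more times, and return
  ;; the average CPU time per run in seconds.
  (define (average-time thunk n)
    (thunk)                             ; discard a warm-up run
    (let loop ((i 0) (total 0))
      (if (= i n)
          (/ total n internal-time-units-per-second 1.0)
          (let ((start (get-internal-run-time)))
            (thunk)
            (loop (+ i 1)
                  (+ total (- (get-internal-run-time) start)))))))

  ;; e.g. (average-time (lambda () (expt 3 50000)) 10)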

(And I imagine there are other problems and subtleties that I haven't
even thought of...)

So, any help or advice on these points would be much appreciated!  If
anyone has some interesting benchmark code that they'd like to
contribute, or wants to write new benchmarks, please do so.  Or if
there are standard Scheme benchmarks that we could incorporate, please
point them out.
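
To give an idea of the shape such a contribution might take, even
something as small as the classic `tak' function from the old Gabriel
suite would be a useful data point for procedure-call and arithmetic
performance.  Just as a sketch, not a proposal for the actual suite:

  ;; Takeuchi function: heavily recursive, exercises procedure calls
  ;; and small-integer arithmetic.
  (define (tak x y z)
    (if (not (< y x))
        z
        (tak (tak (- x 1) y z)
             (tak (- y 1) z x)
             (tak (- z 1) x y))))

  ;; e.g. time (tak 18 12 6), perhaps with the harness above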

And then if anyone has expertise on actually running benchmarks
regularly, and collating and presenting the results, and would like to
contribute that, please go ahead!

Many thanks,
    Neil


