On Sat, Jul 24, 2010 at 10:46 AM, Harald Schilly
<harald.schi...@gmail.com> wrote:
> On Jul 24, 8:10 am, Robert Bradshaw <rober...@math.washington.edu>
> wrote:
>> We should do this as part of the tests, collecting timing data on each
>> test block (and perhaps even each line?).
>
> I don't think this would work for all lines, because completing all the
> tests would take too long (if we want to use "timeit", each line is
> repeated several times). Instead, I'd suggest adding something to the
> doctest infrastructure that executes in timeit(..) those lines that are
> tagged via "# timeit" appended to the line,
> e.g.
> sage: x = 1
> sage: _ = x*x # timeit
> The timing data would be collected in a key-value dictionary that is
> pickled into a file named after the current date+time (maybe also the
> revision number?) ... The key should also be something useful, e.g. a
> 3-tuple consisting of the file's relative path and name, the line
> number, and the string literal of the executed line.
> Based on that, it should be straightforward to write some code that
> analyzes execution-time regressions.
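
Something along these lines is roughly what I picture for the collection
side. Just a sketch: the function names below are made up (not existing
Sage doctest machinery), and timeit.Timer(globals=...) assumes a Python
version where that keyword is available.

import pickle
import time
import timeit

def time_tagged_line(source, globs, number=25):
    # Time one "# timeit"-tagged doctest line inside the doctest's
    # own namespace; take the best of 3 repeats to reduce noise.
    timer = timeit.Timer(stmt=source, globals=globs)
    return min(timer.repeat(repeat=3, number=number)) / number

def dump_timings(timings, revision=None):
    # 'timings' maps (relative file path, line number, source line)
    # to seconds per loop; pickle it under a date+time (and optionally
    # revision) stamped filename.
    stamp = time.strftime("%Y%m%d-%H%M%S")
    suffix = "-r%s" % revision if revision is not None else ""
    filename = "doctest-timings-%s%s.pickle" % (stamp, suffix)
    with open(filename, "wb") as f:
        pickle.dump(timings, f)
    return filename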

+1 to the idea of a timeit decorator, which would be useful for
microbenchmarking. I think timing whole blocks would be useful as well,
to make sure there are no macro-level regressions. Any one data point
isn't that useful on its own, but if everyone were consistently seeing a
slowdown for a given doctest, that would be an indicator that something
is going on.
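
For the analysis side, comparing two such pickles could be as simple as
the following (again only a sketch, with an arbitrary slowdown
threshold):

import pickle

def find_regressions(old_file, new_file, threshold=1.25):
    # Report doctest lines whose per-loop time grew by more than the
    # given factor between two pickled timing dictionaries.
    with open(old_file, "rb") as f:
        old = pickle.load(f)
    with open(new_file, "rb") as f:
        new = pickle.load(f)
    regressions = []
    for key in sorted(set(old) & set(new)):
        if new[key] > threshold * old[key]:
            regressions.append((key, old[key], new[key]))
    return regressions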

- Robert
