+1 to the idea of timing doctests. I would use them all the time for
regression testing and when testing improvements; not least for my own
Sage "library".

It seems to me that it would only very rarely be interesting to look
at others' timing tests, so I don't really think the extra work
required to implement and maintain a solution that collects everyone's
timings centrally is justifiable. However, +1 for the idea of looking
for an existing open-source Python project, or contacting others about
starting a new one.

> Also, I was talking to Craig Citro about this and he had the
> interesting idea of creating some kind of a "test object" which would
> be saved and could then be re-run in future versions of Sage. The
> idea of saving the tests that are run, and then running the exact
> same tests (rather than worrying about the correlation of files and
> tests) will make catching regressions much easier.
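
For what it's worth, here is a minimal sketch of how such a test
object could work, using plain Python's doctest and pickle modules
(Sage's own doctesting machinery would of course need something more
elaborate; the function names here are just made up for illustration):

    import doctest
    import pickle

    def save_test_objects(module, path):
        """Extract the doctests from ``module`` and pickle them to ``path``."""
        tests = doctest.DocTestFinder().find(module)
        for t in tests:
            t.globs = {}  # module globals aren't picklable; re-supply at run time
        with open(path, "wb") as f:
            pickle.dump(tests, f)

    def rerun_test_objects(path, globs):
        """Unpickle previously saved tests and run them against the current library."""
        with open(path, "rb") as f:
            tests = pickle.load(f)
        runner = doctest.DocTestRunner(verbose=False)
        for t in tests:
            t.globs = dict(globs)  # fresh execution namespace for each test
            runner.run(t)
        return runner.summarize()  # (failures, attempts)

Since the pickled file captures the exact examples, re-running it
against a newer Sage would exercise the same inputs even if the source
files have since been reorganised.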

Not to go off topic, but am I the only one bothered by the - at least
theoretical - overhead of searching for, extracting and parsing every
single doctest each time I want to run them? Does anyone know how much
overhead is actually hidden there (timewise)? (A rough way to measure
it is sketched below.) If it is at all significant, we could think
about extending the above excellent idea of test objects to be what is
_always_ run when doctesting, and then just add a flag
-extract_doctest to ./sage for rebuilding the test objects whenever I
know there has been a change.
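
Something like the following would time just the search/extract/parse
step for a single module - again assuming plain Python's doctest
module rather than Sage's framework, so take the numbers with a grain
of salt; difflib is only a stand-in for whichever Sage module you care
about:

    import doctest
    import time

    def extraction_overhead(module):
        """Return (seconds, number of doctests) spent extracting from ``module``."""
        start = time.perf_counter()
        tests = doctest.DocTestFinder().find(module)
        return time.perf_counter() - start, len(tests)

    if __name__ == "__main__":
        import difflib  # stand-in; any module containing doctests works
        secs, n = extraction_overhead(difflib)
        print("extracted %d doctests in %.4fs" % (n, secs))

Multiplying that by the number of modules in the Sage library would
give a rough upper bound on what such a flag could save.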

Cheers,
Johan
