On Sun, Sep 16, 2012 at 10:46 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Sun, Sep 16, 2012 at 10:13 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>>
>>    What are the various plans for running the test cases on examples?
>>
>>    As I've said before, I'd really like each example to be self-contained,
>> which means its tests and its test results would all sit in the same file
>> (so, for example, the file could trivially be moved to another directory
>> and the tests would still work).  One way to do this is to have each
>> example file contain a chunk of Python code after the source that runs all
>> the tests, and then after that a chunk of all the outputs.  Having the
>> test cases inside the makefile (or in some directory, or, gasp, a "single
>> location-specific file") and the output files in the output directory is,
>> to me, not desirable.
>>
>
> It sounds like you are asking for something like Python's doctest for
> executables.
>
> http://docs.python.org/library/doctest.html
>
> If the output were reasonably short, we could build a system this way.
>
>
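To make the "doctest for executables" idea concrete, here is a minimal
sketch; the /*TEST ... TEST*/ comment-block format and the runner below are
just made-up placeholders, not an existing PETSc convention. Each "$ command"
line in the block is run and its stdout is compared with the expected lines
recorded beneath it.

import re
import subprocess

def run_embedded_tests(path):
    # Pull the trailing /*TEST ... TEST*/ comment block out of the source file.
    text = open(path).read()
    block = re.search(r'/\*TEST(.*?)TEST\*/', text, re.S)
    if block is None:
        return
    lines = [l.strip() for l in block.group(1).strip().splitlines()]
    i = 0
    while i < len(lines):
        if not lines[i].startswith('$ '):
            i += 1
            continue
        command = lines[i][2:]
        i += 1
        expected = []
        while i < len(lines) and not lines[i].startswith('$ '):
            if lines[i]:
                expected.append(lines[i])
            i += 1
        # Run the command and compare non-empty, stripped output lines.
        actual = subprocess.check_output(command, shell=True,
                                         universal_newlines=True)
        actual = [l.strip() for l in actual.splitlines() if l.strip()]
        print('%s: %s' % ('ok' if actual == expected else 'FAIL', command))

if __name__ == '__main__':
    import sys
    run_embedded_tests(sys.argv[1])
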
>>
>>    Jed,  what is your horrible plan?
>>
>
> I wrote a parser for our makefile-based tests. It currently creates files
> that describe tests like this:
>
> with Executable('ex15.c', libs='PETSC_TS_LIB'):
>     Test(id='1', args='-da_grid_x 20 -da_grid_y 20 -boundary 0 -ts_max_steps 10')
>     Test(id='2', args='-da_grid_x 20 -da_grid_y 20 -boundary 0 -ts_max_steps 10 -Jtype 2', compare='ex15_1')
>     Test(id='3', args='-da_grid_x 20 -da_grid_y 20 -boundary 1 -ts_max_steps 10')
>     Test(id='4', np=2, args='-da_grid_x 20 -da_grid_y 20 -boundary 1 -ts_max_steps 10')
>     Test(id='5', args='-da_grid_x 20 -da_grid_y 20 -boundary 0 -ts_max_steps 10 -Jtype 1', compare='ex15_1')
>
> I would like the "id" field to use more descriptive names where appropriate;
> these numbers are just the inherited names. This snippet of Python registers
> the executable and the associated tests in a global index (which will
> probably become a sqlite database). Test results can be reported back to the
> database. I wrote most of a parallel executor with active progress
> monitoring, similar to nose-progressive, but not all the pieces are playing
> nicely together yet.
>
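For illustration, the specification above could register itself with
something as small as the following; the class and function bodies here are
assumptions made for the sake of the example, not the actual harness code.

TEST_INDEX = []            # in-memory for now; could become a sqlite table
_current_executable = None

class Executable(object):
    """Context manager that scopes Test() entries to one executable."""
    def __init__(self, source, libs=''):
        self.source = source
        self.libs = libs
    def __enter__(self):
        global _current_executable
        _current_executable = self
        return self
    def __exit__(self, *exc):
        global _current_executable
        _current_executable = None

def Test(id, args='', np=1, compare=None):
    # Record the test under whichever executable is currently in scope.
    TEST_INDEX.append(dict(executable=_current_executable.source,
                           libs=_current_executable.libs,
                           id=id, args=args, np=np, compare=compare))

Executing a specification file like the one above would then leave a complete
description of its tests in TEST_INDEX.
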
> My eventual plan is to be able to batch up the test results, which will
> include basic timing information, and report them back to a central server
> so we can keep a log. That could be done with buildbot or Jenkins, which
> fulfill most of our needs for a "dashboard", but lack a good database query
> API.
>
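As a sketch of what the record-keeping could look like (the table name and
columns are guesses, not an agreed format), one row per test run with its
timing would already be enough to build reports on:

import sqlite3
import time

def record_result(db, test_id, np, exit_code, wall_time):
    # One row per run; a batch of rows is what would get shipped to the server.
    db.execute("""CREATE TABLE IF NOT EXISTS results
                  (test_id TEXT, np INTEGER, exit_code INTEGER,
                   wall_time REAL, timestamp REAL)""")
    db.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
               (test_id, np, exit_code, wall_time, time.time()))
    db.commit()

db = sqlite3.connect('testresults.db')
record_result(db, 'ex15_2', np=1, exit_code=0, wall_time=0.42)
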
> Test specification, which seems to be what you are most concerned with,
> could also be done via "doctests". The main thing is that if you execute the
> test script, it first updates its database with all executables and tests
> in subdirectories (specified any way we like), then does whatever
> operations you ask for (building, testing, etc.).
>
>
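A hypothetical driver along those lines might simply execute every
specification file it finds to refresh the index and then dispatch the
requested operation; the 'tests.py' file name and the operation names below
are placeholders, not a decided layout.

import os
import runpy
import sys

def update_index(root='.'):
    # Executing each specification file registers its tests (the file is
    # expected to import whatever defines Executable/Test).
    for dirpath, dirnames, filenames in os.walk(root):
        if 'tests.py' in filenames:
            runpy.run_path(os.path.join(dirpath, 'tests.py'))

if __name__ == '__main__':
    update_index()
    operation = sys.argv[1] if len(sys.argv) > 1 else 'test'
    print('would now perform:', operation)    # build, test, report, ...
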
>>
>>    Matt, what is your horrible plan?
>>
>
What I currently have:

  Very much like Jed's, except I use a dictionary instead of a Test() object,
which in Python is almost the same thing. It could be converted in about ten
minutes.
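Purely as an illustration of why the conversion is trivial, the two forms
carry the same information:

# Illustration only: a dictionary and a Test() object carry the same fields.
test_as_dict = {'id': '4', 'np': 2,
                'args': '-da_grid_x 20 -da_grid_y 20 -boundary 1 -ts_max_steps 10'}

class Test(object):
    def __init__(self, id, args='', np=1, compare=None):
        self.id, self.args, self.np, self.compare = id, args, np, compare

test_as_object = Test(**test_as_dict)   # the ten-minute conversion, in essence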

What I want:

  I have no problem with regression tests, but I do not think it makes a
difference whether the output is in a separate file. What I really want is
unit-style tests for things. For instance, I want the norms of the iterates
as objects, so I can test them against a given relative tolerance, and I want
solver descriptions as objects, so I can simply test for equality, etc.
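A sketch of what such a unit-style check could look like; the run_example()
fixture and the get_norms()/describe_solver() accessors are hypothetical
stand-ins, not a proposed API.

def assert_close(actual, expected, rtol=1e-7):
    # Compare sequences of norms with a relative tolerance.
    for a, e in zip(actual, expected):
        assert abs(a - e) <= rtol * abs(e), 'norm %g differs from %g' % (a, e)

def test_ex15_jacobian_variant(run_example):
    base    = run_example('ex15', '-da_grid_x 20 -da_grid_y 20 -boundary 0 '
                                  '-ts_max_steps 10')
    variant = run_example('ex15', '-da_grid_x 20 -da_grid_y 20 -boundary 0 '
                                  '-ts_max_steps 10 -Jtype 2')
    assert_close(variant.get_norms(), base.get_norms(), rtol=1e-6)
    assert variant.describe_solver() == base.describe_solver()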

    Matt


>     Can we converge to something agreeable to all?
>>
>>    Barry
>>
>>
>>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener