In article <87fwpse4zt....@benfinney.id.au>,
 Ben Finney <ben+pyt...@benfinney.id.au> wrote:

> Raymond Hettinger <pyt...@rcn.com> writes:
> 
> > I think you're going to need a queue of tests, with your own test
> > runner consuming the queue, and your on-the-fly test creator running
> > as a producer thread.
> 
> I have found the ‘testscenarios’ library very useful for this: bind a
> sequence of (name, dict) tuples to the test case class, and each tuple
> represents a scenario of data fixtures that will be applied to every
> test case function in the class.
> 
>     <URL:http://pypi.python.org/pypi/test-scenarios>
> 
> You (the OP) will also find the ‘testing-in-python’ discussion forum
> <URL:http://lists.idyll.org/listinfo/testing-in-python> useful for this
> topic.

That link doesn't work; I assume you meant

http://pypi.python.org/pypi/testscenarios/0.2

This is interesting, and a bunch to absorb.  Thanks.  It might be what 
I'm looking for.  For the moment, I'm running the discovery and then doing 
something like

    class_name = 'Test_DiscoveredRoute_%s' % cleaned_route_name
    g = globals()
    g[class_name] = type(class_name, bases, new_dict)

on each discovered route, and calling unittest.main() after I'm done 
doing all that.  It's not quite what I need, however, so something like 
testscenarios or raymondh's test queue idea might be where this needs to 
go.
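
For the archives, here's roughly the shape of what I'm doing now, boiled 
down to something self-contained -- discover_routes() and the single 
generated test method are stand-ins for the real discovery step and the 
real checks:

    import re
    import unittest

    def discover_routes():
        # Stand-in for the real discovery step.
        return ['/foo', '/bar/baz']

    bases = (unittest.TestCase,)

    def make_test_dict(route):
        # One generated test method per discovered route; the real
        # version builds several methods from the route's details.
        def test_route_looks_sane(self):
            self.assertTrue(route.startswith('/'))
        return {'route': route,
                'test_route_looks_sane': test_route_looks_sane}

    for route in discover_routes():
        cleaned_route_name = re.sub(r'\W+', '_', route).strip('_')
        class_name = 'Test_DiscoveredRoute_%s' % cleaned_route_name
        globals()[class_name] = type(class_name, bases, make_test_dict(route))

    unittest.main()

It works, but stuffing generated classes into globals() just so 
unittest.main() can find them is the part that feels like it wants a 
better home.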
-- 
http://mail.python.org/mailman/listinfo/python-list
