Hi Floris,

On Sun, Sep 30, 2012 at 12:57 +0100, Floris Bruynooghe wrote:
> Hi all,
> 
> While pytest-timeout was targeting hanging/deadlocked code there have
> been various requests which relate to failing tests based on
> performance.  Another issue which came up is the control of timeouts
> during fixtures (funcarg/resource) setup.  So instead of saying that
> this is not in the scope of pytest-timeout, as I have been doing so
> far, here is a proposal of how to address these requirements.  I'd be
> very grateful for any feedback you could provide.
> 
> 
> Firstly I'd like to introduce a timeout funcarg resource which allows
> you to do things like this::
> 
>    @pytest.funcarg()
>    def shared_resource(timeout):
>        with timeout.pause(), timeout.skip(30):
>            return acquire_shared_resource()
> 
> Some details:
> 
> timeout.pause: context manager to pause the normal timeout timer if
> one is active.
> 
> timeout.skip and timeout.fail: These are context managers which will
> try to "soft" timeout the code block within them.  This means that if
> reasonably possible they will try to interrupt the code block after the
> timeout, but if not they will just run the code block until it finishes.
> Either way, if the timeout was exceeded, Skip or Fail will be raised
> when exiting the context manager.

Should skip be raised because the system/network or whatever is deemed
too slow to handle the tests?  Usually skip (as opposed to xfail) is
there for a dependency or environment miss/mismatch, i.e. the test
cannot be run at all.  In any case, I also don't see why timeout needs
to be a funcarg.  Could it not work like this::

    @pytest.funcarg()
    def shared_resource():
        with pytest.timeout(30, ontimeout=lambda: pytest.fail("too slow")):
            ...

and such a timeout helper could also be used in test functions.  Of course
it doesn't even need to be a pytest-specific timeout helper - it could
already exist in some little library?
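
To make that concrete, here's a rough sketch of what such a generic
helper might look like (purely an assumption on my part, nothing that
exists): interrupt via SIGALRM where available - Unix, main thread
only - and otherwise just check the clock once the block has finished,
which also matches the "soft" semantics you describe.  The ontimeout
callback is expected to raise, e.g. pytest.fail or pytest.skip::

    import signal
    import time
    from contextlib import contextmanager

    class TimeoutInterrupt(Exception):
        pass

    @contextmanager
    def timeout(seconds, ontimeout):
        start = time.time()
        use_alarm = hasattr(signal, "SIGALRM")
        if use_alarm:
            def handler(signum, frame):
                raise TimeoutInterrupt()
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(int(seconds))
        try:
            try:
                yield
            except TimeoutInterrupt:
                ontimeout()
        finally:
            if use_alarm:
                signal.alarm(0)
                signal.signal(signal.SIGALRM, old)
        # "soft" fallback: the block ran to completion but took too long
        if time.time() - start > seconds:
            ontimeout()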

> Also, no extra information such as thread and stack
> details will be provided.  Its full signature will probably be something
> like timeout.skip(timeout, timer=time.time, msg=None) where the idea
> is to provide some timers in the pytest namespace like
> pytest.timer_ms, pytest.timer_pystone.  This can be useful in both
> fixtures as well as test functions.

For me, passing in a string like "10mp" (10 megapystones) would be fine.
I did something like that for timeout specs in the PyPy test suite several
centuries ago.
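
Something along these lines, say - the suffixes and the helper name are
of course just made up for illustration::

    # hypothetical unit suffixes: seconds, milliseconds, megapystones
    _UNITS = {"s": ("time", 1.0), "ms": ("time", 0.001), "mp": ("pystones", 1e6)}

    def parse_timeout_spec(spec):
        # try longest suffixes first so "ms"/"mp" win over a bare "s"
        for suffix in sorted(_UNITS, key=len, reverse=True):
            if spec.endswith(suffix):
                kind, scale = _UNITS[suffix]
                return float(spec[:-len(suffix)]) * scale, kind
        return float(spec), "time"  # bare numbers default to seconds

    assert parse_timeout_spec("10mp") == (10e6, "pystones")
    assert parse_timeout_spec("250ms") == (0.25, "time")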

> Lastly I'm thinking of providing a way of getting a timer which uses
> pytest-cache to measure the duration of a test (in pystones) and
> will fail the test if it takes longer than the previous pystones
> value.  This would only be reset by clearing the cache.  I'm not sure
> whether to do this as part of the new timeout funcarg resource or as a
> new marker, so either::
> 
>    def test_foo(timeout):
>        with timeout.noregress():
>            performance_critical_func()
> 
> or::
> 
>    @pytest.mark.timeout_noregress
>    def test_foo():
>        performance_critical_func()

The "timeout" name doesn't make much sense to me here.  I wonder if the
performance-related thoughts (which I think are very
interesting) wouldn't better go into a "pytest-speed" or perf plugin
with corresponding markers/funcarg names.
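
Whatever the naming, the noregress bit itself could stay quite small.
A sketch, assuming a pytest-cache-like object with get/set per key (the
names here are invented, and the 10% slack against measurement noise is
arbitrary)::

    import time
    from contextlib import contextmanager

    @contextmanager
    def noregress(cache, key, timer=time.time, slack=1.1):
        start = timer()
        yield
        elapsed = timer() - start
        previous = cache.get(key, None)
        if previous is None or elapsed < previous:
            cache.set(key, elapsed)  # first run or improvement: record it
        elif elapsed > previous * slack:
            raise AssertionError("%s regressed: %.3f > %.3f"
                                 % (key, elapsed, previous))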

Otherwise both of the above ideas sound potentially useful to me.
In addition, running and recording timings for all test functions
of the last release and then comparing against my current trunk
would be useful - something like a report on "top 10 slowed down
test funcs, top 10 sped up test funcs" or so.  Anyway, I think that
the real work with such a plugin/approach lies in the reporting part.
As an initial use I'd rather be interested in this last top-10 bit as
a general starting point (you have to do almost nothing other than
re-run your test suite with different checkouts and get a nice
report).  Then I'd probably discover I want to do something more
fine-grained like the above ...
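
To sketch what I mean by the comparison/report side - assuming each run
dumps its per-test durations to a JSON file keyed by test id (how the
recording happens, hook or fixture, is left open; none of this exists
yet)::

    import json

    def top10_report(old_path, new_path):
        # {testid: seconds} mappings recorded from two runs
        with open(old_path) as f:
            old = json.load(f)
        with open(new_path) as f:
            new = json.load(f)
        # sort shared test ids by how much slower they got
        common = sorted(set(old) & set(new),
                        key=lambda tid: new[tid] - old[tid])
        print("top 10 sped up test funcs:")
        for tid in common[:10]:
            print("  %-60s %+.3fs" % (tid, new[tid] - old[tid]))
        print("top 10 slowed down test funcs:")
        for tid in reversed(common[-10:]):
            print("  %-60s %+.3fs" % (tid, new[tid] - old[tid]))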

> Both versions could probably take a timer argument too, re-using the
> pytest.timer_ms, pytest.timer_pystones etc. timers.
> 
> I'm slightly in favour of the context manager since it makes the timed
> section more explicit.  But it might possibly only be allowed once per
> test function (or have a "key" argument).
> 
> I'm very keen to hear other people's input on these ideas before
> implementing them.  Especially since they solve problems I don't
> really have personally.

Heh, maybe you should try to identify an area where you do care about
speed regressions :) And/or invite the people who wanted features
from pytest-timeout to comment and/or detail their thoughts on
what they actually want to have :)

best,
holger

P.S.: I think that many pytest users have subscribed to
Testing-in-Python rather than the py-dev mailing list because TIP has
been mentioned first on the web page for a year now.  So CCing TIP is
maybe not a bad idea, especially if the topic (performance measuring
etc.) is of general interest.