On 2016-03-07 23:54:49 +0800 (+0800), Duncan Thomas wrote:
> Complexity can be tricky to spot by hand, and expecting reviewers to
> get it right all of the time is not reasonable.
> 
> My ideal would be something that processes the commit and the Jenkins
> logs, extracts the timing info for any new tests, and, if those
> timings fall outside some (fairly tight) window, posts a comment to
> the review indicating that those tests deserve closer scrutiny. This
> does not remove reviewer judgement from the equation; it just
> provides a helpful prod that there's something to consider.
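
(To make the quoted proposal concrete: a rough, untested sketch of
such a checker. The names and the two-second window are invented, and
it assumes the per-test durations have already been extracted from the
job's subunit stream into a simple dict.)

    WINDOW_SECS = 2.0  # arbitrary placeholder for the "fairly tight" window

    def flag_slow_new_tests(timings, new_tests, window=WINDOW_SECS):
        """Return (test_id, seconds) pairs for new tests outside the window."""
        return sorted((tid, timings[tid]) for tid in new_tests
                      if timings.get(tid, 0.0) > window)

    # Invented example data standing in for parsed job output:
    timings = {'test_create_volume': 0.8, 'test_migrate_volume': 7.3}
    new_tests = {'test_migrate_volume'}
    for test_id, secs in flag_slow_new_tests(timings, new_tests):
        print('%s ran for %.1fs -- flagging for closer review'
              % (test_id, secs))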

Has any analysis been performed on the existing body of test timing
data so far (e.g., by querying the subunit2sql data used for
<URL: http://status.openstack.org/openstack-health/#/job/gate-cinder-python27?groupKey=project&resolutionKey=hour >)?
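
(For reference, the kind of query I have in mind is sketched below.
The table and column names follow the subunit2sql schema as I remember
it, and the connection string is a placeholder, so verify both against
the actual database before trusting the numbers.)

    import sqlalchemy

    # Placeholder credentials/host -- point this at a real
    # subunit2sql database.
    engine = sqlalchemy.create_engine(
        'mysql://query:query@localhost/subunit2sql')

    # Per-test average duration and spread across all recorded runs
    # (MySQL flavour).
    SQL = sqlalchemy.text("""
        SELECT t.test_id,
               COUNT(*) AS runs,
               AVG(TIMESTAMPDIFF(SECOND, tr.start_time,
                                 tr.stop_time)) AS avg_secs,
               STDDEV(TIMESTAMPDIFF(SECOND, tr.start_time,
                                    tr.stop_time)) AS stddev_secs
          FROM test_runs tr
          JOIN tests t ON t.id = tr.test_id
         WHERE tr.status = 'success'
         GROUP BY t.test_id
         ORDER BY stddev_secs DESC
    """)

    with engine.connect() as conn:
        for row in conn.execute(SQL):
            print('%s: %d runs, avg %.2fs, stddev %.2fs'
                  % (row.test_id, row.runs, row.avg_secs, row.stddev_secs))
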
I suspect you'll find that evaluating test durations on the basis of
individual runs is fraught with false positives, given the
significant performance variability of our CI workers across
different service providers. If timing really were predictably
consistent, then simply adding a timeout fixture at whatever duration
you determine is sane would be sufficient.
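
(A minimal sketch of what I mean, using the fixtures library already
in OpenStack's test requirements; the class name and the 10-second
budget are placeholders.)

    import fixtures
    import testtools

    class TimeBudgetedTestCase(testtools.TestCase):
        def setUp(self):
            super(TimeBudgetedTestCase, self).setUp()
            # gentle=True fails just this test when the budget expires
            # rather than killing the whole worker process.
            self.useFixture(fixtures.Timeout(10, gentle=True))

        def test_stays_within_budget(self):
            self.assertEqual(4, 2 + 2)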
-- 
Jeremy Stanley

