On 01/07/2016 07:24 PM, Jeff Darcy wrote:
I'd prefer a "defined level of effort" approach which *might* reduce the
benefit we derive from NetBSD testing but *definitely* keeps the cost
under control.
Did we identify the worst offenders among the spuriously failing tests?
We could ignore their output on NetBSD (this is how I started).
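For illustration only, a minimal sketch of what such a per-platform ignore
list could look like, written here as a hypothetical Python wrapper around
the test runner. The test file names are placeholders, and the prove
invocation is an assumption, not a description of Gluster's actual harness:

    import platform
    import subprocess

    # Hypothetical ignore list: real entries would come from triaging
    # the spurious-failure logs.  File names below are placeholders.
    NETBSD_IGNORED_TESTS = {
        "tests/nfs/flaky-example-1.t",
        "tests/nfs/flaky-example-2.t",
    }

    def test_passed(test_path):
        """Run one .t test; treat a failure as a pass when the test
        is on the NetBSD ignore list."""
        result = subprocess.run(["prove", test_path])
        if result.returncode == 0:
            return True
        if platform.system() == "NetBSD" and test_path in NETBSD_IGNORED_TESTS:
            print(f"IGNORED (known spurious on NetBSD): {test_path}")
            return True
        return False

The point of keeping the list in one place is that it stays visible and
auditable, rather than being buried in per-job Jenkins configuration.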
There do seem to be patterns - ironically, NFS-related tests seem to show up a 
lot - but I haven't studied this enough to give a detailed answer.  More to the 
point, is there really much difference between running tests all the time and 
ignoring certain ones, vs. running them nightly/weekly and triaging the results 
manually?  Besides resource consumption, I mean.  If we find something in a 
nightly/weekly test that closer inspection leads us to believe is a generic and 
serious problem, we should be able to create a Linux reproducer or even block 
merges by fiat.  Then the only difference is whether we default to allowing 
merges to occur despite NetBSD failures or default to blocking them.  Either 
way we can make exceptions.
I agree with your point. If we are ready to make exceptions anyway, we might as well not block every patch. As Jeff suggested, triaging the nightly/weekly results manually and making any serious issue a blocker should suffice. We have followed the current model for a long time and have constantly run into issues because of it. I see no harm in trying the nightly/weekly approach (even temporarily, if we have to) and seeing how it works out.
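To make the proposal concrete, here is a hypothetical sketch (Python) of a
cron-driven nightly run that records failures for a human to triage the
next morning instead of gating merges. The log directory, the cron path,
and the run-tests.sh entry point are all assumptions:

    #!/usr/bin/env python3
    """Hypothetical nightly NetBSD run for manual triage.
    Invoked from cron, e.g.: 0 2 * * * /opt/ci/nightly_netbsd.py
    (schedule and path assumed)."""
    import datetime
    import pathlib
    import subprocess

    LOG_DIR = pathlib.Path("/var/log/ci-netbsd")  # assumed location

    def main():
        LOG_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().strftime("%Y%m%d")
        log_path = LOG_DIR / f"run-{stamp}.log"
        with log_path.open("w") as log:
            # Entry point assumed; adjust to the real harness invocation.
            result = subprocess.run(["./run-tests.sh"],
                                    stdout=log, stderr=subprocess.STDOUT)
        if result.returncode != 0:
            # Leave a marker for manual triage; merges are not blocked
            # automatically -- serious issues get escalated by hand.
            (LOG_DIR / f"TRIAGE-{stamp}.txt").write_text(
                f"suite failed (exit {result.returncode}); see {log_path}\n")

    if __name__ == "__main__":
        main()

Under this flow, a failure only becomes a merge blocker when triage
confirms it is generic and serious, which matches the "exceptions either
way" point above.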