--
- Milan

On Aug 21, 2014, at 10:12 , Chris AtLee <cat...@mozilla.com> wrote:

> On 17:37, Wed, 20 Aug, Jonas Sicking wrote:
>> On Wed, Aug 20, 2014 at 4:24 PM, Jeff Gilbert <jgilb...@mozilla.com> wrote:
>>> I have been asked in the past if we really need to run WebGL tests on 
>>> Android, if they have coverage on Desktop platforms.
>>> And then again later, why B2G if we have Android.
>>> 
>>> There seems to be enough belief in test-once-run-everywhere that I feel the 
>>> need to *firmly* establish that this is not acceptable, at least for the 
>>> code I work with.
>>> I'm happy I'm not alone in this.
>> 
>> I'm a firm believer that we ultimately need to run basically all
>> combinations of tests and platforms before allowing code to reach
>> mozilla-central. There's lots of platform specific code paths, and
>> it's hard to track which tests trigger them, and which don't.
> 
> I think we can agree on this. However, not running all tests on all platforms 
> per push on mozilla-inbound (or other branch) doesn't mean that they won't be 
> run on mozilla-central, or even on mozilla-inbound prior to merging.
> 
> I'm a firm believer that running all tests for all platforms for all pushes 
> is a waste of our infrastructure and human resources.
> 
> I think the gap we need to figure out how to fill is between getting per-push 
> efficiency and full test coverage prior to merging.

The cost of not catching a problem with a test and letting the code land is 
huge.  I only know this first-hand for the graphics team, but to Ehsan’s and 
Jonas’ point, I’m sure it’s not specific to graphics.  Testing is a 
preventative cost, while fixing issues that sneak through is a treatment cost, 
so the two are sometimes difficult to compare directly.  We are not alone in 
cutting the preventative cost first, but it’s a big mistake to do so.

Now, if we need to save some electricity or cash, I understand that as well; 
in the end that translates into cost to the company just as people’s time 
does.  If skipping every n-th debug run helps, sure, let’s try it.  But we 
have to make sure that a failure on a debug test run triggers us to go back 
and re-run the skipped ones, so that we don’t leave any gaps in the tests 
where something may have gone wrong.
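To make the skip-and-backfill idea concrete, here is a minimal sketch in Python. All names here (schedule_pushes, backfill_on_failure, the skip interval N) are invented for illustration; the real scheduling logic would live in the build automation, not in a standalone script like this.

```python
# Hypothetical sketch of "skip every n-th debug run, re-run skipped pushes
# on failure". Push IDs are modeled as plain integers; N = 3 is an assumed
# skip interval, not a value from the thread.

N = 3

def schedule_pushes(push_ids, n=N):
    """Split pushes into (run, skipped): every n-th push skips debug tests."""
    run, skipped = [], []
    for i, push in enumerate(push_ids, start=1):
        (skipped if i % n == 0 else run).append(push)
    return run, skipped

def backfill_on_failure(failed_push, skipped):
    """On a debug failure, re-run every skipped push at or before the
    failing one, so the regression range has no coverage gaps."""
    return [p for p in skipped if p <= failed_push]
```

For example, with pushes 1 through 6 and n=3, pushes 3 and 6 skip their debug runs; if push 5 later fails a debug test, push 3 is backfilled so the regression window stays fully tested.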


> 
>> It would however be really cool if we were able to pull data on which
>> tests tend to fail in a way that affects all platforms, and which ones
>> tend to fail on one platform only. If we combine this with the ability
>> of having tbpl (or treeherder) "fill in the blanks" whenever a test
>> fails, it seems like we could run many of our tests on only one
>> platform for most checkins to mozilla-inbound.
> 
> There are dozens of really interesting approaches we could take here.
> Skipping every nth debug test run is one of the simplest, and I hope we can 
> learn a lot from the experiment.
> 
> Cheers,
> Chris
> _______________________________________________
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
