A lot of pull requests are now passing so I'm re-opening the tree. I'll watch Travis to see if it's more red than usual and re-close if that's the case.

I think it was just a mix of known intermittents + npm failures + no one restarting jobs. I usually re-run a lot of tests that fail because of intermittents, so maybe it just looked more red than usual because I was sleeping.

By the way, I cannot re-run builds in the Travis UI anymore. Has anyone fiddled with the Travis/GitHub integration?

On 28/11/13 07:13, Dale Harvey wrote:
Travis is still red; however, it's late here and I need to go to bed. I have
started bisecting, so I should get to the bottom of this in the morning.
Apologies for the delay (luckily it's the holidays, for the US at least :)

Again, if anyone else in a different timezone could pick this up, I am sure
everyone would appreciate the work to reopen the tree. Apologies for the
inconvenience (but yay for sticking to process and working towards a permagreen
tree).


On 27 November 2013 21:48, Dale Harvey <d...@arandomurl.com> wrote:

I just triggered a rerun, seeing a crash in
https://travis-ci.org/mozilla-b2g/gaia/jobs/14610294

test_launch_everything_me_app 
(test_everythingme_launch_app.TestEverythingMeLaunchApp) ... ERROR

Don't think it's solved yet.


On 27 November 2013 21:19, Jonathan Griffin <jgrif...@mozilla.com> wrote:

  Gaia tests on TBPL haven't been showing any increase in intermittents.

Travis is really a bad UI for sheriffing these kinds of problems, and
we're not dumping gecko.log from Travis, which doesn't help either. We
should dump gecko.log at the end of the run so we can see what sorts of
errors the build is sending to stdout/stderr.
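
Something along these lines in the harness teardown would be enough (just a
sketch, not tested; the gecko.log name/location is a guess and may differ in
our Travis setup):

import os

def dump_gecko_log(log_path="gecko.log"):
    # Print gecko.log to stdout so it ends up in the Travis build log.
    if not os.path.isfile(log_path):
        print("no gecko.log found at %s" % log_path)
        return
    print("=== begin gecko.log ===")
    with open(log_path) as f:
        print(f.read())
    print("=== end gecko.log ===")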

Some digging shows a few details. We haven't seen the apparent crash in
the last few hours; all the failures more recent than about 7 hours ago are
single-test failures, mostly of tests that we're not running in TBPL
(yet). These are probably flaky tests, and we should certainly be using
the same manifest in TBPL as we do in Travis, so we can use TBPL to catch
flaky tests.
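
For example, a single shared manifest could carry the skip/disable
annotations for both harnesses; a hypothetical entry (exact keys depend on
what our manifestparser setup supports):

[test_everythingme_launch_app.py]
disabled = intermittent crash on Travis, bug 943980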

The crash occurred several times when the tests were using the build
http://ftp.mozilla.org/pub/mozilla.org/b2g/tinderbox-builds/mozilla-central-linux64_gecko/1385546575/b2g-28.0a1.multi.linux-x86_64.tar.bz2,
and a few earlier builds, but it hasn't been seen with later builds... maybe
this problem has already resolved itself?

Jonathan


On 11/27/2013 12:58 PM, Dale Harvey wrote:

Mentioning Marionette was just a stab in the dark; I was more saying that it
seems like something handwavey and infrastructure-y rather than specific
component failures. Something crashy in the b2g code path seems feasible.

If anyone better than me at reading TBPL could see if there was any
point in the last day at which Gaia tests got less stable, that would be
useful information.



On 27 November 2013 20:47, Jonathan Griffin <jgrif...@mozilla.com> wrote:

  Yes, that landed on Saturday, so if the problems you're seeing date
back that far, that's a potential culprit.

Jonathan


On 11/27/2013 11:42 AM, Gareth Aye wrote:

Either way, this is not necessarily a Marionette server issue.


On Wed, Nov 27, 2013 at 2:42 PM, Gareth Aye <gareth....@gmail.com> wrote:

http://hg.mozilla.org/mozilla-central/rev/1fd1596ffb9f also?


On Wed, Nov 27, 2013 at 2:39 PM, Jonathan Griffin <jgrif...@mozilla.com> wrote:

The only Marionette change to have gone in recently is:

http://hg.mozilla.org/mozilla-central/rev/9cc147e1d222

Jonathan


On 11/27/2013 11:28 AM, Gareth Aye wrote:

Also, FWIW, I don't understand how this could have landed if the Python
tests are running on TBPL...

On Wed, Nov 27, 2013 at 2:16 PM, Gareth Aye <gareth....@gmail.com> wrote:

  Zac,

I'm a bit busy with other work right now. Would you mind bisecting gecko
to find the culprit patch for us?
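
Roughly, against mozilla-central, something like the following (the revisions
are placeholders; rebuild and re-run the failing Gaia tests each time hg
updates to a new candidate, then mark it):

hg bisect --reset
hg bisect --good <last-known-good-rev>
hg bisect --bad <first-bad-rev>
# build, run the failing tests, then mark the current revision:
hg bisect --good    (or: hg bisect --bad)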


On Wed, Nov 27, 2013 at 2:10 PM, Zac Campbell <zcampb...@mozilla.com> wrote:

  On 27/11/13 18:29, Dale Harvey wrote:

  Tracking bug is:
https://bugzilla.mozilla.org/show_bug.cgi?id=943980

Been seeing test failures on Travis for the last 6 hours
(https://travis-ci.org/mozilla-b2g/gaia/builds), so we have closed the tree.

The failures vary wildly and all seem to be intermittent, so this is likely
due to a new b2g build / Marionette changes. Currently debugging (although
I will be AFK for ~90 minutes), so if anyone else wants to help debug, that
would be great :)

Will reopen as soon as we get consistently passing builds.

Cheers
Dale

WRT the gaia-ui-tests failures:
"Connection Refused" does look like the whole b2g/gecko process is crashing.
The console trace is similar to when we run these tests on device and the
device crashes.

Also the one about:
TypeError: app is undefined ; stacktrace: execute_async_script
@gaia_test.py, line 74
inline javascript, line 251
src: " let result = {frame: app.frame.firstChild,"

I have witnessed this on device, and it happens when b2g fails to start up.
The device was stuck on the splash screen, but it was conscious enough to be
stopped and restarted (and thus for the test suite to continue).


There was one test failing intermittently earlier in the day:
test_edit_contact.py
I've prepared a fix so it doesn't stop you from re-opening the tree:
https://github.com/mozilla-b2g/gaia/pull/14136


Good luck,
Zac



--
Best,
Gareth







_______________________________________________
dev-b2g mailing list
dev-b2g@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-b2g
