At Mozilla we regularly change our build and test configurations in CI
based on business and market needs (new platforms, OS upgrades, cost
savings, etc.).
We've recently documented this process to provide more clarity about
timelines and responsibilities when one of these changes happens:
This was announced last night:
https://groups.google.com/forum/#!topic/mozilla.dev.platform/k-irJtmCcqg
On Thu, Apr 25, 2019 at 5:34 PM Botond Ballo wrote:
> On Thu, Apr 25, 2019 at 4:58 PM Bobby Holley
> wrote:
> > we won't ship Fennec past ESR68
>
> That's news to me. Was this announced
If others disagree or have other thoughts, it would be
good to know.
On Wed, Apr 24, 2019 at 1:39 PM Bobby Holley wrote:
> Thanks Mike!
>
> So Fennec is the last remaining non-e10s configuration we ship to users.
> Given that Fennec test coverage is somewhat incomplete, we probably want to
> keep running desktop 1proc tests until Fennec EOL.
Here is where we initially turned on non-e10s tests for win7:
https://bugzilla.mozilla.org/show_bug.cgi?id=1391371
and then moved to linux32:
https://bugzilla.mozilla.org/show_bug.cgi?id=1433276
Currently mochitest-chrome and mochitest-a11y run as 1proc in-tree; these
run this way as they do not
On Thursday, April 11, 2019 at 10:08:27 AM UTC-4, William Lachance wrote:
> On 2019-04-09 11:00 a.m., Gian-Carlo Pascutto wrote:
> > On 5/04/19 15:35, jma...@mozilla.com wrote:
> >> Currently linux32 makes up about 0.77% of our total users. This is
> >> still 1M+ users in any given week.
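As a rough sanity check on the figures quoted above (a sketch; the 0.77% share and the "1M+ users" number are taken from the excerpt, and the implied total is only a back-of-the-envelope estimate):

```python
# Back-of-the-envelope check: if linux32 is ~0.77% of users, and that
# slice is about 1M weekly users, what total user base does that imply?
linux32_share = 0.0077          # ~0.77% of total users (figure from the thread)
linux32_weekly_users = 1_000_000  # "1M+ users in any given week"

implied_total = linux32_weekly_users / linux32_share
print(f"Implied total weekly users: ~{implied_total / 1e6:.0f}M")
```

The two figures are consistent with a total on the order of 130M weekly users.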
Thanks everyone for your comments.
If we were to run linux32 tests in full on mozilla-central only, that would
result in about half of the load we see from linux32 today and about one
backfill a week (given that most unique linux32 regressions result in test
disabling). That alone would be a good
Thanks for asking, Jan. I think 16% is the maximum we can save. In talking
with a few more people, I think a middle-of-the-road proposal would be to:
Turn off linux64/windows7/windows10 opt builds+tests on autoland and
mozilla-inbound. Leave them on for mozilla-central and try.
What this does
You could edit the taskcluster/ci/test/tests.yml file and add
|linux64/debug: both| in the places where we have |windows7-32/debug: both|, as
seen here:
seen here:
http://searchfox.org/mozilla-central/source/taskcluster/ci/test/tests.yml#138
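For illustration, the suggested edit might look roughly like this. The surrounding keys are a sketch based on the general shape of tests.yml at the time (the suite name and exact nesting are assumptions, not the file's actual contents); only the |linux64/debug: both| line comes from the suggestion above:

```yaml
# Illustrative sketch only: key names approximate taskcluster/ci/test/tests.yml,
# not the exact file contents. "both" requests e10s and non-e10s (1proc) runs.
mochitest-browser-chrome:
    e10s:
        by-test-platform:
            windows7-32/debug: both
            linux64/debug: both    # the line proposed above: also run non-e10s on linux64 debug
            default: true
```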
If there are concerns with windows build times, please ask for those to
Thanks everyone for chiming in here. I see this isn't as simple as a
binary decision, and to simplify things I think turning on all the non-e10s
tests that were running for windows7-debug would give us reasonable
coverage and ensure that users on our most popular OS (and the focus for 57)
have a stable
None of the above-mentioned tests run on Android (well,
mochitest-media does to some degree). Is four months unreasonable for fixing
the related tests that do not run in e10s? Is there another time frame that
seems more reasonable?
On Tue, Aug 15, 2017 at 4:34 PM, Ben Kelly
These are maintained by developers, and as perf sheriffs we have been
working towards turning off what is too noisy and filing bugs for what is a
real regression. That requires a bit more work on our part and more
documentation, including ownership of each test that reports.
On Sun, Jul 16, 2017
Good suggestion here; I have seen so many cases where a simple
fix/disabled/unknown/needswork tag just does not describe it. Let me work on a
few new tags, given that we have 248 bugs to date.
I am thinking maybe [stockwell turnedoff], for when the job is turned off; we
could also ensure one of the last
Thanks for pointing that out. In some cases we have fixed tests that are
just timing out; in a few cases we disable because the test typically runs
much faster (i.e. <15 seconds) and is hanging/timing out. In other cases
extending the timeout doesn't help (i.e. a hang/timeout).
Please feel free
I wonder if we could make a single link in OrangeFactor that would give you
the range of TEST-UNEXPECTED-FAIL messages to help with this. I filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1339937 to track this; please
do offer suggestions/use cases on that specific bug.
On Tue, Feb 14, 2017
Thanks for mentioning this, Ehsan. I had a few other people express
concerns about the time in general, so this has been changed.
This meeting and future meetings (every other week) will be on Tuesdays at
8:30 am PDT. This means that the first meeting will be Tuesday, October
11th, 2016:
We have 32-bit Windows builds; are there specific concerns you have about
the perf impact of SSE2 and linux32 builds?
On Fri, May 20, 2016 at 7:14 AM, Mike Hommey wrote:
> On Thu, Sep 24, 2015 at 06:40:08PM -0700, jma...@mozilla.com wrote:
> > Our infrastructure at Mozilla
This is a great point, and I still have no idea what caused the Linux 32/64
machines to change on July 30th. It appeared to be a gradual rollout
(which indicates a machine issue that was picked up on reboot or something
similar). For running Talos tests we pin to a specific revision in the
talos
Great question, Brian!
I believe you are asking if we would generate alerts based on individual
tests instead of the summary of tests. The short answer is no. In looking
at reporting the subtest results as new alerts, we found there was a lot of
noise (especially in svgx, tp5o, dromaeo, v8) as
As one of the primary people sheriffing alerts, I have found that we get
decisions made much faster as a result of this policy. I would be
interested to hear if others have differing opinions as I could be seeing
this with tunnel vision.
-Joel
On Fri, Mar 27, 2015 at 4:34 PM, Lawrence Mandel
I wanted to post an update. We are very close to achieving all-green tests
on browser-chrome (tracking in bug 1057512); there are three bugs remaining:
* Bug 525284 (https://bugzilla.mozilla.org/show_bug.cgi?id=525284) -
browser_bug400731.js is fragile, not always passing
** patch for review, should be
Some thoughts on the subject:
I would argue against running performance tests inside of mochitest. The main
reason is that mochitest has a lot of profile configuration for testing, as
well as many other tests bundled inside the same browser session. For a
standalone metric unrelated to a user