Re: Shippable Builds Incoming...

2019-03-26 Thread Justin Wood
Yes, it does. I'm currently looking at a good way to fix mozregression
ASAP; it has been partially broken since the opt->pgo work mentioned
above.

This is tracked in bug 1532412
<https://bugzilla.mozilla.org/show_bug.cgi?id=1532412>, and whatever
solution we land there should translate to shippable builds and make this
easy.

I'm hoping to touch base with :wlach soon so we can figure out a good
solution for this code.

~Justin Wood (Callek)

On Tue, Mar 26, 2019 at 3:05 PM Kartikaya Gupta  wrote:

> Will the change to taskcluster indexes affect mozregression?
>
> On Tue, Mar 26, 2019 at 2:56 PM Justin Wood  wrote:
> >
> > Hey Everyone,
> >
> > tl;dr I landed  "shippable builds" [1] to autoland, and I'll assume it
> > sticks. This replaces Nightly and PGO for most job types for desktop
> jobs.
> > Expect to see tests and builds in treeherder reflecting this, and no
> longer
> > seeing most PGO platforms or Nightly platforms available for your pushes.
> >
> > Longer story is that we are replacing all nightlies and current pgo
> builds
> > with 'shippable' variants. These combine what we do for PGO and Nightlies
> > into one job type, with the main goal of consolidating build types, and
> > simplifying the build configurations. We will not be changing Fennec
> build
> types at this time.
> >
> > This work mirrors in part what we did with opt->pgo tests [2] to keep our
> > overall test load down and does move osx on push to shippable (previously
> > was an opt build) and linux32 testing to shippable (was previously also
> on
> > an opt variant).
> >
> > This will obsolete many of the taskcluster routes you have come to
> expect,
> and instead use shippable indexes. We have made some efforts to
> preserve
> old indexes where possible, but if you are using taskcluster indexes in
> any
> > of your own code, I'd love to know where and for what purpose, so we can
> > document it going forward.
> >
> > Future work this unlocks is the ability to do "Nightly Promotion" which
> > should reduce the cost of Nightly CI and greatly speed up the turnaround
> > time for Nightly builds on mozilla-central. And once we are promoting
> > builds from the mozilla-central push to Nightly users, then we will
> truly
> > be testing against what we ship, so a net win for our code quality as
> well.
> >
> > There will also be some future work to cleanup some of the code in the
> > taskgraph code itself, to remove confusing code paths that mixed
> 'nightly'
> > build types along with the 'nightly' cadence of a released product. As
> well
> > as work to remove the temporary transitional index routes I tried to add.
> >
> > Feel free to find me in IRC #ci or on slack at #firefox-ci if you have
> any
> > questions or concerns.
> >
> > Thank You,
> > ~Justin Wood (Callek)
> >
> > [1]
> >
> https://groups.google.com/d/msg/mozilla.dev.planning/JomJmzGOGMY/vytPViZBDgAJ
> > [2]
> >
> https://groups.google.com/d/msg/mozilla.dev.platform/0dYGajwXCBc/4cpYrUMyBgAJ
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Shippable Builds Incoming...

2019-03-26 Thread Justin Wood
Hey Everyone,

tl;dr I landed "shippable builds" [1] on autoland, and I'll assume it
sticks. This replaces Nightly and PGO for most desktop job types. Expect to
see tests and builds on Treeherder reflecting this, and to no longer see
most PGO or Nightly platforms available for your pushes.

The longer story is that we are replacing all Nightly and current PGO
builds with 'shippable' variants. These combine what we do for PGO and
Nightlies into one job type, with the main goals of consolidating build
types and simplifying the build configurations. We will not be changing
Fennec build types at this time.

This work mirrors in part what we did with the opt->pgo tests [2] to keep
our overall test load down, and it moves OS X on-push testing to shippable
(previously run on an opt build) and Linux32 testing to shippable (also
previously run on an opt variant).

This will obsolete many of the taskcluster routes you have come to expect
and use shippable indexes instead. We have made some effort to preserve old
indexes where possible, but if you are using taskcluster indexes in any of
your own code, I'd love to know where and for what purpose, so we can
document it going forward.
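
For anyone scripting against these, here is a minimal Python sketch of how
an index lookup works. It is illustrative only: the namespace and artifact
path below are made-up examples rather than the documented shippable routes,
and the hostnames are the index/queue endpoints in use at the time of this
post.

import requests

INDEX = "https://index.taskcluster.net/v1/task"
QUEUE = "https://queue.taskcluster.net/v1/task"

def find_build_artifact(namespace, artifact):
    # The index maps a namespace to the most recently indexed task.
    task_id = requests.get(f"{INDEX}/{namespace}").json()["taskId"]
    # The queue serves (or redirects to) that task's artifacts.
    return f"{QUEUE}/{task_id}/artifacts/{artifact}"

# Hypothetical shippable namespace; check the in-tree routes for real names.
print(find_build_artifact(
    "gecko.v2.mozilla-central.shippable.latest.firefox.linux64-opt",
    "public/build/target.tar.bz2",
))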

Future work this unlocks is the ability to do "Nightly Promotion", which
should reduce the cost of Nightly CI and greatly speed up the turnaround
time for Nightly builds on mozilla-central. Once we are promoting builds
from the mozilla-central push to Nightly users, we will truly be testing
what we ship, a net win for our code quality as well.

There will also be some future work to clean up the taskgraph code itself,
to remove confusing code paths that mixed the 'nightly' build type with the
'nightly' cadence of a released product, as well as work to remove the
temporary transitional index routes I tried to add.

Feel free to find me on IRC in #ci or on Slack in #firefox-ci if you have
any questions or concerns.

Thank You,
~Justin Wood (Callek)

[1]
https://groups.google.com/d/msg/mozilla.dev.planning/JomJmzGOGMY/vytPViZBDgAJ
[2]
https://groups.google.com/d/msg/mozilla.dev.platform/0dYGajwXCBc/4cpYrUMyBgAJ
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Desupport: Fennec Automated Updates (when Sideload Installed)

2019-02-05 Thread Justin Wood
There is no in-product support for What's New pages, or for any sort of
details URLs, which we can set from Balrog for Desktop.

So I think we're left with outreach for these cases. Generally speaking,
Nightly users are a small subset of our users, and the uptake counts on
Nightly have been quite low as well (that is, uptake among users who issue
Balrog queries). I think the fact that only approximately half of all
queries in the timeframe even checked Balrog *once* using the current trunk
version is pretty telling, and coupled with the fact that this is Nightly
and not Release/Beta, I'm less concerned.

I haven't heard a reason to hold off on this from any of the product
owners, and I can revert the code if there is a need for product support of
such a thing. In that case I can advise on the Balrog side, but it would
need a few coordinated efforts (some website/translation work for the
notice, some product work to support the new param and to notify users in
some way, etc.).

I personally feel the effort is better spent on improving tooling going
forward, and on informing users via a SUMO article (if enough people ask
about it) and via commentary in bugs. (I also added an FAQ to the RFC on
how to move forward if you did sideload, to try to mitigate the problems
this could cause.)

I'm open to other ways that I can facilitate user notification though.

~Justin Wood (Callek)

On Tue, Feb 5, 2019 at 3:32 PM Jeff Gilbert  wrote:

> Did we figure out how to tell users they're not going to get updates
> anymore? I don't see a resolution in that PR.
>
> This used to be the only way to get Nightly, so I'd expect long time
> users to be surprised. I only stopped using the downloaded APK last
> year, I think.
>
> On Tue, Feb 5, 2019 at 10:04 AM Justin Wood  wrote:
> >
> > To follow up here, this is now landed on autoland, once this merges
> Fennec
> > Nightly will no longer find new updates when it was installed outside of
> > Google Play.
> >
> > See linked document from thread for further insight.
> >
> > ~Justin Wood (Callek)
> >
> > On Mon, Jan 28, 2019 at 3:36 PM Justin Wood  wrote:
> >
> > > Hello,
> > >
> > > I am intending to stop offering Fennec updates when Fennec itself is
> > > installed manually instead of via the Google Play store.
> > >
> > > More details are at
> https://github.com/mozilla-releng/releng-rfcs/pull/13
> > > or in rendered form at
> > >
> https://github.com/Callek/releng-rfcs/blob/no_fennec_balrog/rfcs/0013-disable-fennec-balrog.md
> > >
> > > Please contain (most) comments to the markdown on github.
> > >
> > > I intend to implement this on autoland/central as of Feb 5, 2019 and
> allow
> > > it to ride the trains to release.
> > >
> > > Thank You,
> > > ~Justin Wood (Callek)
> > > Release Engineer
> > >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to Desupport: Fennec Automated Updates (when Sideload Installed)

2019-02-05 Thread Justin Wood
To follow up here: this has now landed on autoland. Once it merges, Fennec
Nightly will no longer find new updates when it was installed from outside
Google Play.

See the linked document in the thread for further insight.

~Justin Wood (Callek)

On Mon, Jan 28, 2019 at 3:36 PM Justin Wood  wrote:

> Hello,
>
> I am intending to stop offering Fennec updates when Fennec itself is
> installed manually instead of via the Google Play store.
>
> More details are at https://github.com/mozilla-releng/releng-rfcs/pull/13
> or in rendered form at
> https://github.com/Callek/releng-rfcs/blob/no_fennec_balrog/rfcs/0013-disable-fennec-balrog.md
>
> Please contain (most) comments to the markdown on github.
>
> I intend to implement this on autoland/central as of Feb 5, 2019 and allow
> it to ride the trains to release.
>
> Thank You,
> ~Justin Wood (Callek)
> Release Engineer
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Fwd: Intent to Desupport: Fennec Automated Updates (when Sideload Installed)

2019-01-28 Thread Justin Wood
CC'ing dev-platform, since my message to dev-planning got blackholed.

-- Forwarded message -
From: Justin Wood 
Date: Mon, Jan 28, 2019 at 3:36 PM
Subject: Intent to Desupport: Fennec Automated Updates (when Sideload
Installed)
To: planning , firefox-ci <
firefox...@mozilla.com>, release-engineering <
release-engineer...@lists.mozilla.org>, release ,
release-drivers 


Hello,

I am intending to stop offering Fennec updates when Fennec itself is
installed manually instead of via the Google Play store.

More details are at https://github.com/mozilla-releng/releng-rfcs/pull/13
or in rendered form at
https://github.com/Callek/releng-rfcs/blob/no_fennec_balrog/rfcs/0013-disable-fennec-balrog.md

Please confine (most) comments to the markdown on GitHub.

I intend to implement this on autoland/central as of Feb 5, 2019 and allow
it to ride the trains to release.

Thank You,
~Justin Wood (Callek)
Release Engineer
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-03 Thread Justin Wood
I don't think it's much of a burden, but when we have code complexity it
can add up, and there's always the question of "how useful is this
really?" Even if the maintenance burden is low, it is still a tradeoff. I'm
just saying I suspect it's possible to do this, but I'm not sure it is
useful in the end (and I'm not looking to make the call on that).

~Justin Wood (Callek)

On Thu, Jan 3, 2019 at 1:22 PM Steve Fink  wrote:

> On 01/03/2019 10:07 AM, Justin Wood wrote:
> > on the specific proposal front I can envision us allowing tests to be run
> > on non-pgo builds via triggers (so never by default, but always
> > backfillable/selectable) should someone need to try and bisect an issue
> > that is discovered... I'm not sure if the code maintenance burden is
> worth
> > it for the benefit but I don't hold a strong opinion there.
>
> Is it a lot of maintenance? We have this for some other jobs
> (linux64-shell-haz is the one I'm most familiar with, but it's a
> standalone job so doesn't have non-toolchain graph dependencies). I get
> quite a bit of value out of the resulting faster hack-try-debug cycles;
> I would imagine it to be at least as useful to have a turnaround time of
> 1 hour for opt vs 2 hours for pgo.
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposal to adjust testing to run on PGO builds only and not test on OPT builds

2019-01-03 Thread Justin Wood
I should say that the shippable build proposal (
https://groups.google.com/d/msg/mozilla.dev.planning/JomJmzGOGMY/vytPViZBDgAJ)
doesn't seem to intersect negatively with this.

And in fact I think these two proposals complement each other quite nicely.

Additionally I have no concerns over this work taking place prior to my
work being complete.

On the specific proposal front, I can envision us allowing tests to be run
on non-PGO builds via triggers (so never by default, but always
backfillable/selectable) should someone need to bisect an issue that is
discovered... I'm not sure the code maintenance burden is worth the
benefit, but I don't hold a strong opinion there.

~Justin Wood (Callek)

On Thu, Jan 3, 2019 at 11:44 AM Andrew Halberstadt  wrote:

> CC Callek
>
> How will this interact with the "shippable builds" project that Callek
> posted
> about awhile back? My understanding is there's a high probability PGO is
> going away. Would it make sense to wait for that to project to wrap up?
>
> -Andrew
>
> On Thu, Jan 3, 2019 at 11:20 AM jmaher  wrote:
>
>> I would like to propose that we do not run tests on linux64-opt,
>> windows7-opt, and windows10-opt.
>>
>> Why am I proposing this:
>> 1) All test regressions that were found on trunk are mostly on debug, and
>> in fewer cases on PGO.  There are no unique regressions found in the last 6
>> months (all the data I looked at) that are exclusive to OPT builds.
>> 2) On mozilla-beta, mozilla-release, and ESR, we only build/test PGO
>> builds; we do not run tests on plain OPT builds.
>> 3) This will reduce the jobs (about 16%) we run, which in turn reduces
>> cpu time, money spent, turnaround time, intermittents, complexity of the
>> taskgraph.
>> 4) PGO builds are very similar to OPT builds, but we add flags to
>> generate profile data and small adjustments to build scripts behind MOZ_PGO
>> flag in-tree, then we launch the browser, collect data, and repack our
>> binaries for faster performance.
>> 5) We ship PGO builds, not OPT builds
>>
>> What are the risks associated with this?
>> 1) try server build times will increase as we will be testing on PGO
>> instead of OPT
>> 2) we could miss a regression that only shows up on OPT, but if we only
>> ship PGO and once we leave central we do not build OPT, this is a very low
>> risk.
>>
>> I would like to hear any concerns you might have on this or other areas
>> which I have overlooked.  Assuming there are no risks which block this, I
>> would like to have a decision by January 11th, and make the adjustments on
>> January 28th when Firefox 67 is on trunk.
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Localized Repacks now visible on (most) pushes

2018-05-30 Thread Justin Wood
Hello Everyone,

tl;dr You should now see "L10n" jobs on Treeherder for many pushes; these
are tier 1, and if they break they would also break Nightly, so your patch
would need to be backed out.

As many of you know, especially the old guard [1] here, Localized Repacks
have frequently been known to fail in weird and interesting ways on Nightly
and Beta builds.

Throughout the move to taskcluster we have been reducing the differences in
automation, so that what we ship to release users is produced with the same
process as what we ship to Nightly users. We have now achieved that parity,
having finished our migration to taskcluster [2].

One straggler was our implementation of L10n builds on try [3][4], which
had begun to fail frequently when users add or remove any localized file
(.dtd or .ftl). Similarly, we have always lacked an easy way to vet a
change to central/inbound/autoland for "will this break l10n?"

With the work I've now done, we have aligned this "try" l10n job with what
we perform in the Nightly and Release Promotion process, and we have given
ourselves the ability to run these jobs on every push.

Implementation details:
* For now these still run only when a subset of files changes [5], but that
list can be expanded easily, or we can rip it out and instead *always* run
these jobs (see the sketch after this list).
* These jobs are performed using live L10n repositories, but only a small
set of our total localizations, specifically: en-CA, he, it, ja, ja-JP-mac
[6].
* As part of this work, we needed to specify the STUB Installer
differently; if we need it on any new channels/builds, we need to specify
it in the build taskcluster kind, like [7]. We have a check in configure
that errors if it's not set correctly [8].
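
As an illustration of the files-changed idea in the first bullet, here is a
minimal Python sketch. This is not the in-tree taskgraph code, and the
patterns are made up; the real list lives in the l10n kind.yml [5].

from fnmatch import fnmatch

L10N_PATTERNS = [
    "**/locales/**",   # made-up patterns, for illustration only
    "**/*.ftl",
    "**/*.dtd",
]

def should_run_l10n(changed_files):
    # Schedule the L10n job only when a push touches localization files.
    return any(
        fnmatch(path, pattern)
        for path in changed_files
        for pattern in L10N_PATTERNS
    )

print(should_run_l10n(["browser/locales/en-US/browser.ftl"]))  # True
print(should_run_l10n(["dom/base/nsDocument.cpp"]))            # False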

If you have any questions, feel free to reach out to me/releng.
~Justin Wood (Callek)

[1] - https://en.wikipedia.org/wiki/Old_Guard
[2] - https://atlee.ca/blog/posts/migration-status-3.html
[3] - https://bugzilla.mozilla.org/show_bug.cgi?id=848284
[4] - https://bugzilla.mozilla.org/show_bug.cgi?id=1458378
[5] -
https://hg.mozilla.org/integration/mozilla-inbound/annotate/d7472bf663bd/taskcluster/ci/l10n/kind.yml#l177
[6] - https://hg.mozilla.org/integration/mozilla-inbound/rev/d2f587986a3b
[7] -
https://hg.mozilla.org/integration/mozilla-inbound/diff/e250856b4688/taskcluster/ci/build/windows.yml
[8] - https://hg.mozilla.org/integration/mozilla-inbound/rev/a562a809e8dc
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


INTENT TO DEPRECATE (taskcluster l10n routing)

2017-12-01 Thread Justin Wood
Hey Everyone,

tl;dr if you don't download nightly l10n repacks via taskcluster index
routes, this does not affect you.

Up until recently you could only find nightly l10n repacks with the
following routes:

* {index}.gecko.v2.{project}.revision.{head_rev}.{build_product}-l10n.{build_name}-{build_type}.{locale}
* {index}.gecko.v2.{project}.pushdate.{year}.{month}.{day}.{pushdate}.{build_product}-l10n.{build_name}-{build_type}.{locale}
* {index}.gecko.v2.{project}.latest.{build_product}-l10n.{build_name}-{build_type}.{locale}

Recently I have updated the routing to match that of regular Nightlies;
specifically, one such route is:

gecko.v2.mozilla-central.nightly.revision.a21f4e2ce5186e2dc9ee411b07e9348866b4ef30.firefox-l10n.linux64-opt

This deprecation is in preparation for building l10n repacks on (nearly)
every code checkin, rather than just on nightlies.
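
To make the mapping concrete, here is a small Python sketch that builds the
deprecated revision-based namespace and the new nightly-style namespace from
the same parameters. The values are made-up examples; only the route shapes
come from this post, and the new-style route follows the example above
(which carries no trailing locale).

def old_l10n_route(project, head_rev, product, build_name, build_type, locale):
    # Deprecated shape, minus the {index} prefix.
    return (f"gecko.v2.{project}.revision.{head_rev}."
            f"{product}-l10n.{build_name}-{build_type}.{locale}")

def new_l10n_route(project, head_rev, product, build_name, build_type):
    # New shape, mirroring regular Nightlies.
    return (f"gecko.v2.{project}.nightly.revision.{head_rev}."
            f"{product}-l10n.{build_name}-{build_type}")

rev = "a21f4e2ce5186e2dc9ee411b07e9348866b4ef30"
print(old_l10n_route("mozilla-central", rev, "firefox", "linux64", "opt", "de"))
print(new_l10n_route("mozilla-central", rev, "firefox", "linux64", "opt"))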

Let me know if there are any questions or concerns.

~Justin Wood (Callek)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Nightly Start Time and Daylight Savings

2017-11-06 Thread Justin Wood
Hey Everyone,

I was alerted to some confusion about when Nightlies start, following the
US daylight saving time change this past weekend.

Previously [on buildbot] Nightlies would start at 3am Pacific Time.

Now, with Taskcluster, the start time is anchored in UTC, so it doesn't
move with daylight saving time; it is currently anchored at 10am and 10pm
UTC.

This has some implications for when you can expect to see new nightlies
available in your own local timezone.
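
For example, here is a quick sketch (Python 3.9+, using zoneinfo; the dates
are just illustrative ones around the November 2017 US change) showing what
the 10:00 UTC trigger looks like in Pacific time on either side of the
switch:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")
for day in (4, 6):  # Nov 4, 2017 (still PDT) and Nov 6, 2017 (PST)
    trigger = datetime(2017, 11, day, 10, 0, tzinfo=timezone.utc)
    print(trigger.astimezone(pacific).strftime("%Y-%m-%d %H:%M %Z"))

# Prints 03:00 PDT before the change and 02:00 PST after it, whereas the
# old buildbot nightlies were pinned to 3am Pacific year-round.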

Thank You,
~Justin Wood (Callek)
Release Engineering
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to TaskCluster Windows builds on Wednesday July 26th

2017-07-26 Thread Justin Wood
As of now we are declaring this work a success!

We have unfrozen Nightly updates for 10% of our userbase right now. We plan
to update 100% of our users tomorrow, after the next set of Nightlies is
produced.

There are a few minor issues we are following up on at this time, but
expect to have them resolved ASAP.

This work was accomplished without closing the trees, so a big thank you to
everyone who made this milestone possible.

~Justin Wood (Callek)

On Wed, Jul 26, 2017 at 10:54 AM, Justin Wood <jw...@mozilla.com> wrote:

> This work has now begun.
>
> On Wed, Jul 26, 2017 at 9:13 AM, Justin Wood <jw...@mozilla.com> wrote:
>
>> Hello Everyone,
>>
>> Just a reminder that this work will be taking place in just under
>> approximately 2 hours. We are on track to complete it as outlined.
>>
>> Trees may be closed during parts of today while stuff lands, and
>> Nightlies will likely stay frozen to the latest buildbot-produced set until
>> this evening or sometime tomorrow.
>>
>> Thank You again,
>> ~Justin Wood (Callek)
>>
>> On Fri, Jul 21, 2017 at 2:59 PM, Justin Wood <jw...@mozilla.com> wrote:
>>
>>> Hello,
>>>
>>> tl;dr
>>>
>>> What: Windows opt & nightly builds switching to TaskCluster
>>>
>>> When: Wednesday, July 26th at 11:00ET
>>>
>>> Developer impact: Much rejoicing, Windows builds ~15 minutes faster,
>>> Some Windows 10 testing switched to Tier1.
>>>
>>> Next Wednesday, July 26th, at 11:00ET we will be switching remaining
>>> production windows builds from Buildbot to TaskCluster. Buildbot builds for
>>> Windows will be disabled as this hits the trees.
>>>
>>> As part of this work, many TaskCluster Windows tests will also be
>>> enabled as Tier1, including Windows 10 tests. Test suites requiring Windows
>>> hardware, and tests that are not yet ready to migrate from Win8 to Win10
>>> will remain on Buildbot. For Win8 tests that already migrated to Win10, we
>>> will disable the corresponding Win8 variants.
>>>
>>> If you have questions, please contact us in #releng or via email.
>>>
>>> Relevant bugs:
>>>
>>> Migrate Win64 nightly builds to TaskCluster
>>>
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1267427
>>>
>>> Migrate Win32 nightly builds to TaskCluster
>>>
>>> https://bugzilla.mozilla.org/show_bug.cgi?id=1267428
>>>
>>> FAQ:
>>>
>>> - Will builds running in TaskCluster be available more quickly?
>>>
>>> Builds in TaskCluster are approximately 15 minutes faster than in
>>> buildbot.
>>>
>>> - Do the same tests pass/fail on a TC build as on a BB build?
>>>
>>> Yes, test results should be the same. Performance results should also be
>>> the same
>>>
>>> - Will there be any impact to release schedules?
>>>
>>> No, We have performed additional testing to ensure a smooth Firefox 56
>>> Beta cycle, to be sure that we are ready with our release automation for
>>> the changes this change brings with it, and we do not expect any delays to
>>> the release pipeline with regard to this landing.
>>>
>>> Additionally we have scheduled this change to land on mozilla-central
>>> now to minimize any potential impact it may have on efforts with Firefox 57
>>> and project Quantum.
>>>
>>> -How will Try be affected?
>>>
>>> Traditionally the Try Server has followed the configuration of
>>> mozilla-central, however since the ability to test older branches on Try is
>>> important we have devised the following short term plan.
>>>
>>> We will leave BB builds enabled on Try.
>>>
>>> When you push to Try from a Gecko 56+ tree after the changes land, you
>>> will get TaskCluster and Buildbot builds, all testing will be triggered by
>>> TaskCluster, and you can safely ignore the BB jobs.
>>>
>>> When you push to Try from Gecko 56 before this change or any older gecko
>>> tree, you will get buildbot builds for your push, and all testing will be
>>> triggered by buildbot.
>>>
>>> The test scheduling mentioned here is made possible by a configuration
>>> item in mozharness we are toggling, so at the cost of some extra overhead
>>> in Windows build load on Try we can support older branches.
>>>
>>> -How will Try be affected medium/long term?
>>>
>>> Medium term we hope to explore some options to make Buildbot builds be
>>> off by default on Try, maybe requiring special Try syntax to enable them.
>>> This is however not well defined yet, so we will followup with an e-mail to
>>> these lists whenever we expect that to change.
>>>
>>> Long term, we will just turn off Try support of Windows Buildbot Builds,
>>> and use strictly TaskCluster.
>>>
>>> --
>>> Thank You,
>>> ~Justin Wood (Callek)
>>> Mozilla Release Engineer
>>>
>>>
>>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to TaskCluster Windows builds on Wednesday July 26th

2017-07-26 Thread Justin Wood
This work has now begun.

On Wed, Jul 26, 2017 at 9:13 AM, Justin Wood <jw...@mozilla.com> wrote:

> Hello Everyone,
>
> Just a reminder that this work will be taking place in just under
> approximately 2 hours. We are on track to complete it as outlined.
>
> Trees may be closed during parts of today while stuff lands, and Nightlies
> will likely stay frozen to the latest buildbot-produced set until this
> evening or sometime tomorrow.
>
> Thank You again,
> ~Justin Wood (Callek)
>
> On Fri, Jul 21, 2017 at 2:59 PM, Justin Wood <jw...@mozilla.com> wrote:
>
>> Hello,
>>
>> tl;dr
>>
>> What: Windows opt & nightly builds switching to TaskCluster
>>
>> When: Wednesday, July 26th at 11:00ET
>>
>> Developer impact: Much rejoicing, Windows builds ~15 minutes faster,
>> Some Windows 10 testing switched to Tier1.
>>
>> Next Wednesday, July 26th, at 11:00ET we will be switching remaining
>> production windows builds from Buildbot to TaskCluster. Buildbot builds for
>> Windows will be disabled as this hits the trees.
>>
>> As part of this work, many TaskCluster Windows tests will also be enabled
>> as Tier1, including Windows 10 tests. Test suites requiring Windows
>> hardware, and tests that are not yet ready to migrate from Win8 to Win10
>> will remain on Buildbot. For Win8 tests that already migrated to Win10, we
>> will disable the corresponding Win8 variants.
>>
>> If you have questions, please contact us in #releng or via email.
>>
>> Relevant bugs:
>>
>> Migrate Win64 nightly builds to TaskCluster
>>
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1267427
>>
>> Migrate Win32 nightly builds to TaskCluster
>>
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1267428
>>
>> FAQ:
>>
>> - Will builds running in TaskCluster be available more quickly?
>>
>> Builds in TaskCluster are approximately 15 minutes faster than in
>> buildbot.
>>
>> - Do the same tests pass/fail on a TC build as on a BB build?
>>
>> Yes, test results should be the same. Performance results should also be
>> the same
>>
>> - Will there be any impact to release schedules?
>>
>> No, We have performed additional testing to ensure a smooth Firefox 56
>> Beta cycle, to be sure that we are ready with our release automation for
>> the changes this change brings with it, and we do not expect any delays to
>> the release pipeline with regard to this landing.
>>
>> Additionally we have scheduled this change to land on mozilla-central now
>> to minimize any potential impact it may have on efforts with Firefox 57 and
>> project Quantum.
>>
>> -How will Try be affected?
>>
>> Traditionally the Try Server has followed the configuration of
>> mozilla-central, however since the ability to test older branches on Try is
>> important we have devised the following short term plan.
>>
>> We will leave BB builds enabled on Try.
>>
>> When you push to Try from a Gecko 56+ tree after the changes land, you
>> will get TaskCluster and Buildbot builds, all testing will be triggered by
>> TaskCluster, and you can safely ignore the BB jobs.
>>
>> When you push to Try from Gecko 56 before this change or any older gecko
>> tree, you will get buildbot builds for your push, and all testing will be
>> triggered by buildbot.
>>
>> The test scheduling mentioned here is made possible by a configuration
>> item in mozharness we are toggling, so at the cost of some extra overhead
>> in Windows build load on Try we can support older branches.
>>
>> -How will Try be affected medium/long term?
>>
>> Medium term we hope to explore some options to make Buildbot builds be
>> off by default on Try, maybe requiring special Try syntax to enable them.
>> This is however not well defined yet, so we will followup with an e-mail to
>> these lists whenever we expect that to change.
>>
>> Long term, we will just turn off Try support of Windows Buildbot Builds,
>> and use strictly TaskCluster.
>>
>> --
>> Thank You,
>> ~Justin Wood (Callek)
>> Mozilla Release Engineer
>>
>>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Switching to TaskCluster Windows builds on Wednesday July 26th

2017-07-26 Thread Justin Wood
Hello Everyone,

Just a reminder that this work will be taking place in just under 2 hours.
We are on track to complete it as outlined.

Trees may be closed during parts of today while stuff lands, and Nightlies
will likely stay frozen to the latest buildbot-produced set until this
evening or sometime tomorrow.

Thank You again,
~Justin Wood (Callek)

On Fri, Jul 21, 2017 at 2:59 PM, Justin Wood <jw...@mozilla.com> wrote:

> Hello,
>
> tl;dr
>
> What: Windows opt & nightly builds switching to TaskCluster
>
> When: Wednesday, July 26th at 11:00ET
>
> Developer impact: Much rejoicing, Windows builds ~15 minutes faster, Some
> Windows 10 testing switched to Tier1.
>
> Next Wednesday, July 26th, at 11:00ET we will be switching remaining
> production windows builds from Buildbot to TaskCluster. Buildbot builds for
> Windows will be disabled as this hits the trees.
>
> As part of this work, many TaskCluster Windows tests will also be enabled
> as Tier1, including Windows 10 tests. Test suites requiring Windows
> hardware, and tests that are not yet ready to migrate from Win8 to Win10
> will remain on Buildbot. For Win8 tests that already migrated to Win10, we
> will disable the corresponding Win8 variants.
>
> If you have questions, please contact us in #releng or via email.
>
> Relevant bugs:
>
> Migrate Win64 nightly builds to TaskCluster
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1267427
>
> Migrate Win32 nightly builds to TaskCluster
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1267428
>
> FAQ:
>
> - Will builds running in TaskCluster be available more quickly?
>
> Builds in TaskCluster are approximately 15 minutes faster than in buildbot.
>
> - Do the same tests pass/fail on a TC build as on a BB build?
>
> Yes, test results should be the same. Performance results should also be
> the same
>
> - Will there be any impact to release schedules?
>
> No, We have performed additional testing to ensure a smooth Firefox 56
> Beta cycle, to be sure that we are ready with our release automation for
> the changes this change brings with it, and we do not expect any delays to
> the release pipeline with regard to this landing.
>
> Additionally we have scheduled this change to land on mozilla-central now
> to minimize any potential impact it may have on efforts with Firefox 57 and
> project Quantum.
>
> -How will Try be affected?
>
> Traditionally the Try Server has followed the configuration of
> mozilla-central, however since the ability to test older branches on Try is
> important we have devised the following short term plan.
>
> We will leave BB builds enabled on Try.
>
> When you push to Try from a Gecko 56+ tree after the changes land, you
> will get TaskCluster and Buildbot builds, all testing will be triggered by
> TaskCluster, and you can safely ignore the BB jobs.
>
> When you push to Try from Gecko 56 before this change or any older gecko
> tree, you will get buildbot builds for your push, and all testing will be
> triggered by buildbot.
>
> The test scheduling mentioned here is made possible by a configuration
> item in mozharness we are toggling, so at the cost of some extra overhead
> in Windows build load on Try we can support older branches.
>
> -How will Try be affected medium/long term?
>
> Medium term we hope to explore some options to make Buildbot builds be off
> by default on Try, maybe requiring special Try syntax to enable them. This
> is however not well defined yet, so we will followup with an e-mail to
> these lists whenever we expect that to change.
>
> Long term, we will just turn off Try support of Windows Buildbot Builds,
> and use strictly TaskCluster.
>
> --
> Thank You,
> ~Justin Wood (Callek)
> Mozilla Release Engineer
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Switching to TaskCluster Windows builds on Wednesday July 26th

2017-07-21 Thread Justin Wood
Hello,

tl;dr

What: Windows opt & nightly builds switching to TaskCluster

When: Wednesday, July 26th at 11:00ET

Developer impact: Much rejoicing, Windows builds ~15 minutes faster, some
Windows 10 testing switched to Tier1.

Next Wednesday, July 26th, at 11:00ET we will be switching remaining
production Windows builds from Buildbot to TaskCluster. Buildbot builds for
Windows will be disabled as this hits the trees.

As part of this work, many TaskCluster Windows tests will also be enabled
as Tier1, including Windows 10 tests. Test suites requiring Windows
hardware, and tests that are not yet ready to migrate from Win8 to Win10
will remain on Buildbot. For Win8 tests that already migrated to Win10, we
will disable the corresponding Win8 variants.

If you have questions, please contact us in #releng or via email.

Relevant bugs:

Migrate Win64 nightly builds to TaskCluster

https://bugzilla.mozilla.org/show_bug.cgi?id=1267427

Migrate Win32 nightly builds to TaskCluster

https://bugzilla.mozilla.org/show_bug.cgi?id=1267428

FAQ:

- Will builds running in TaskCluster be available more quickly?

Builds in TaskCluster are approximately 15 minutes faster than in buildbot.

- Do the same tests pass/fail on a TC build as on a BB build?

Yes, test results should be the same. Performance results should also be
the same.

- Will there be any impact to release schedules?

No. We have performed additional testing to ensure a smooth Firefox 56 Beta
cycle and to be sure that our release automation is ready for the changes
this brings, and we do not expect any delays to the release pipeline as a
result of this landing.

Additionally we have scheduled this change to land on mozilla-central now
to minimize any potential impact it may have on efforts with Firefox 57 and
project Quantum.

-How will Try be affected?

Traditionally the Try server has followed the configuration of
mozilla-central; however, since the ability to test older branches on Try
is important, we have devised the following short-term plan.

We will leave BB builds enabled on Try.

When you push to Try from a Gecko 56+ tree after the changes land, you will
get both TaskCluster and Buildbot builds; all testing will be triggered by
TaskCluster, and you can safely ignore the BB jobs.

When you push to Try from Gecko 56 before this change, or from any older
Gecko tree, you will get Buildbot builds for your push, and all testing
will be triggered by Buildbot.

The test scheduling mentioned here is made possible by a configuration item
in mozharness that we are toggling, so at the cost of some extra Windows
build load on Try we can support older branches.

-How will Try be affected medium/long term?

Medium term, we hope to explore options to make Buildbot builds off by
default on Try, perhaps requiring special Try syntax to enable them. This
is not well defined yet, however, so we will follow up with an email to
these lists when we expect that to change.

Long term, we will simply turn off Try support for Windows Buildbot builds
and use TaskCluster exclusively.

-- 
Thank You,
~Justin Wood (Callek)
Mozilla Release Engineer
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: [Sheriffs] Proposal for informing Developers and Sheriffs about new Testsuites

2017-06-14 Thread Justin Wood
Or, if you must enable it in-tree for try ahead of it being ready, enable
it as *tier 3*. That way, those of you who *do* care can look at the
results, while the rest of the dev audience can remain happily unaware of
it.

(Some reasons to enable in-tree ahead of being ready could be to help keep
context for work that would bitrot easily, and to help share work among
multiple developers without needing a patchset that continuously gets
rebased.)

On Wed, Jun 14, 2017 at 2:14 PM, Bill McCloskey 
wrote:

> Related to this, I would like to ask that people not enable test suites
> on try if they're orange. I've wasted a lot of time myself trying to
> understand failures that were not my fault, and I know other people have
> too.
>
> Since new tests suites are presumably being enabled through TaskCluster
> and don't require changes to the buildbot-config repo, is there any reason
> to enable a test suite if it's not already green? It seems like you can
> just do custom try pushes until it's green.
>
> -Bill
>
>
> On Wed, Jun 14, 2017 at 8:01 AM, William Lachance 
> wrote:
>
>> Isn't this the sort of thing m.d.tree-management is for? I would say that
>> + sheriffs@ should be enough.
>>
>> Will
>>
>> On 2017-06-14 10:37 AM, Carsten Book wrote:
>>
>>> i guess sheriffs list (sheriffs @ m.o ) and newsgroup dev.platform, fx
>>> -dev
>>> but i'm also open for suggetions from others :)
>>>
>>> - tomcat
>>>
>>> On Wed, Jun 14, 2017 at 4:12 PM, Kartikaya Gupta 
>>> wrote:
>>>
>>> This sounds good to me. What mailing lists should the intent email be
>>> sent to? It might help to have a template somewhere as well.

 ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>>
>
>
> ___
> Sheriffs mailing list
> sheri...@mozilla.org
> https://mail.mozilla.org/listinfo/sheriffs
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Heads up: archive.m.o is no longer updating tinderbox builds, AWSY is effectively dead

2017-03-07 Thread Justin Wood
However, we don't quite have that ability...

The taskcluster nightly stuff, which is posting to archive.m.o, is doing so
with a small set of dedicated machines, which are not overly powerful in
disk space, networking throughput, or CPU.

It's also doing so by downloading the artifacts from the taskcluster jobs
themselves and then uploading them to archive.m.o (so it is literally
copying, which has overhead in terms of ensuring no corruption in the
process).

The old jobs on archive.m.o/../tinderbox-builds/ were signed; new CI
builds are not.

Signing is also done via a small, dedicated set of machines, so expanding
this to every CI job in the taskcluster setup isn't great (for now).

The actual definitions/descriptions of the jobs that do the upload to
archive.m.o are written in the task graph and depend on signing. Wiring
this logic up to the CI builds would take at least a full week of 100%
dedicated effort, probably more, and that is not accounting for the need to
spin up more machines to handle the higher load. This code change would
also be relatively risky in terms of taskgraph generation, and it would
need to land on all the trees this affects.

Storing the artifacts in both taskcluster's index and archive.m.o
effectively doubles storage costs. We implemented this storage into
taskcluster from buildbot over a year ago precisely to give people an
overlap of time to switch -- the things using archive.m.o/tinderbox-builds
that we knew about already switched (e.g. artifact builds).

I personally feel the investment to create a stopgap here is not worth it,
but feel free to convince Chris Cooper or Chris AtLee otherwise.

You're also free to copy any of my points into the bug.

~Justin Wood (Callek)



On Tue, Mar 7, 2017 at 1:48 PM, Eric Rahm <er...@mozilla.com> wrote:
> Can you add these details to the bug? We should probably take the
> conversation on the best way to fix bustage there.
>
> Given the fallout (read: memory regression tracking is gone) and, as you
> noted, we have the ability to continue posting taskcluster builds to
> archive.m.o, we should at least continue to post builds to archive.m.o for
> at least a grace period in order to give AWSY, mozdownload, and others time to
> implement a switch to a purely taskcluster-only solution.
>
> -e
>
> On Tue, Mar 7, 2017 at 7:43 AM, <jw...@mozilla.com> wrote:
>>
>> On Monday, March 6, 2017 at 6:08:09 PM UTC-5, Boris Zbarsky wrote:
>> > On 3/6/17 5:29 PM, Mike Hommey wrote:
>> > > You can get the builds through the taskcluster index.
>> >
>> > Does that have the same lifetime guarantees as archive.mozilla.org?
>> >
>> > -Boris
>>
>> So, to be clear..
>>
>> This thread is talking about
>> http://archive.mozilla.org/pub/firefox/tinderbox-builds/**
>>
>> Oldest archive files in there (for autoland) is May 26'th 2016.
>>
>> On taskcluster's index, there are many ways to get the data (not just by
>> buildid) there is date, there is .latest. and there is by revision, but to
>> use a date based solution to help answer this, oldest on taskcluster is also
>> May 26...
>>
>>
>> https://tools.taskcluster.net/index/artifacts/#gecko.v2.autoland.pushdate.2016.05.26/gecko.v2.autoland.pushdate.2016.05.26
>>
>> Releng has traditionally considered the archive.m.o location for
>> on-checkin builds to be an implementation detail and not suitable outside of
>> our automation.  We've also published things to both archive.m.o and
>> taskcluster index for a long time now (we even publish osx and windows jobs
>> there, so its not like you have to keep both solutions implemented).
>>
>> Had we realized anyone was using the `tinderbox-builds/` location for
>> things before this, I feel we would have notified those projects and the
>> community in general more widely when we were to shut them off.
>>
>> I apologize this broke anyone, but there is a clear path forward. We
>> explicitly preserved the archive.m.o uploads for taskcluster nightlies, into
>> http://archive.mozilla.org/pub/firefox/nightly/
>>
>> ~Justin Wood (Callek)
>> ___
>> dev-platform mailing list
>> dev-platform@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-platform
>
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Spring cleaning: Reducing Number Footprint of HG Repos

2014-03-27 Thread Justin Wood (Callek)

On 3/27/2014 1:11 AM, Mike Hommey wrote:

On Wed, Mar 26, 2014 at 05:40:36PM -0700, Gregory Szorc wrote:

On 3/26/14, 4:53 PM, Taras Glek wrote:

*User Repos*
TLDR: I would like to make user repos read-only by April 30th. We should
archive them by May 31st.

Time  spent operating user repositories could be spent reducing our
end-to-end continuous  integration cycles. These do not seem like
mission-critical repos, seems like developers would be better off
hosting these on bitbucket or github. Using a 3rd-party host has obvious
benefits for collaboration  self-service that our existing system will
never meet.


How much time do we spend operating user repositories? I follow the repos
bugzilla components and most of the requests I see have little if anything
to do with user repositories. And I reckon that's because user repositories
are self-service.


Note that while user repositories are self-service on the creation side,
there is no obvious way to self-service a user repo removal. I'm not in
Taras's list, but after looking, I figured I had an old m-c copy with
old patches on top of it.


Prior to the hg migration to local disk there was (well, technically
still is):

ssh hg.mozilla.org edit repo

which allowed you to delete it. We even had/have this info on MDN. The
bug exists today that the deletion does not propagate out to the
local-storage webheads.


~Justin Wood (Callek)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Spring cleaning: Reducing Number Footprint of HG Repos

2014-03-27 Thread Justin Wood (Callek)

On 3/26/2014 9:15 PM, Taras Glek wrote:

Bobby Holley wrote on Wednesday, March 26, 2014 17:27:
I don't understand what the overhead is. We don't run CI on user
repos. It's effectively just ssh:// + disk space, right? That seems
totally negligible.

Human overhead in keeping infra running could be spent making our infra
better elsewhere.


Also, project branches are pretty useful for teams working together on
large projects that aren't ready to land in m-c. We only use them when
we need them, so why would we shut them down?

I'm not suggesting killing it. My suggestion is that project branch
experience would likely be better when not hosted by mozilla. It would
still trigger our c-i systems.


Except when you consider that the disposable project branches require
Level 2 commit privileges, and that to commit to our repos you need to have
signed the committer agreement, which grants some legal recourse if malice
is done.


These project branches run on non-try machines, which have elevated rights
compared to what try has, and much more harm can be done if there is malice
here.


I for one would not be happy from a sec standpoint if we allowed 
bitbucket-hosted repos to execute arbitrary code this way.


~Justin Wood (Callek)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Spring cleaning: Reducing Number Footprint of HG Repos

2014-03-27 Thread Justin Wood (Callek)

On 3/27/2014 2:58 AM, Doug Turner wrote:

Want to move to github?

(0) sudo apt-get install python-setuptools
(1) sudo easy_install hg-git
(2) add |hggit =| under [extensions] in your .hgrc file
(3) Go to GitHub.com and create your new repo.
(4) cd hg_repo
(5) hg bookmark -r default master
(6) hg push git+ssh://g...@github.com/you/name of your repo you created
in step 3



hg-git can't run without a very custom and difficult-to-set-up hg on
Windows.


Specifically, because hg uses py2exe, which strips out EVERY unused Python
library. And even running hg in a virtualenv is hard, because you get a
MUCH slower hg due to the lack of compiled code.


I never tested hg-git on Windows any further after I encountered the two
issues above.


~Justin Wood (Callek)


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Poll: What do you need in MXR/DXR?

2013-10-02 Thread Justin Wood (Callek)
Erik Rose wrote:
 What features do you most use in MXR and DXR?
 
 Over in the recently renamed Web Engineering group, we're working hard to 
 retire MXR. It hasn't been maintained for a long time, and there's a lot of 
 duplication between it and DXR, which rests upon a more modern foundation and 
 has been developing like crazy. However, there are some holes we need to fill 
 before we can expect you to make a Big Switch. An obvious one is indexing 
 more trees: comm-central, aurora, etc. And we certainly have some bothersome 
 UI bugs to squash. But I'd like to hear from you, the actual users, so it's 
 not just me and Taras guessing at priorities.
 
 What keeps you off DXR? (What are the MXR things you use constantly? Or the 
 things which are seldom-used but vital?)
 
 If you're already using DXR as part of your workflow, what could it do to 
 make your work more fun?
 
 Feel free to reply here, or attach a comment to this blog post, which talks 
 about some of the things we've done recently and are considering for the 
 future:
 
 https://blog.mozilla.org/webdev/2013/09/30/dxr-gets-faster-hardware-vcs-integration-and-snazzier-indexing/
 
 We'll use your input to build our priorities for Q4, so wish away!
 
 Cheers,
 Erik
 

A few things for me that I don't see an easy way to do on dxr.m.o right
now, at a glance.

Search file names: I might remember a file is called sut_lib, for example,
but be unsure of the extension or where it is, while knowing I need to edit
it!

Search for text strings within a specific filename wildcard; e.g., I might
want to search for some method in any .idl, but not care about the
underlying implementation in C++.

Further insight I can't easily provide unless http://mxr.mozilla.org/build
and http://mxr.mozilla.org/comm-central/ are replicated at DXR; then I can
provide insight on what the requirements are for both setups (since build/
is many repos, while comm-central is 2 large repos and a few small ones).

- For me personally, comm-central is less important for my testing, since
mozilla-central meets most of the needs in usability.

~Justin Wood (Callek)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Tegra build backlog is too big!

2013-09-12 Thread Justin Wood
 hi kats (cross-posting to dev-b2g);
 
 tl:dr; we think all is ok again, details below. To avoid this happening
 again this week, we're changing tryserver to reduce the number of
 Android-tests-run-on-tegra-by-default. If you specifically want tegra
 testing on tryserver, you will need to state that when pushing to try.
 

Hey everyone, this change was just backed out.

After last night's and today's recovery of tegra devices, we are in a much
better state than we were when kats was prompted to start this thread. We
also had developer confusion around this change (and some relatively minor
unforeseen problems with the patch, detailed in the bug) that caused
sheriffs to ask for this to be backed out.

We expect wait times to return to roughly what they were as of last week for 
now.

~Justin Wood (Callek)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: The state of the Aurora branch

2013-01-20 Thread Justin Wood (Callek)
Ed Morley wrote:
 On 19 January 2013 15:01:09, Ehsan Akhgari wrote:
 dbaron posted a summary of our options on release-drivers
 
 Please can that be posted somewhere public for those of us not on 
 release-drivers?

Not seeing anything that need be kept private, I'll forward a post or
two here:



- Original Message -
 From: L. David Baron stripped
 To: release-drivers
 Sent: Saturday, January 19, 2013 4:34:34 AM
 Subject: extended Aurora tree closure and options for reopening
(disabling  testpilot extension?)

 The mozilla-aurora tree is currently closed (and has been since
 Wednesday) due to a set of permanent test failures.  Failure to
 reopen the tree and allow fixes to land puts Firefox 20 at risk
 (with the risk increasing with the length of the closure).

 I wrote a detailed description of the situation and laid out the
 known options for moving forward in:
 https://bugzilla.mozilla.org/show_bug.cgi?id=823989#c52
 (with a few clarifications in the two comments after).

 The prior discussion of this issue that I'm aware of is mostly in
 that bug and in

https://groups.google.com/forum/?fromgroups=#!topic/mozilla.dev.platform/fffQo85eM8Y

 Of the three options I present, the one that I think has the
 strongest support and least opposition among the developers
 investigating the problems is option 2:

   # (2) Disable the testpilot extension on aurora using the patch in
   # comment 48, and reopen mozilla-aurora.  comment 43 says that
   # we're not currently running any studies using testpilot (and
   # also that ehsan supports this solution).

 I think release-drivers should be aware that this is currently the
 leading option; I'm not sure who should make the final call here,
 but probably somebody a bit more informed about testpilot than I am.


 (It's currently Saturday morning in London, and I plan to spend the
 weekend as a tourist, so I expect to be only intermittenly online
 today and tomorrow.)

 -David
 ___
 release-drivers mailing list



- Original Message -
 From: Alex Keybl stripped
 Sent: Saturday, January 19, 2013 7:36:36 PM
 Subject: Re: extended Aurora tree closure and options for reopening
(disabling  testpilot extension?)

 Let's move forward with option 2 to re-open the tree, and continue
 investigating how to find final resolution allowing test pilot to be
 re-enabled on Aurora. Cheng and Jinghua - please let us know when
 you were hoping to push out the next survey, so we can put a date on
 re-enabling.

 -Alex

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Why are there Android 4.0 opt test runs on mozilla-central and nowhere else?

2012-12-23 Thread Justin Wood (Callek)
L. David Baron wrote:
 On Saturday 2012-12-22 09:51 -0800, Daniel Holbert wrote:
 Mozilla-central TBPL has a row for Android 4.0 opt, which isn't
 available on mozilla-inbound or on Try.

 I was under the impression that all the tests that get run on
 mozilla-central are supposed to also be run on Try and Inbound.  Why is
 this not the case for these Android 4.0 opt tests?


This was a conscious choice with ATeam+Releng to get a number of tests
running and visible with a small number of ready devices.

It is common practice to have things on try/inbound/m-c as well.

In a case where a large inbound merge breaks these tests and only these
tests, the intent was that we would simply hide the suite(s) that break and
try to figure out the problem out-of-band without forcing the tree closed.
I recognize we may not have communicated that plan/desire well enough. [I
*believe* edmorley, as our paid sheriff, knew.]

We will work to get these enabled on inbound at the least shortly after
the holiday to help avoid this problem.

 I'd note that if the underlying problem is that we want to test
 something where we're *really* constrained in number of test
 devices, it might be ok to not run on Try by default and not run on
 inbound.  But it should be *possible* to run on Try via trychooser,
 so that when something like this happens, it can at the very least
 be bisected after the fact.  If we can't at least do that, then the
 tests shouldn't be shown on mozilla-central.

It is currently not possible to have an --all run *not* run a certain test
type by default (AIUI); furthermore, since these tests come off of regular
Android builds, a simple -p android would also trigger them.

I *believe* sfink is working on improvements in this area for us, but being
able to default-limit and generally improve try capacity/usage in ways like
this is also on our radar/wish list.

-- 
~Justin Wood (Callek)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Integrating ICU into Mozilla build

2012-12-05 Thread Justin Wood (Callek)
Mike Hommey wrote:
 On Tue, Dec 04, 2012 at 07:51:21AM -0500, Justin Wood (Callek) wrote:
 Rafael Ávila de Espíndola wrote:

 Actually, ICU has several options for how its data is packaged. One option 
 is libraries (which are not sharable between architectures, AFAIK), but 
 another possibility is to use data package (.dat) files, which I believe 
 *could* be shared between the 32- and 64-bit builds.

 getting a bit off topic, but since we don't support 10.5 anymore, can't we 
 build just a 32 bit plugin container instead of the full browser as a 
 universal binary? Would the plugin container need to link with ICU too?


 Not yet, there are supported hardware models using 10.6 that *do not*
 have 64-bit available. Granted, they are on the older end of stuff, but it
 does exist.
 
 Note that apparently, this is even worse than that. 10.6 didn't enable
 64 bits by default on 64 bits capable hardware. (I just figured while
 looking at something unrelated on my wife's mac running 10.6.8)

Yes, I meant that as well, since some of these older machines are set like
that by default, with no UI way to change it.

I noticed this as well on the x64 minis that SeaMonkey has running 10.6,
but my research showed that x64 was unstable on that version of mini we
have, so I didn't turn it on :-)

-- 
~Justin Wood (Callek)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Minimum Required Python Version

2012-12-01 Thread Justin Wood (Callek)
Gregory Szorc wrote:
 
 If there are any objections, please voice them now.

Can we re-post this as an entirely new thread? Out of sheer luck I noticed
it, though it's buried in the middle of my threaded view way back in
September, in a thread I have long since chosen to ignore since I knew the
final outcome of that particular post (which is how I, and I know many
others, read the newsgroups here).

-- 
~Justin Wood (Callek)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: New backout policy for Ts regressions on mozilla-inbound

2012-10-18 Thread Justin Wood (Callek)
Ehsan Akhgari wrote:
 Hi everyone,
 
 As part of our efforts to get more value out of the Talos test suite for
 preventing performance regressions, we believe that we are now ready to put
 a first set of measures against startup time regressions.  We will start by
 imposing a new backout policy for mozilla-inbound checkins for regressions
 more than 4% on any given platform.  If your patch falls in a range which
 causes more than 4% Ts regression, it will be backed out by our sheriffs
 together with the rest of the patches in that range, and you can only
 reland after you fix the regression by testing locally or on the try server.
 
 The 4% threshold has been chosen based on anecdotal evidence on the most
 recent Ts regressions that we have seen, and is too generous, but we will
 be working to improve the reporting and regression detection systems
 better, and as those get improved, we would feel more comfortable with
 imposing this policy on other Talos tests with tighter thresholds.
 
 Please let me know if you have any questions.
 

Do we still have the bug where a test that finishes first, but is from a
later cset (say a later cset IMPROVES Ts by 4% or more), would make us
think we regressed it on an earlier cset if that earlier Talos run finishes
later?

That is, where we set graph points by the time the test finished, not the
time of the push, etc.

~Justin Wood (Callek)

___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Moving Away from Makefile's

2012-08-22 Thread Justin Wood (Callek)

Jeff Hammel wrote:

While a bit of an unfair example, our buildbot-configs fall into this
category.


IMO not unfair at all.

(p.s. to stay on topic, +1 to all else you said)

~Justin Wood (Callek)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: XUL Runner, and the future.

2012-08-15 Thread Justin Wood (Callek)

andreas.pals...@gmail.com wrote:

Hi.

I am curious if XUL Runner has an End-Of-Life policy?
Or is it intimately connected with Firefox, i.e. as long as there is Firefox 
releases based on XUL there will be XUL Runner available too?

The reason I ask if because I am trying to standardize it within my 
organization for the next 5-10 years.

Thank you.



DISCLAIMER: I could be incorrect, as I am answering as a community member
based on my understanding. (Official people involved can correct me.)


XULRunner is currently an unsupported piece of software; we don't run
tests for it, and we *barely* ensure it still builds.


The largest reason we still build it is for the benefit of an SDK to 
build binary components for Firefox against.


The ideal way to move forward in a supported fashion is to write a WebRT
application that takes advantage of HTML/HTML5/web technologies for your
system.


HTH,
--
~Justin Wood (Callek)
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Increase in mozilla-inbound bustage due to people not using Try

2012-08-09 Thread Justin Wood (Callek)

Justin Lebar wrote:

In addition, please bear in mind that landing bustage on trunk trees actually
makes the Try wait times worse (since the trunk backouts/retriggers take test
job priority over Try) - leading to others not bothering to use Try either, and 
so
the situation cascades.


I thought tryserver used a different pool of machines isolated from
all the other trees, because we treated the tryserver machines as
pwned.  Is that not or no longer the case?



Yes and no: the build machines are completely different; the test
machines -- not so much.


The testers, however, are shared. Testers have a completely different
password set, as well as other mitigations. The idea here is that our test
machines have no permission to upload anyway, nor any way to leak or obtain
secrets, and all machines are in a restricted network environment overall
anyway.


So load on inbound affects *test* load on try, yes.

--
~Justin Wood (Callek)


___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform