The pyenv[1] project is a great way to manage multiple versions of python
on your system. I've found it easier than trying to compile directly from
source.
Cheers,
Chris
[1] https://github.com/pyenv/pyenv
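For anyone who hasn't tried it, a typical session looks something like this (just a sketch, assuming pyenv is already installed per the README linked above; the version number is only an example):

```shell
# If pyenv is on PATH, show what it manages; otherwise point at the docs.
if command -v pyenv >/dev/null 2>&1; then
    pyenv versions          # interpreters built so far
    # pyenv install 3.8.3   # compile a new interpreter under ~/.pyenv
    # pyenv global 3.8.3    # make it the default "python"
else
    echo "pyenv not installed; see the link above for setup instructions"
fi
```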
On Wed, 10 Jun 2020 at 16:52, Kartikaya Gupta wrote:
> For those of you who like me are s
Thank you Joel for writing up this proposal!
Are you also proposing that we stop the linux64-opt and win64-opt builds as
well, except for leaving them as an available option on try? If we're not
testing them on integration or release branches, there doesn't seem to be
much purpose in doing the bui
This is really great news, I'm really excited to start using it!
Automated landings from code review are such a game changer for
productivity and security.
Congrats to everyone involved.
Cheers,
Chris
On Wed, 6 Jun 2018 at 11:01, Mark Côté wrote:
>
> The Engineering Workflow team is happy to an
On Tue, 29 May 2018 at 14:21, L. David Baron wrote:
>
> On Monday 2018-05-28 15:52 -0400, Chris AtLee wrote:
> > Here's a bit of a strawman proposal...What if we keep the
> > {mozilla-central,mozilla-inbound,autoland}-{linux,linux64,macosx64,win32,win64}{,-pgo}/
>
On Sun, 20 May 2018 at 19:40, Karl Tomlinson wrote:
> On Fri, 18 May 2018 13:13:04 -0400, Chris AtLee wrote:
> > IMO, it's not reasonable to keep CI builds around forever, so the
> > question is then how long to keep them? 1 year doesn't quite cover a
> > full ESR cycl
The discussion about what to do about these particular buildbot builds has
naturally shifted into a discussion about what kind of retention policy is
appropriate for CI builds.
I believe that right now we keep all CI build artifacts for 1 year. Nightly
and release builds are kept forever. There's
To ensure a successful Firefox 57 release, teams responsible for Firefox CI
& release infrastructure have adopted an “approval required” policy for
changes that could impact Firefox development or release. This includes
systems like buildbot, Taskcluster services, puppet, hg, product delivery,
and
Bug 1349227[1] landed a few days ago, which means we are now doing
"nightly" builds twice a day at 1000 and 2200 UTC.
The purpose of doing multiple nightlies is to get fixes out to users in
Europe, Africa and Asia sooner.
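Expressed as a crontab entry, the new cadence would look something like this (purely illustrative; the actual scheduling is driven by the CI system, not cron):

```
# Two "nightly" builds per day, at 1000 and 2200 UTC
0 10,22 * * *  <kick off nightly builds>
```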
We have some concerns about possible impact to the build infrastructure, so
Updates are enabled again for all platforms. Not all locales have finished
yet; they will receive updates once the repacks finish.
On 11 May 2017 at 09:30, Chris AtLee wrote:
> We've disabled updates for a bad crash:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1364059
>
>
We've disabled updates for a bad crash:
https://bugzilla.mozilla.org/show_bug.cgi?id=1364059
We're working on backing out the offending patches and will re-spin nightly
builds shortly.
Cheers,
Chris
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
As indicated on our status page:
https://status.mozilla.org/incidents/cpnkqqb6b5kh
We will be closing trees tomorrow from 0500-1200 PT.
Tracking bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1355897
Thank you for your patience
Regarding timestamps in tarballs, using tar's --mtime option to force
timestamps to MOZ_BUILD_DATE (or a derivative thereof) could work.
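A minimal sketch of what that would look like, using a fixed date as a stand-in for (a derivative of) MOZ_BUILD_DATE; the flags are GNU tar:

```shell
# Normalize every file's mtime (and ownership) inside the tarball so the
# archive bytes don't depend on when or where the build ran.
mkdir -p demo
echo hello > demo/file.txt
tar --mtime='2016-07-19 00:00:00 UTC' --owner=0 --group=0 --numeric-owner \
    -cf demo.tar demo
tar -tvf demo.tar    # every entry now shows the pinned 2016-07-19 date
```

Pinning owner/group as well as mtime matters for byte-for-byte reproducibility, since tar records those in each entry header too.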
On 19 July 2016 at 04:11, Kurt Roeckx wrote:
> On 2016-07-18 20:56, Gregory Szorc wrote:
>
>>
>> Then of course there is build signing, which takes a private k
We've been having a lot of problems with capacity for our Windows test
pools, with Windows 8 being particularly bad.
Today we disabled running Windows 8 64-bit tests by default on Try. If you
do really need Windows 8 tests for your try pushes, you can add try syntax
like this to enable: "try: -b o
I'm very happy to let you know that we've recently started running some of
our Windows 7 tests in AWS. Currently we're running these suites in Amazon
for all branches of gecko 49 and higher:
* Web platform tests + reftests
* gtest
* cppunit
* jittest
* jsreftest
* crashtest
Since these are now wor
On 9 February 2016 at 14:51, Marco Bonardo wrote:
> On Tue, Feb 9, 2016 at 6:54 PM, Ryan VanderMeulen
> wrote:
>
> > I'd have a much easier time accepting that argument if my experience
> > didn't tell me that nearly every single "Test took longer than expected"
> > or "Test timed out" intermi
In this case, latest is just latest from wherever. I agree that l10n
nightlies should be under 'nightly' as well.
On Wed, Dec 2, 2015 at 3:04 PM, Axel Hecht wrote:
> On 12/1/15 3:48 PM, Chris AtLee wrote:
>
>> Localized builds should be at e.g.
>> gecko.v2.mo
On Tue, Dec 1, 2015 at 5:27 PM, Gregory Szorc wrote:
>
> On Tue, Dec 1, 2015 at 2:21 PM, Chris AtLee wrote:
>
>> Right now we've got debug OSX builds in the cloud on Try in parallel with
>> the regular builds. There's a bunch more work to be done there to be ab
On Tue, Dec 1, 2015 at 4:52 PM, Justin Dolske wrote:
> On 12/1/15 12:41 PM, Chris AtLee wrote:
>
>> Last week we made the same change to the rest of the Windows build
>> infrastructure. All our Windows builds are now running in AWS. We're
>> seeing good perf
not either one or the other, we can have both. But
> before filing a bug, I'd like to know if the general population thinks
> it's a good idea.
>
> --
> Julien
>
> Le 30/11/2015 21:43, Chris AtLee a écrit :
> > The RelEng, Cloud Services and Taskcluster teams have
A few weeks ago I posted about switching our Windows builds on Try over to
EC2, resulting in a 30 minute speed improvement.
Last week we made the same change to the rest of the Windows build
infrastructure. All our Windows builds are now running in AWS. We're seeing
good performance gains there to
tical for regression hunting.
>
> On 12/1/2015 9:49 AM, Chris AtLee wrote:
>
>> The expiration is currently set to one year, but we can (and should!)
>> change that for nightlies. That work is being tracked in
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1145300
>>
The expiration is currently set to one year, but we can (and should!)
change that for nightlies. That work is being tracked in
https://bugzilla.mozilla.org/show_bug.cgi?id=1145300
On Mon, Nov 30, 2015 at 7:00 PM, Ryan VanderMeulen wrote:
> On 11/30/2015 3:43 PM, Chris AtLee wrote:
>
their assets by glancing at things.
> Are those to come?
>
> Also, I suspect we should rewrite wget-en-US? Or add an alternative that's
> index-bound?
>
> Axel
>
> On 11/30/15 9:43 PM, Chris AtLee wrote:
>
>> The RelEng, Cloud Services and Taskcluster teams have
The RelEng, Cloud Services and Taskcluster teams have been doing a lot of
work behind the scenes over the past few months to migrate the backend
storage for builds from the old "FTP" host to S3. While we've tried to make
this as seamless as possible, the new system is not a 100% drop-in
replacement
Over the past months we've been working on migrating our Windows builds
from the legacy hardware machines into Amazon.
I'm very happy to announce that we've wrapped up the initial work here, and
all our Windows builds on Try are now happening in Amazon.
The biggest win from this is that our Windo
On Mon, Nov 9, 2015 at 6:39 PM, William Lachance
wrote:
> On 2015-11-06 5:56 PM, Mark Finkle wrote:
>
>> I also think measuring build times, and other build related stats, would
>> be useful. I'd like to see Mozilla capturing those stats for developer
>> builds though. I'm less interested in b
This is really great, thanks for adding support for this!
I'd like to see the size of the complete updates measured as well, in
addition to the installer sizes.
Do we have alerts for these set up yet?
Cheers,
Chris
On Wed, Nov 4, 2015 at 10:55 AM, William Lachance
wrote:
> Hey, so as describe
Partial updates should be functional again now. Sorry for the inconvenience!
On Thu, Oct 29, 2015 at 4:49 PM, Chris AtLee wrote:
> We've temporarily disabled generation of partial updates for Nightly and
> Dev-Edition (Aurora) versions of Firefox.
>
> Given that Dev-Ed
We've temporarily disabled generation of partial updates for Nightly and
Dev-Edition (Aurora) versions of Firefox.
Given that Dev-Edition updates are currently frozen as part of our uplift
process, the main impact of this is on Nightly users.
We hope to have partial update generation re-enabled i
Very interesting, thank you!
Would there be a way to add an environment variable or harness flag to run
all tests in chaos mode?
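Roughly the kind of gate I have in mind (a hypothetical sketch; MOZ_CHAOSMODE here is an assumed variable name, not a documented flag):

```shell
# Hypothetical: one env var the harness could consult to force chaos mode
# for every test, rather than opting in per-test.
chaos_enabled() {
    [ "${MOZ_CHAOSMODE:-0}" != "0" ]
}

MOZ_CHAOSMODE=1
if chaos_enabled; then
    echo "chaos mode: on for all tests"
else
    echo "chaos mode: off"
fi
```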
On Thu, Jun 4, 2015 at 5:31 PM, Chris Peterson
wrote:
> On 6/4/15 11:32 AM, kgu...@mozilla.com wrote:
>
>> I just landed bug 1164218 on inbound, which adds the abilit
Sounds great! I've filed
https://bugzilla.mozilla.org/show_bug.cgi?id=1161282 for this.
According to
https://secure.pub.build.mozilla.org/builddata/reports/reportor/daily/highscores/highscores.html,
we still have a ton of people using '-p all -u all' on try
On Mon, May 4, 2015 at 5:12 PM, Gregory
Hi,
Just a quick note that we're hoping to enable 64-bit windows builds &
tests across most trunk branches this week. This includes branches such
as mozilla-central, mozilla-inbound, fx-team, etc.
In order to get adequate test coverage without at the same time
overwhelming our windows test i
On 18:45, Mon, 13 Oct, Jonas Sicking wrote:
On Mon, Oct 13, 2014 at 5:52 PM, Gregory Szorc wrote:
On 10/13/14 5:42 PM, Andreas Gal wrote:
I looked at lzma2 a while ago for FFOS. I got pretty consistently 30%
smaller omni.ja with that. We could add it pretty easily to our
decompression code b
On 17:26, Tue, 23 Sep, Kyle Huey wrote:
On Tue, Aug 26, 2014 at 8:23 AM, Chris AtLee wrote:
Just a short note to say that this experiment is now live on
mozilla-inbound.
___
dev-tree-management mailing list
dev-tree-managem...@lists.mozilla.org
Just a short note to say that this experiment is now live on
mozilla-inbound.
On 17:37, Wed, 20 Aug, Jonas Sicking wrote:
On Wed, Aug 20, 2014 at 4:24 PM, Jeff Gilbert wrote:
I have been asked in the past if we really need to run WebGL tests on Android,
if they have coverage on Desktop platforms.
And then again later, why B2G if we have Android.
There seems to be enoug
On 18:25, Tue, 19 Aug, Ehsan Akhgari wrote:
On 2014-08-19, 5:49 PM, Jonathan Griffin wrote:
On 8/19/2014 2:41 PM, Ehsan Akhgari wrote:
On 2014-08-19, 3:57 PM, Jeff Gilbert wrote:
I would actually say that debug tests are more important for
continuous integration than opt tests. At least in cod
On 17:37, Sat, 22 Feb, L. David Baron wrote:
On Saturday 2014-02-22 15:57 -0800, Gregory Szorc wrote:
On Feb 22, 2014, at 8:18, Kyle Huey wrote:
> If you needed another reason to follow the style guide:
> https://www.imperialviolet.org/2014/02/22/applebug.html
>
Code coverage would have caught
Starting today [1], you'll see a new symbol on TBPL: "Bn". These are builds
running with unified sources disabled. We're now running these
periodically on 64-bit linux (opt and debug) on all trees on the same
cadence as the PGO builds.
The purpose of these builds is to catch build problems tha
On 18:23, Mon, 02 Dec, Ehsan Akhgari wrote:
As for identifying broken non-unified builds, can we configure one of
our mozilla-inbound platforms to be non-unified (like 32-bit Linux Debug)?
I think the answer to that question depends on how soon bug 942167 can
be fixed. Chris, any ideas?
We'
On 15:10, Tue, 05 Nov, James Graham wrote:
On 05/11/13 14:57, Kyle Huey wrote:
On Tue, Nov 5, 2013 at 10:44 PM, David Burns wrote:
We appear to be doing 1 backout for every 15 pushes on a rough average[4].
This number I am sure you can all agree is far too high especially if we
think about th
Hi!
Leak tests on OSX have been failing intermittently for nearly a year
now[1]. As yet, we don't have any ideas why they're failing, and nobody
is working on fixing them.
Would anybody be very sad if we shut them off? Are these tests providing
useful information any more?
If they are stil
On 11:50, Fri, 07 Jun, Tim Chien wrote:
I suggest the Apps team should be aware of this and potentially invest
some resources into the Windows build, if we want to fully support FxOS
Simulator as a product.
It's my understanding that the FxOS Simulator doesn't directly depend on
these builds. Can
Windows desktop b2g builds have been pretty broken for several weeks
now, since around May 24 [1].
At this week's engineering and b2g meetings we discussed shutting these
off if nobody has a strong reason to keep them around.
If you are currently depending on the builds for anything, please l
On 02:54, Tue, 30 Apr, Justin Lebar wrote:
Is there sanity to this proposal or am I still crazy?
If we had a lot more project branches, wouldn't that increase the load
on infra dramatically, because we'd have less coalescing?
Yes, it would decrease coalescing. I wonder how many tree closures
On 14:29, Fri, 26 Apr, Gregory Szorc wrote:
On 4/26/2013 2:06 PM, Kartikaya Gupta wrote:
On 13-04-26 11:37 , Phil Ringnalda wrote:
Unfortunately, engineering is totally indifferent to
things like having doubled the cycle time for Win debug browser-chrome
since last November.
Is there a bug
On 01:48, Thu, 25 Apr, Justin Lebar wrote:
One idea might be to give developers feedback on the consequences of a
particular push, e.g. the AWS cost, a proxy for "time during which
developers couldn't push" or some other measurable metric. Right now
each push probably "feels" as "expensive" as e
On 16:34, Tue, 23 Apr, Gervase Markham wrote:
On 23/04/13 10:17, Ed Morley wrote:
Given that local machine time scales linearly with the rate at which we
hire devs (unlike our automation capacity), I think we need to work out
why (some) people aren't doing things like compiling locally and runni
On 16:11, Wed, 03 Apr, Gregory Szorc wrote:
On 4/3/13 3:36 PM, L. David Baron wrote:
On Wednesday 2013-04-03 17:31 -0400, Kartikaya Gupta wrote:
1. Take the latest green m-c change, commit your patch(es) on top of
it, and push it to try.
2. If your try push is green, flag it for eventual merge
> 3. Try to delay disabling PGO/LTCG until the next time that we hit the
> limit, and disable PGO/LTCG then once and for all. In order to
> implement this solution, we're going to need:
> * A person to own watching the graphs and report back when we step
> inside the danger zone again.
I th
On 18/10/12 06:44 PM, Justin Lebar wrote:
>> Do we still have the bug where a test that finishes first, but is from a
>> later cset (say a later cset IMPROVES Ts by 4% or more) would make us
>> think we regressed it on an earlier cset if that earlier talos run
>> finishes later?
>>
>> Such that we
On 12/10/12 01:01 PM, Chris Peterson wrote:
> On 10/11/12 7:36 PM, Justin Lebar wrote:
>>> 2. Linux is the foundation of B2G and Firefox for Android, where we
>>> *definitely* must deliver
>>> the fastest product we can
>>
>> I totally agree, but it's not clear to me whether continuing to do PGO
>>
On 01/10/12 10:51 PM, Justin Lebar wrote:
For case 1., an idea that has been floated now and again (in Automation and
Tools and Release Engineering, anyway) is landing directly from try ->
inbound (or central) for green try pushes. However, this isn't a small
endeavor, both for the reasons of bu
On 30/09/12 03:43 AM, Justin Lebar wrote:
We're all trying to build the best system we can here. We've been publishing
as much raw data as we can, as well as reports like wait time data for ages.
We're not trying to hide this stuff away.
I understand. My point is just that the data we currentl
On 29/09/12 05:30 PM, Gary Kwong wrote:
I think we should have this data feed into a cronjob that emails the
top ~5 weekly (ab)users of try, notifies them of their impact, and
suggests ways they can help avoid using these resources unnecessarily.
Gavin
I agree with Gavin, the top users ought t
On 29/09/12 04:14 PM, Justin Lebar wrote:
One proposal that's been made elsewhere
(https://bugzilla.mozilla.org/show_bug.cgi?id=791385) is to have a soft limit
of one active push per developer on try. If you try and push a 2nd time before
your previous jobs are all finished, you will be asked t
On 28/09/12 09:28 PM, Boris Zbarsky wrote:
On 9/28/12 8:32 PM, Chris Pearce wrote:
This is indeed unfortunate. However I'd prefer to add more capacity to
our test infrastructure, rather than discourage developers from properly
testing before landing.
I think the concern is the definition of "p
On 31/08/12 03:59 PM, Ehsan Akhgari wrote:
On 12-08-31 11:45 AM, Chris AtLee wrote:
On 31/08/12 11:32 AM, Ehsan Akhgari wrote:
> There are extremely non-stable Talos tests, and relatively stable ones.
> Let's focus on the relatively stable ones. There are extremely hard
On 31/08/12 11:32 AM, Ehsan Akhgari wrote:
> There are extremely non-stable Talos tests, and relatively stable ones.
> Let's focus on the relatively stable ones. There are extremely hard
> to diagnose performance regressions, and extremely easy ones (i.e.,
> let's not wait on this lock, do this
All the gory details in bug 773120 [1].
Cheers,
Chris
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=773120
Sorry, this should be https://bugzilla.mozilla.org/show_bug.cgi?id=775620
Hi,
In the next day or so we're going to be changing the machines we're
using to do most of our fennec builds. This *should* be a no-op in terms
of functionality of the builds; we're using the same SDKs as before. The
new builds will be happening on slaves called 'bld-centos6-hp-XXX' or
'bld-