Re: Faster gecko builds with IceCC on Mac and Linux

2018-02-18 Thread Jean-Yves Avenard
Hi

So I got this to work on all platforms (macOS, Ubuntu 17.10 and Windows 10)
Stock speed, no OC of any type.

macOS: 7m32s
Windows 10: 12m20s
Linux Ubuntu 17.10 (had to install kernel 4.15): 6m04s

So not much better than the 10-core iMac Pro…

> On 2 Feb 2018, at 7:54 pm, Jean-Yves Avenard  wrote:
> 
> Intel i9-7980XE
> Asus Prime X299-Deluxe
> Samsung 960 Pro SSD
> G.Skill F4-3200OC16Q-32GTZR x 2 (allowing 64GB in quad channels)
> Corsair AX1200i PSU
> Corsair H100i water cooler
> Cooler Master Silencio 652S
> 
> The aim is the fastest and quietest PC (if such a thing exists).
> The price on Amazon is 4400 euros which is well below the iMac Pro cost (less 
> than half for similar core count) or the Lenovo P710.
> 
> I chose this motherboard because there are successful reports on the 
> hackintosh forums of it running macOS High Sierra (though without wifi support).




smime.p7s
Description: S/MIME cryptographic signature
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2018-02-02 Thread Jean-Yves Avenard
Hi

> On 17 Jan 2018, at 12:38 am, Gregory Szorc  wrote:
> 
> On an EC2 c5.17xlarge (36+36 CPUs) running Ubuntu 17.10 and using Clang 5.0, 
> 9be7249e74fd does a clobber but configured `mach build` in 7:34. Rust is very 
> obviously the long pole in this build, with C++ compilation (not linking) 
> completing in ~2 minutes.
> 
> If I enable sccache for just Rust by setting mk_add_options "export 
> RUSTC_WRAPPER=sccache" in my mozconfig, a clobber build with a populated cache 
> for Rust completes in 3:18. And Rust is still the long pole - although only 
> by a few seconds. It's worth noting that CPU time for this build remains in 
> the same ballpark. But overall CPU utilization increases from ~28% to ~64%. 
> There's still work to do improving the efficiency of the overall build 
> system. But these are mostly in parts only touched by clobber builds. If you 
> do `mach build binaries` after touching compiled code, our CPU utilization is 
> terrific.
> 
> From a build system perspective, C/C++ scales up to dozens of cores just fine 
> (it's been this way for a few years). Rust is becoming a longer and longer 
> long tail (assuming you have enough CPU cores that the vast majority of C/C++ 
> completes before Rust does).

After playing with the iMac Pro and loving its performance (though I’ve 
returned it now)

I was thinking of testing this configuration

Intel i9-7980XE
Asus Prime X299-Deluxe
Samsung 960 Pro SSD
G.Skill F4-3200OC16Q-32GTZR x 2 (allowing 64GB in quad channels)
Corsair AX1200i PSU
Corsair H100i water cooler
Cooler Master Silencio 652S

The aim is the fastest and quietest PC (if such a thing exists).
The price on Amazon is 4400 euros which is well below the iMac Pro cost (less 
than half for similar core count) or the Lenovo P710.

I chose this motherboard because there are successful reports on the 
hackintosh forums of it running macOS High Sierra (though without wifi support).

Any ideas when the updated Lenovo P710 will come out?

Anandtech had a nice article about the i9-7980XE's clock speed as a function 
of the number of cores in use… It clearly shows that base frequency matters 
very little, as the turbo frequencies make them almost all equal.

JY



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-31 Thread Randell Jesup
>On 1/16/18 2:59 PM, smaug wrote:

>Would it be possible that when I do an hg pull of mozilla-central or
>mozilla-inbound, I can also choose to download the object files from the
>most recent ancestor that had an automation build? (It could be a separate
>command, or ./mach pull.) They would go into a local ccache (or probably
>sccache?) directory. The files would need to be atomically updated with
>respect to my own builds, so I could race my build against the
>download. And preferably the download would go roughly in the reverse order
>as my own build, so they would meet in the middle at some point, after
>which only the modified files would need to be compiled. It might require
>splitting debug info out of the object files for this to be practical,
>where the debug info could be downloaded asynchronously in the background
>after the main build is complete.

Stolen from a document on Workflow Efficiencies I worked on:

Some type of aggressive pull-and-rebuild in the background may help
by providing a ‘hot’ objdir that can be switched to in place of the
normal “hg pull -u; ./mach build” sequence.

Users would need to deal with reloading editor buffers after
switching, but that’s normal after a pull.  If the path changes it
might require more magic; Emacs could deal with that easily with an
elisp macro; not sure about other editors people use.  Keeping paths
to source the same after a pull is a win, though.

Opportunistic rebuilds as you edit source might help, but the win is
much smaller and would be more work.  Still worth looking at,
especially if you happen to touch something central.

We'd need to be careful how it interacts with things like hg pull,
switching branches, etc. (defer starting builds slightly until source
has been unchanged for N seconds?)
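
That debounce idea can be sketched with a plain mtime check. This is only an
illustration: the `hg`/`mach` commands in the driver loop are placeholders,
and the quiet window is arbitrary.

```shell
#!/bin/sh
# Return success iff no file under $1 was modified in the last $2 minutes,
# i.e. the tree has been quiet long enough to start a background build.
quiescent() {
    dir=$1
    mins=$2
    # find prints a path only if something is newer than $mins minutes;
    # an empty result means the tree is quiescent.
    [ -z "$(find "$dir" -type f -mmin "-$mins" -print -quit)" ]
}

# Illustrative driver loop (commented out; commands are placeholders):
#   while :; do
#       hg pull -u
#       quiescent "$SRCDIR" 1 && ./mach build
#       sleep 60
#   done
```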

I talked a fair bit about this with ted and others.  The main trick here
would be in dealing with cache directories, and with sccache we could
make it support a form of hierarchy for caches (local and remote), so
you could leverage either local rebuilds-in-background (triggered by
automatic pulls on repo updates), or remote build resources (such as
from the m-c build machines).

Note that *any* remote-cache utilization depends on a fixed (or at least
identical-and-checked) configuration *and* compiler and system
includes.  The easiest way to achieve this might be to leverage a local
VM instance of taskcluster, since system includes vary
machine-to-machine, even for the same OS version.  (Perhaps this is less
of an issue on Mac or Windows...).

This requirement greatly complicates things (and requires building a
"standard" config, which many do not).  Leveraging local background
builds would be much easier in many ways, though also less of a win.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-24 Thread Nicholas Alexander



> Would it be possible that when I do an hg pull of mozilla-central or
> mozilla-inbound, I can also choose to download the object files from the
> most recent ancestor that had an automation build? (It could be a separate
> command, or ./mach pull.) They would go into a local ccache (or probably
> sccache?) directory. The files would need to be atomically updated with
> respect to my own builds, so I could race my build against the download.
> And preferably the download would go roughly in the reverse order as my own
> build, so they would meet in the middle at some point, after which only the
> modified files would need to be compiled. It might require splitting debug
> info out of the object files for this to be practical, where the debug info
> could be downloaded asynchronously in the background after the main build
> is complete.
>

Just FYI, in Austin (December 2017, for the archives) the build peers
discussed something like this.  The idea would be to figure out how to
slurp (some part of) an object directory produced in automation, in order
to get cache hits locally.  We really don't have a sense for how much of an
improvement this might be in practice, and it's a non-trivial effort to
investigate enough to find out.  (I wanted to work on it but it doesn't fit
my current hats.)

My personal concern is that our current build system doesn't have a single
place that can encode policy about our build.  That is, there's nothing to
control the caching layers and to schedule jobs intelligently (i.e., push
Rust and SpiderMonkey forward, and work harder to get them from a remote
cache).  That could be a distributed job server, but it doesn't have to be:
it just needs to be able to control our build process.  None of the current
build infrastructure (sccache, the recursive make build backend, the
in-progress Tup build backend) is a good home for those kinds of policy
choices.  So I'm concerned that we'd find that an object directory caching
strategy is a good idea... and then have a chasm when it comes to
implementing it and fine-tuning it.  (The chasm from artifact builds to a
compile environment build is a huge pain point, and we don't want to
replicate that.)

> Or, a different idea: have Rust "artifact builds", where I can download
> prebuilt Rust bits when I'm only recompiling C++ code. (Tricky, I know,
> when we have code generation that communicates between Rust and C++.) This
> isn't fundamentally different from the previous idea, or distributed
> compilation in general, if you start to take the exact interdependencies
> into account.


In theory, caching Rust crate artifacts is easier than caching C++ object
files.  (At least, so I'm told.)  In practice, nobody has tried to push
through the issues we might see in the wild.  I'd love to see investigation
into this area, since it seems likely to be fruitful on a short time
scale.  In a different direction, I am aware of some work (cited in this
thread?) towards an icecream-like job server for distributed Rust
compilation.  Doesn't hit the artifact build style caching, but related.

Best,
Nick


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jeff Gilbert
It's way cheaper to get build clusters rolling than to get beefy
hardware for every desk.
Distributed compilation or other direct build optimizations also allow
continued use of laptops for most devs, which definitely has value.

On Wed, Jan 17, 2018 at 11:22 AM, Jean-Yves Avenard
 wrote:
>
>
>> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
>>
>> Something simple with the jobserver logic might work here, but I think we
>> want to complete the long-term project of getting a complete dependency
>> graph available before looking at that kind of optimization.
>
> Just get every person needing to work on mac an iMac Pro, and those on 
> Windows/Linux a P710 or better and off we go.
>


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Jean-Yves Avenard


> On 17 Jan 2018, at 8:14 pm, Ralph Giles  wrote:
> 
> Something simple with the jobserver logic might work here, but I think we
> want to complete the long-term project of getting a complete dependency
> graph available before looking at that kind of optimization.

Just get every person needing to work on mac an iMac Pro, and those on 
Windows/Linux a P710 or better and off we go.



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Ralph Giles
On Wed, Jan 17, 2018 at 10:27 AM, Steve Fink  wrote:


> Would it be possible that when I do an hg pull of mozilla-central or
> mozilla-inbound, I can also choose to download the object files from the
> most recent ancestor that had an automation build?


You mention 'artifact builds' so I assume you know about `ac_add_options
--enable-artifact-builds`, which does this for the final libXUL target,
greatly speeding up the first build for people working on the parts of
Firefox outside Gecko.

In the build team we've been discussing for a while if there's a way to
make this more granular. The most concrete plan is to use sccache again.
This tool already supports multi-level (local and remote) caches, so it
could certainly pull the latest object files from a CI build; it already
does this when running in automation. There are still some 'reproducible
build' issues which block general use of this: source directory prefixes
not matching, __FILE__ and __DATE__, different build flags between
automation and the default developer builds, that sort of thing. These
prevent cache hits when compiling the same code. There aren't too many
left; help would be welcome working out the last few if you're interested.
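
The source-prefix mismatch, for example, is the kind of thing the compilers'
prefix-map flags address. A sketch of a mozconfig fragment, with the caveat
that the canonical path here is an assumption and flag availability depends on
your toolchain (`-ffile-prefix-map` also rewrites `__FILE__`; older compilers
only offer `-fdebug-prefix-map`, which covers debug info alone):

```shell
# mozconfig fragment (paths are assumptions): rewrite the local source prefix
# to a fixed canonical one so object files hash identically to CI's.
export CFLAGS="$CFLAGS -ffile-prefix-map=$topsrcdir=/builds/worker/gecko"
export CXXFLAGS="$CXXFLAGS -ffile-prefix-map=$topsrcdir=/builds/worker/gecko"
```

Note this does nothing for `__DATE__` or differing build flags; those need
separate fixes.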

We've also discussed having sccache race local build and remote cache fetch
as you suggest, but not the kind of global scheduling you talk about.
Something simple with the jobserver logic might work here, but I think we
want to complete the long-term project of getting a complete dependency
graph available before looking at that kind of optimization.

FWIW,
 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Simon Sapin

On 17/01/18 19:27, Steve Fink wrote:

> Would it be possible that when I do an hg pull of mozilla-central or
> mozilla-inbound, I can also choose to download the object files from the
> most recent ancestor that had an automation build? (It could be a
> separate command, or ./mach pull.) They would go into a local ccache (or
> probably sccache?) directory.


I believe that sccache already has support for Amazon S3. I don’t know 
if we already enable that for our CI infra. Once we do, I imagine we 
could make that store world-readable and configure local builds to use it.
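
sccache's S3 backend is configured through environment variables; a
hypothetical setup for such a shared, read-mostly cache might look like this
(the bucket name and region are placeholders, not real infrastructure):

```shell
# Point sccache at a shared S3 bucket (names here are made up for
# illustration), then route compilations through it.
export SCCACHE_BUCKET=example-gecko-sccache   # placeholder bucket name
export SCCACHE_REGION=us-west-2               # placeholder AWS region
export RUSTC_WRAPPER=sccache                  # cache rustc invocations too
```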


--
Simon Sapin


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-17 Thread Steve Fink

On 1/16/18 2:59 PM, smaug wrote:
> On 01/16/2018 11:41 PM, Mike Hommey wrote:
>> On Tue, Jan 16, 2018 at 10:02:12AM -0800, Ralph Giles wrote:
>>> On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard wrote:
>>>
>>>> But I would be interested in knowing how long that same Lenovo P710
>>>> takes to compile *today*….
>>>
>>> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
>>>
>>> debug -Og build with gcc: 12:34
>>> debug -Og build with clang: 12:55
>>> opt build with clang: 11:51
>>>
>>>> Interestingly, I can almost no longer get any benefits when using
>>>> icecream, with 36 cores it saves 11s, with 52 cores it saves 50s only…
>>>
>>> Are you saturating all 52 cores during the builds? Most of the increase
>>> in build time is new Rust code, and icecream doesn't distribute Rust. So
>>> in addition to some long compile times for final crates limiting the
>>> minimum build time, icecream doesn't help much in the run-up either.
>>> This is why I'm excited about the distributed build feature we're adding
>>> to sccache.
>>
>> Distributed compilation of rust won't help unfortunately. That won't
>> solve the fact that the long pole of rust compilation is a series of
>> multiple long single-threaded processes that can't happen in parallel
>> because each of them depends on the output of the previous one.
>>
>> Mike
>
> Distributed compilation also won't help those remotees who may not have
> machines to set up icecream or distributed sccache.
> (I just got a new laptop because rust compilation is so slow.)
> I'm hoping the rust compiler gets some heavy optimizations itself.


I'm in the same situation, which reminds me of something I wrote long ago,
shortly after joining Mozilla:
https://wiki.mozilla.org/Sfink/Thought_Experiment_-_One_Minute_Builds
(no need to read it, it's ancient history now. It's kind of a fun read IMO,
though you have to remember that it long predates mozilla-inbound, autoland,
linux64, and sccache, and was in the dawn of the Era of Sheriffing, so build
breakages were more frequent and more damaging.) But in there, I speculated
about ways to get other machines' built object files into a local ccache. So
here's my latest handwaving:

Would it be possible that when I do an hg pull of mozilla-central or
mozilla-inbound, I can also choose to download the object files from the
most recent ancestor that had an automation build? (It could be a separate
command, or ./mach pull.) They would go into a local ccache (or probably
sccache?) directory. The files would need to be atomically updated with
respect to my own builds, so I could race my build against the download.
And preferably the download would go roughly in the reverse order as my own
build, so they would meet in the middle at some point, after which only the
modified files would need to be compiled. It might require splitting debug
info out of the object files for this to be practical, where the debug info
could be downloaded asynchronously in the background after the main build
is complete.

Or, a different idea: have Rust "artifact builds", where I can download
prebuilt Rust bits when I'm only recompiling C++ code. (Tricky, I know,
when we have code generation that communicates between Rust and C++.)
This isn't fundamentally different from the previous idea, or distributed
compilation in general, if you start to take the exact interdependencies
into account.





Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Gregory Szorc
On Tue, Jan 16, 2018 at 1:42 PM, Ted Mielczarek  wrote:

> On Tue, Jan 16, 2018, at 10:51 AM, Jean-Yves Avenard wrote:
> > Sorry for resuming an old thread.
> >
> > But I would be interested in knowing how long that same Lenovo P710
> > takes to compile *today*….
> > In the past 6 months, compilation times have certainly increased
> > massively.
> >
> > Anyhow, yesterday I received the iMac Pro I ordered in early December.
> > It’s a 10-core Xeon-W (W-2150B) with 64GB of RAM.
> >
> > Here are the timings I measured, in comparison with the Mac Pro 2013 I
> > have (which until today was the fastest machine I had ever used):
> >
> > macOS 10.13.2:
> > Mac Pro late 2013 : 13m25s
> > iMac Pro : 7m20s
> >
> > Windows 10 Fall Creators Update:
> > Mac Pro late 2013 : 24m32s (was 16 minutes less than a year ago!)
> > iMac Pro : 14m07s (16m10s with Windows Defender running)
> >
> > Interestingly, I can almost no longer get any benefits when using
> > icecream; with 36 cores it saves 11s, with 52 cores it saves 50s only…
> >
> > It’s a very sweet machine indeed
>
> I just did a couple of clobber builds against the tip of central
> (9be7249e74fd) on my P710 running Windows 10 Fall Creators Update
> and they took about 22 minutes each. Definitely slower than it
> used to be :-/
>
On an EC2 c5.17xlarge (36+36 CPUs) running Ubuntu 17.10 and using Clang
5.0, 9be7249e74fd does a clobber but configured `mach build` in 7:34. Rust
is very obviously the long pole in this build, with C++ compilation (not
linking) completing in ~2 minutes.

If I enable sccache for just Rust by setting mk_add_options "export
RUSTC_WRAPPER=sccache" in my mozconfig, a clobber build with a populated
cache for Rust completes in 3:18. And Rust is still the long pole -
although only by a few seconds. It's worth noting that CPU time for this
build remains in the same ballpark. But overall CPU utilization increases
from ~28% to ~64%. There's still work to do improving the efficiency of the
overall build system. But these are mostly in parts only touched by clobber
builds. If you do `mach build binaries` after touching compiled code, our
CPU utilization is terrific.
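
For reference, the mozconfig change described above can be written as the
fragment below; the cache-size line is an optional extra I've added as an
assumption, not part of the report.

```shell
# mozconfig fragment: wrap only rustc invocations with sccache; C/C++
# compilation is left untouched.
mk_add_options "export RUSTC_WRAPPER=sccache"
# Optional (assumed, not from the report above): enlarge the local cache so
# big crates like style survive between clobbers.
mk_add_options "export SCCACHE_CACHE_SIZE=20G"
```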

From a build system perspective, C/C++ scales up to dozens of cores just
fine (it's been this way for a few years). Rust is becoming a longer and
longer long tail (assuming you have enough CPU cores that the vast majority
of C/C++ completes before Rust does).


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread smaug

On 01/16/2018 11:41 PM, Mike Hommey wrote:
> On Tue, Jan 16, 2018 at 10:02:12AM -0800, Ralph Giles wrote:
>> On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard wrote:
>>
>>> But I would be interested in knowing how long that same Lenovo P710 takes
>>> to compile *today*….
>>
>> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
>>
>> debug -Og build with gcc: 12:34
>> debug -Og build with clang: 12:55
>> opt build with clang: 11:51
>>
>>> Interestingly, I can almost no longer get any benefits when using icecream,
>>> with 36 cores it saves 11s, with 52 cores it saves 50s only…
>>
>> Are you saturating all 52 cores during the builds? Most of the increase in
>> build time is new Rust code, and icecream doesn't distribute Rust. So in
>> addition to some long compile times for final crates limiting the minimum
>> build time, icecream doesn't help much in the run-up either. This is why
>> I'm excited about the distributed build feature we're adding to sccache.
>
> Distributed compilation of rust won't help unfortunately. That won't
> solve the fact that the long pole of rust compilation is a series of
> multiple long single-threaded processes that can't happen in parallel
> because each of them depends on the output of the previous one.
>
> Mike



Distributed compilation also won't help those remotees who may not have
machines to set up icecream or distributed sccache.
(I just got a new laptop because rust compilation is so slow.)
I'm hoping the rust compiler gets some heavy optimizations itself.


-Olli


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Ted Mielczarek
On Tue, Jan 16, 2018, at 10:51 AM, Jean-Yves Avenard wrote:
> Sorry for resuming an old thread.
>
> But I would be interested in knowing how long that same Lenovo P710
> takes to compile *today*….
> In the past 6 months, compilation times have certainly increased
> massively.
>
> Anyhow, yesterday I received the iMac Pro I ordered in early December.
> It’s a 10-core Xeon-W (W-2150B) with 64GB of RAM.
>
> Here are the timings I measured, in comparison with the Mac Pro 2013 I
> have (which until today was the fastest machine I had ever used):
>
> macOS 10.13.2:
> Mac Pro late 2013 : 13m25s
> iMac Pro : 7m20s
>
> Windows 10 Fall Creators Update:
> Mac Pro late 2013 : 24m32s (was 16 minutes less than a year ago!)
> iMac Pro : 14m07s (16m10s with Windows Defender running)
>
> Interestingly, I can almost no longer get any benefits when using
> icecream; with 36 cores it saves 11s, with 52 cores it saves 50s only…
>
> It’s a very sweet machine indeed

I just did a couple of clobber builds against the tip of central
(9be7249e74fd) on my P710 running Windows 10 Fall Creators Update
and they took about 22 minutes each. Definitely slower than it
used to be :-/
-Ted




Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Mike Hommey
On Tue, Jan 16, 2018 at 10:02:12AM -0800, Ralph Giles wrote:
> On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard wrote:
>
> > But I would be interested in knowing how long that same Lenovo P710 takes
> > to compile *today*….
>
> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
>
> debug -Og build with gcc: 12:34
> debug -Og build with clang: 12:55
> opt build with clang: 11:51
>
> > Interestingly, I can almost no longer get any benefits when using icecream,
> > with 36 cores it saves 11s, with 52 cores it saves 50s only…
>
> Are you saturating all 52 cores during the builds? Most of the increase in
> build time is new Rust code, and icecream doesn't distribute Rust. So in
> addition to some long compile times for final crates limiting the minimum
> build time, icecream doesn't help much in the run-up either. This is why
> I'm excited about the distributed build feature we're adding to sccache.

Distributed compilation of rust won't help unfortunately. That won't
solve the fact that the long pole of rust compilation is a series of
multiple long single-threaded processes that can't happen in parallel
because each of them depends on the output of the previous one.

Mike


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Jean-Yves Avenard


> On 16 Jan 2018, at 8:19 pm, Jean-Yves Avenard wrote:
>
>> On 16 Jan 2018, at 7:02 pm, Ralph Giles wrote:
>>
>> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
>>
>> debug -Og build with gcc: 12:34
>> debug -Og build with clang: 12:55
>> opt build with clang: 11:51
>
> I didn’t succeed in booting Linux unfortunately, so I can’t compare…
> 12 minutes sounds rather long; it’s about what the Mac Pro is currently
> doing. I typically get compilation times similar to the Mac…

So I didn’t manage to get Linux to boot (I tried all the main distributions).

But I ran a compilation inside VMware on the Mac, allocating “only” 16 cores
(the maximum) and 32GB of RAM; it took 13m51s.

No doubt it would go much lower once I manage to boot Linux.

Damn fast machine!

JY



Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Gregory Szorc
Yes, most of the build time regressions in 2017 came from Rust. Leaning
more heavily on C++ features that require more processing or haven't been
optimized as much as C++ features that have been around for years is likely
also contributing.

Enabling sccache allows Rust compilations to be cached, which makes things
much faster on subsequent builds (since many Rust crates don't change that
often - but a few "large" crates like style do need to rebuild
semi-frequently).

We'll be transitioning workstations to the i9's because they are faster,
cheaper, and have more cores than the Xeons. But if you insist on having
ECC memory, you can still get the dual socket Xeons.

Last I heard Sophana was having trouble finding an OEM supplier for the
i9's (they are still relatively new). But if you want to put in an order for
the i9 before it is listed in the hardware catalog, contact Sophana (CCd)
and you can get the hook up.

While I'm here, we also have a contractor slated to add distributed
compilation to sccache [to replace icecream]. The contractor should start
in ~days. You can send questions, feature requests, etc through Ted for
now. We also had a meeting with IT and security last Friday about more
officially supporting distributed compilation in offices. We want people to
walk into any Mozilla office in the world and have distributed compilation
"just work." Hopefully we can deliver that in 2018.

On Tue, Jan 16, 2018 at 11:35 AM, Ralph Giles wrote:

> On Tue, Jan 16, 2018 at 11:19 AM, Jean-Yves Avenard wrote:
>
>> 12 minutes sounds rather long, it’s about what the macpro is currently
>> doing. I typically get compilation times similar to mac...
>
> Yes, I'd like to see 7 minute build times again too! The E5-2643 has a
> higher clock speed than the Xeon W in the iMac Pro (3.4 vs 3.0 GHz) but a
> much lower peak frequency (3.7 vs 4.5 GHz) so maybe the iMac catches up
> during the single-process bottlenecks. Or it could be memory bandwidth.
>
>  -r


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Ralph Giles
On Tue, Jan 16, 2018 at 11:19 AM, Jean-Yves Avenard wrote:

> 12 minutes sounds rather long, it’s about what the macpro is currently
> doing. I typically get compilation times similar to mac...

Yes, I'd like to see 7 minute build times again too! The E5-2643 has a
higher clock speed than the Xeon W in the iMac Pro (3.4 vs 3.0 GHz) but a
much lower peak frequency (3.7 vs 4.5 GHz) so maybe the iMac catches up
during the single-process bottlenecks. Or it could be memory bandwidth.

 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Jean-Yves Avenard


> On 16 Jan 2018, at 7:02 pm, Ralph Giles wrote:
>
> On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux
>
> debug -Og build with gcc: 12:34
> debug -Og build with clang: 12:55
> opt build with clang: 11:51

I didn’t succeed in booting Linux unfortunately, so I can’t compare…
12 minutes sounds rather long; it’s about what the Mac Pro is currently
doing. I typically get compilation times similar to the Mac...

> Interestingly, I can almost no longer get any benefits when using icecream,
> with 36 cores it saves 11s, with 52 cores it saves 50s only…
>
> Are you saturating all 52 cores during the builds? Most of the increase in
> build time is new Rust code, and icecream doesn't distribute Rust. So in
> addition to some long compile times for final crates limiting the minimum
> build time, icecream doesn't help much in the run-up either. This is why I'm
> excited about the distributed build feature we're adding to sccache.

icemon certainly shows all machines to be running (I ran it with -j36 and -j52).

> I'd still expect some improvement from the C++ compilation though.
>
> It’s a very sweet machine indeed
>
> Glad you finally got one! :)

I'll probably return it though; I prefer to wait for the next Mac Pro.






Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Ralph Giles
On Tue, Jan 16, 2018 at 7:51 AM, Jean-Yves Avenard 
wrote:

> But I would be interested in knowing how long that same Lenovo P710 takes
> to compile *today*….

On my Lenovo P710 (2x2x6 core Xeon E5-2643 v4), Fedora 27 Linux

debug -Og build with gcc: 12:34
debug -Og build with clang: 12:55
opt build with clang: 11:51

> Interestingly, I can almost no longer get any benefits when using icecream,
> with 36 cores it saves 11s, with 52 cores it saves 50s only…

Are you saturating all 52 cores during the builds? Most of the increase in
build time is new Rust code, and icecream doesn't distribute Rust. So in
addition to some long compile times for final crates limiting the minimum
build time, icecream doesn't help much in the run-up either. This is why
I'm excited about the distributed build feature we're adding to sccache.

I'd still expect some improvement from the C++ compilation though.


> It’s a very sweet machine indeed
>

Glad you finally got one! :)

 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2018-01-16 Thread Jean-Yves Avenard
Sorry for resuming an old thread.

But I would be interested in knowing how long that same Lenovo P710 takes to 
compile *today*….
In the past 6 months, compilation times have certainly increased massively.

Anyhow, yesterday I received the iMac Pro I ordered in early December. It’s a 
10-core Xeon-W (W-2150B) with 64GB of RAM.

Here are the timings I measured, in comparison with the Mac Pro 2013 I have 
(which until today was the fastest machine I had ever used):

macOS 10.13.2:
Mac Pro late 2013 : 13m25s
iMac Pro : 7m20s

Windows 10 Fall Creators Update:
Mac Pro late 2013 : 24m32s (was 16 minutes less than a year ago!)
iMac Pro : 14m07s (16m10s with windows defender going)

Interestingly, I can almost no longer get any benefits when using icecream, 
with 36 cores it saves 11s, with 52 cores it saves 50s only…

It’s a very sweet machine indeed

Jean-Yves

> On 24 Mar 2017, at 11:32 am, Ted Mielczarek  wrote:
> 
> Just as a data point, I have one of those Lenovo P710 machines and I get
> 14-15 minute clobber builds on Windows.
> 
> -Ted





Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-10 Thread Randell Jesup
>On 11/7/17 4:13 PM, Sophana "Soap" Aik wrote:
>For the work I do (e.g. backporting security fixes every so often) I need a
>release tree, a beta tree, and ESR tree, and at least 3 tip trees.  That's
>at least 150GB.  If I want to have an effective ccache, that's about
>20-30GB (recall that each objdir is 9+GB!).  Call it 175GB.
>
>If I want to dual-boot or have a VM so I can do both Linux and Windows
>work, that's 350GB.  Plus the actual operating systems involved.  Plus any
>data files that might be being generated as part of work, etc.

I've "solved" this by having a 2T rotating disk for the stuff I don't
use constantly - release and ESR trees, local backups, if need be I'll
move other large things there (media files, RR storage which is
currently in ~/.rr)  I have 4 inbound trees (one dedicated to ASAN) and
head/beta trees, plus a couple of "mothball" trees for reference from
old instances of alder (those could be moved, though I trust rotating
disks far less than SSD.  That said, I have had a (personal/retail) SSD
die.)

Right now on my ~350GB Linux /home partition (there's a windows one too,
though I rarely use it) I have ~220GB used.  (there's also a 50GB /
partition).  src/mozilla is 120GB (including objdirs, though I kill them
fairly aggressively if they're out-of-date).  I should move my final
aurora repo to rotating disk…

I probably am not giving anywhere near enough space to ccache, though.
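Raising the ccache budget is a one-liner. A sketch, using the 20-30GB estimate quoted earlier in the thread (syntax assumes ccache 3.x; older releases use the short form `ccache -M 30G`):

```shell
# Raise the ccache ceiling to roughly match the 20-30GB estimate above,
# then check how full the cache actually gets.
ccache --max-size=30G
ccache -s   # show current size, max size, and hit/miss statistics
```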

Rotating disks are cheap (and easy if you have a desktop; less so, though
not horrible, if you have a laptop, especially with a dock).  They
don't necessarily solve Boris's problem, however.  He could really use a
1TB SSD I suspect.

When I got my current laptop, I asked for some options I saw on Lenovo's
site that weren't the default config.

-- 
Randell Jesup, Mozilla Corp
remove "news" for personal email


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Gregory Szorc
This thread is good feedback. I think changing the default to a 1TB SSD is
a reasonable request.

Please send any future comments regarding hardware to Sophana (
s...@mozilla.com) to increase the chances that feedback is acted on.

On Wed, Nov 8, 2017 at 9:09 AM, Julian Seward  wrote:

> On 08/11/17 17:28, Boris Zbarsky wrote:
>
> > The last desktop I was shipped came with a 512 GB drive.  [..]
> >
> > In practice, I routinely run out of disk space and have to delete
> > objdirs and rebuild them the next day, because I have to build
> > something else in a different srcdir...
>
> I totally agree.  I had a machine with a 512GB SSD and wound up in the
> same endless juggle/compress/delete-and-rebuild game.  I got a new machine
> with a 512GB SSD *and* a 1T HDD, and that helps a lot, although the perf
> hit from the HDD especially when linking libxul is terrible.
>
> J


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Julian Seward
On 08/11/17 17:28, Boris Zbarsky wrote:

> The last desktop I was shipped came with a 512 GB drive.  [..]
>
> In practice, I routinely run out of disk space and have to delete
> objdirs and rebuild them the next day, because I have to build
> something else in a different srcdir...

I totally agree.  I had a machine with a 512GB SSD and wound up in the
same endless juggle/compress/delete-and-rebuild game.  I got a new machine
with a 512GB SSD *and* a 1T HDD, and that helps a lot, although the perf
hit from the HDD especially when linking libxul is terrible.

J


Re: Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Michael de Boer
I’d like to add the VM multiplier: I work mainly on OSX and run a Windows and 
a Linux VM in there, each with its own checkouts and objdirs. Instead of 
allocating comfortably sized virtual disks, I end up resizing them quite 
frequently, keeping them as small as possible to leave space for OSX.

Mike.

> On 8 Nov 2017, at 17:28, Boris Zbarsky  wrote:
> 
> On 11/7/17 4:13 PM, Sophana "Soap" Aik wrote:
>> Nothing is worse than hearing IT picked or chose hardware that nobody
>> actually wanted or will use.
> 
> If I could interject with a comment about the hardware we pick...
> 
> The last desktop I was shipped came with a 512 GB drive.  One of our srcdirs 
> is about 5-8GB nowadays (we seem to have mach commands that dump large stuff 
> in the srcdir).
> 
> Each objdir is 9+GB at least on Linux.  Figure 25GB for source + opt + debug.
> 
> For the work I do (e.g. backporting security fixes every so often) I need a 
> release tree, a beta tree, and ESR tree, and at least 3 tip trees.  That's at 
> least 150GB.  If I want to have an effective ccache, that's about 20-30GB 
> (recall that each objdir is 9+GB!).  Call it 175GB.
> 
> If I want to dual-boot or have a VM so I can do both Linux and Windows work, 
> that's 350GB.  Plus the actual operating systems involved.  Plus any data 
> files that might be being generated as part of work, etc.
> 
> In practice, I routinely run out of disk space and have to delete objdirs and 
> rebuild them the next day, because I have to build something else in a 
> different srcdir...
> 
> -Boris


Bigger hard drives wanted (was Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux))

2017-11-08 Thread Boris Zbarsky

On 11/7/17 4:13 PM, Sophana "Soap" Aik wrote:

Nothing is worse than hearing IT picked or chose hardware that nobody
actually wanted or will use.


If I could interject with a comment about the hardware we pick...

The last desktop I was shipped came with a 512 GB drive.  One of our 
srcdirs is about 5-8GB nowadays (we seem to have mach commands that dump 
large stuff in the srcdir).


Each objdir is 9+GB at least on Linux.  Figure 25GB for source + opt + 
debug.


For the work I do (e.g. backporting security fixes every so often) I 
need a release tree, a beta tree, and ESR tree, and at least 3 tip 
trees.  That's at least 150GB.  If I want to have an effective ccache, 
that's about 20-30GB (recall that each objdir is 9+GB!).  Call it 175GB.


If I want to dual-boot or have a VM so I can do both Linux and Windows 
work, that's 350GB.  Plus the actual operating systems involved.  Plus 
any data files that might be being generated as part of work, etc.
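The budget described above can be totalled up as follows (all sizes in GB and approximate, taken from the figures in this message):

```python
# Disk-budget arithmetic from the numbers above (GB, all approximate).
srcdir = 7            # "about 5-8GB nowadays"
objdir = 9            # "9+GB at least on Linux"
per_tree = srcdir + 2 * objdir   # source + opt objdir + debug objdir
trees = 6             # release, beta, ESR, and at least 3 tip trees
ccache = 25           # "about 20-30GB"
one_os = trees * per_tree + ccache
print(per_tree, one_os, 2 * one_os)   # 25 175 350
```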


In practice, I routinely run out of disk space and have to delete 
objdirs and rebuild them the next day, because I have to build something 
else in a different srcdir...


-Boris


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Sophana "Soap" Aik
Thanks Jeff, I understand your reasoning. 14 cores vs 10 is definitely
huge.

I will also add, there isn't anything stopping us from having more than one
config, just like we do with laptops.

I'm fortunate to be in a position to finally help you all influence the
type of hardware that makes sense for your use cases. Nothing is worse
than hearing IT picked or chose hardware that nobody actually wanted or
will use.

I'll continue to pursue the Core i9 as an option, though currently there
aren't many OEM builders offering these yet.

On Tue, Nov 7, 2017 at 1:00 PM, Jeff Muizelaar 
wrote:

> The Core i9s are quite a bit cheaper than the Xeon Ws:
> https://ark.intel.com/products/series/125035/Intel-Xeon-Processor-W-Family
> vs
> https://ark.intel.com/products/126695
>
> I wouldn't want to trade ECC for 4 cores.
>
> -Jeff
>
> On Tue, Nov 7, 2017 at 3:51 PM, Sophana "Soap" Aik 
> wrote:
> > Kris has touched on the many advantages of having a standard model. From
> > what I am seeing with most people's use case scenario, only the GPU is
> what
> > will determine what the machine is used for. IE: VR Research team may
> end up
> > only needing a GPU upgrade.
> >
> > Fortunately the new W-Series Xeon's seem to be equal or better to the
> Core
> > i9's but with ECC support. So there's no sacrifice to performance in
> single
> > threaded or multi-threaded workloads.
> >
> > With all that said, we'll move forward with the evaluation machine and
> find
> > out for sure in real world testing. :)
> >
> >
> >
> > On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione 
> > wrote:
> >>
> >> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
> >>>
> >>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
> >>> wrote:
> 
>  Hi All,
> 
>  I'm in the middle of getting another evaluation machine with a 10-core
>  W-Series Xeon Processor (that is similar to the 7900X in terms of
> clock
>  speed and performance) but with ECC memory support.
> 
>  I'm trying to make sure this is a "one size fits all" machine as much
> as
>  possible.
> >>>
> >>>
> >>> What's the advantage of having a "one size fits all" machine? I
> >>> imagine there's quite a range of uses and preferences for these
> >>> machines. e.g some people are going to be spending more time waiting
> >>> for a single core and so would prefer a smaller core count and higher
> >>> clock, other people want a machine that's as wide as possible. Some
> >>> people would value performance over correctness and so would likely
> >>> not want ECC. etc. I've heard a number of horror stories of people
> >>> ending up with hardware that's not well suited to their tasks just
> >>> because that was the only hardware on the list.
> >>
> >>
> >> High core count Xeons will divert power from idle cores to increase the
> >> clock speed of saturated cores during mostly single-threaded workloads.
> >>
> >> The advantage of a one-size-fits-all machine is that it means more of us
> >> have the same hardware configuration, which means fewer of us running
> into
> >> independent issues, more of us being able to share software
> configurations
> >> that work well, easier purchasing and stocking of upgrades and
> accessories,
> >> ... I own a personal high-end Xeon workstation, and if every developer
> at
> >> the company had to go through the same teething and configuration
> troubles
> >> that I did while breaking it in, we would not be in a good place.
> >>
> >> And I don't really want to get into the weeds on ECC again, but the
> >> performance of load-reduced ECC is quite good, and the additional cost
> of
> >> ECC is very low compared to the cost of developer time over the two
> years
> >> that they're expected to use it.
> >
> >
> >
> >
> > --
> > moz://a
> > Sophana "Soap" Aik
> > IT Vendor Management Analyst
> > IRC/Slack: soap
>



-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Sophana "Soap" Aik
Kris has touched on the many advantages of having a standard model. From
what I am seeing of most people's use cases, only the GPU will determine
what the machine is used for. E.g. the VR research team may end up only
needing a GPU upgrade.

Fortunately the new W-Series Xeons seem to be equal to or better than the
Core i9s, but with ECC support, so there's no sacrifice in single-threaded
or multi-threaded performance.

With all that said, we'll move forward with the evaluation machine and find
out for sure in real world testing. :)



On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione 
wrote:

> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>
>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>> wrote:
>>
>>> Hi All,
>>>
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>>
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>>
>>
>> What's the advantage of having a "one size fits all" machine? I
>> imagine there's quite a range of uses and preferences for these
>> machines. e.g some people are going to be spending more time waiting
>> for a single core and so would prefer a smaller core count and higher
>> clock, other people want a machine that's as wide as possible. Some
>> people would value performance over correctness and so would likely
>> not want ECC. etc. I've heard a number of horror stories of people
>> ending up with hardware that's not well suited to their tasks just
>> because that was the only hardware on the list.
>>
>
> High core count Xeons will divert power from idle cores to increase the
> clock speed of saturated cores during mostly single-threaded workloads.
>
> The advantage of a one-size-fits-all machine is that it means more of us
> have the same hardware configuration, which means fewer of us running into
> independent issues, more of us being able to share software configurations
> that work well, easier purchasing and stocking of upgrades and accessories,
> ... I own a personal high-end Xeon workstation, and if every developer at
> the company had to go through the same teething and configuration troubles
> that I did while breaking it in, we would not be in a good place.
>
> And I don't really want to get into the weeds on ECC again, but the
> performance of load-reduced ECC is quite good, and the additional cost of
> ECC is very low compared to the cost of developer time over the two years
> that they're expected to use it.
>



-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap



Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Jean-Yves Avenard
With all this talk…

I’m eagerly waiting for the iMac Pro.

Best of all worlds really:
- High core count
- ECC RAM
- 5K 27” display
- Great graphic card
- Super silent…

I’ve been using a Mac Pro 2013 (the trash can one), Xeon E5 8 cores, 32 GB ECC 
RAM, connected to two 27” screens (one 5K with DPI set at 200%, the other a 
2560x1440 Apple thunderbolt)

It runs Windows, Mac and Linux flawlessly (though under Linux I never managed 
to get more than one screen working at a time).

It compiles on Mac, even with Stylo, in under 12 minutes, and on Windows in 19 
minutes (it used to be 6 and 12 minutes respectively before all this Rust code 
came in)… And that's using mach with only 14 jobs, so that I can keep working 
on the machine without noticing it's doing a CPU-intensive task. The UI 
stays ultra responsive.

And best of all, it’s sitting 60cm from my ear and I can’t hear anything at 
all…

This has been my primary machine since 2014; I’ve had no desire to upgrade, as 
no other machine allows me such a comfortable development environment across 
all the platforms we support.

At the beginning it had been difficult to choose between the higher-frequency 
6-core and the 8-core model, but that turned out to be a moot issue: when only 
6 cores are loaded, the 8-core part clocks as high as the 6-core version…

The Mac Pro was an expensive machine, but seeing that it will last me longer 
than the usual machine, I do believe that in the long term it will be the best 
value for money.

My $0.02

> On 8 Nov 2017, at 8:43 am, Henri Sivonen  wrote:
> 
> I agree that workstation GPUs should be avoided. Even if they were as
> well supported by Linux distro-provided Open Source drivers as
> consumer GPUs, it's at the very least more difficult to find
> information about what's true about them.
> 
> We don't need the GPU to be at max spec like we need the CPU to be.
> The GPU doesn't affect build times, and for running Firefox it seems
> more useful to see how it runs with a consumer GPU.
> 
> I think we also shouldn't overdo multi-monitor *connectors* at the
> expense of Linux-compatibility, especially considering that
> DisplayPort is supposed to support monitor chaining behind one port on
> the graphics card. The Quadro M2000 that caused trouble for me had
> *four* DisplayPort connectors. Considering the number of ports vs.
> Linux distros Just Working, I'd expect the prioritizing Linux distros
> Just Working to be more useful (as in letting developers write code
> instead of troubleshoot GPU issues) than having a "professional"
> number of connectors as the configuration offered to people who don't
> ask for a lot of connectors. (The specs for the older generation
> consumer-grade Radeon RX 460 claim 5 DisplayPort screens behind the
> one DisplayPort connector on the card, but I haven't verified it
> empirically, since I don't have that many screens to test with.)
> 
> On Tue, Nov 7, 2017 at 10:27 PM, Jeff Gilbert  > wrote:
>> Avoid workstation GPUs if you can. At best, they're just a more
>> expensive consumer GPU. At worst, they may sacrifice performance we
>> care about in their optimization for CAD and modelling workloads, in
>> addition to moving us further away from testing what our users use. We
>> have no need for workstation GPUs, so we should avoid them if we can.



Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-08 Thread Mike Hommey
On Wed, Nov 08, 2017 at 09:43:29AM +0200, Henri Sivonen wrote:
> I agree that workstation GPUs should be avoided. Even if they were as
> well supported by Linux distro-provided Open Source drivers as
> consumer GPUs, it's at the very least more difficult to find
> information about what's true about them.
> 
> We don't need the GPU to be at max spec like we need the CPU to be.
> The GPU doesn't affect build times, and for running Firefox it seems
> more useful to see how it runs with a consumer GPU.
> 
> I think we also shouldn't overdo multi-monitor *connectors* at the
> expense of Linux-compatibility, especially considering that
> DisplayPort is supposed to support monitor chaining behind one port on
> the graphics card. The Quadro M2000 that caused trouble for me had
> *four* DisplayPort connectors. Considering the number of ports vs.
> Linux distros Just Working, I'd expect the prioritizing Linux distros
> Just Working to be more useful (as in letting developers write code
> instead of troubleshoot GPU issues) than having a "professional"
> number of connectors as the configuration offered to people who don't
> ask for a lot of connectors. (The specs for the older generation
> consumer-grade Radeon RX 460 claim 5 DisplayPort screens behind the
> one DisplayPort connector on the card, but I haven't verified it
> empirically, since I don't have that many screens to test with.)

Yes, you can daisy-chain many monitors with DisplayPort, but there's a
bandwidth limit you need to be aware of.

DP 1.2 can only handle four HD screens at 60Hz, and *one* 4K screen at 60Hz.
DP 1.3 and 1.4 can "only" handle two 4K screens at 60Hz.
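Those limits are easy to sanity-check with back-of-envelope arithmetic (a sketch assuming uncompressed 24-bit colour and ignoring blanking overhead, so real limits are a bit tighter):

```python
# Approximate DisplayPort 1.2 payload budget: 4 lanes x 5.4 Gbit/s (HBR2),
# with 8b/10b coding leaving 80% for payload -> 17.28 Gbit/s.
DP12_GBPS = 4 * 5.4 * 0.8

def stream_gbps(width, height, hz, bpp=24):
    """Rough uncompressed video bandwidth in Gbit/s (no blanking)."""
    return width * height * hz * bpp / 1e9

hd = stream_gbps(1920, 1080, 60)    # ~3.0 Gbit/s per HD screen
uhd = stream_gbps(3840, 2160, 60)   # ~11.9 Gbit/s per 4K screen

print(4 * hd <= DP12_GBPS)   # True  -> four HD@60 streams fit
print(uhd <= DP12_GBPS)      # True  -> one 4K@60 stream fits
print(2 * uhd <= DP12_GBPS)  # False -> two 4K@60 streams exceed the link
```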

Also, support for multi-screen over DP is usually flaky wrt hot-plug. At
least that's been my experience on both Linux and Windows, and I hear
Windows is actually worse. Also, I usually get my monitors set in a
different order when I upgrade the kernel. (And I'm only using two HD
monitors)

Mike


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Henri Sivonen
I agree that workstation GPUs should be avoided. Even if they were as
well supported by Linux distro-provided Open Source drivers as
consumer GPUs, it's at the very least more difficult to find
information about what's true about them.

We don't need the GPU to be at max spec like we need the CPU to be.
The GPU doesn't affect build times, and for running Firefox it seems
more useful to see how it runs with a consumer GPU.

I think we also shouldn't overdo multi-monitor *connectors* at the
expense of Linux-compatibility, especially considering that
DisplayPort is supposed to support monitor chaining behind one port on
the graphics card. The Quadro M2000 that caused trouble for me had
*four* DisplayPort connectors. Considering the number of ports vs.
Linux distros Just Working, I'd expect prioritizing Linux distros
Just Working to be more useful (as in letting developers write code
instead of troubleshooting GPU issues) than having a "professional"
number of connectors as the configuration offered to people who don't
ask for a lot of connectors. (The specs for the older generation
consumer-grade Radeon RX 460 claim 5 DisplayPort screens behind the
one DisplayPort connector on the card, but I haven't verified it
empirically, since I don't have that many screens to test with.)

On Tue, Nov 7, 2017 at 10:27 PM, Jeff Gilbert  wrote:
> Avoid workstation GPUs if you can. At best, they're just a more
> expensive consumer GPU. At worst, they may sacrifice performance we
> care about in their optimization for CAD and modelling workloads, in
> addition to moving us further away from testing what our users use. We
> have no need for workstation GPUs, so we should avoid them if we can.
>
> On Mon, Nov 6, 2017 at 10:32 AM, Sophana "Soap" Aik  wrote:
>> Hi All,
>>
>> I'm in the middle of getting another evaluation machine with a 10-core
>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>> speed and performance) but with ECC memory support.
>>
>> I'm trying to make sure this is a "one size fits all" machine as much as
>> possible.
>>
>> Also there are some AMD Radeon workstation GPU's that look interesting to
>> me. The one I was thinking to include was a Radeon Pro WX2100, 2GB, FH
>> (5820T) so we can start testing that as well.
>>
>> Stay tuned...
>>
>> On Mon, Nov 6, 2017 at 12:46 AM, Henri Sivonen  wrote:
>>
>>> Thank you for including an AMD card among the ones to be tested.
>>>
>>> - -
>>>
>>> The Radeon RX 460 mentioned earlier in this thread arrived. There was
>>> again enough weirdness that I think it's worth sharing in case it
>>> saves time for someone else:
>>>
>>> Initially, for multiple rounds of booting with different cable
>>> configurations, the Lenovo UEFI consistently displayed nothing if a
>>> cable with a powered-on screen was plugged into the DisplayPort
>>> connector on the RX 460. To see the boot password prompt or anything
>>> else displayed by the Lenovo UEFI, I needed to connect a screen to the
>>> DVI port and *not* have a powered-on screen connected to DisplayPort.
>>> However, Lenovo UEFI started displaying on a DisplayPort-connected
>>> screen (with or without DVI also connected) after one time I had had a
>>> powered-on screen connected to DVI and a powered-off screen connected
>>> to DisplayPort at the start of the boot and I turned on the
>>> DisplayPort screen while the DVI screen was displaying the UEFI
>>> password prompt. However, during that same boot, I happened to not to
>>> have a keyboard connected, because it was connected via the screen
>>> that was powered off, and this caused an UEFI error, so I don't know
>>> which of the DisplayPort device powering on during the UEFI phase or
>>> UEFI going through an error phase due to missing keyboard jolted it to
>>> use the DisplayPort screen properly subsequently. Weird.
>>>
>>> On the Linux side, the original Ubuntu 16.04 kernel (4.4) supported
>>> only a low resolution fallback mode. Rolling the hardware enablement
>>> stack forward (to 4.10 series kernel using the incantation given at
>>> https://wiki.ubuntu.com/Kernel/LTSEnablementStack ) fixed this and
>>> resulted in Firefox reporting WebGL2 and all. The fix for
>>> https://bugzilla.kernel.org/show_bug.cgi?id=191281 hasn't propagated
>>> to Ubuntu 16.04's latest HWE stack, which looks distressing during
>>> boot, but it seems harmless so far.
>>>
>>> I got the 4 GB model, since it was available at roughly the same price
>>> as the 2 GB model. It supports both screens I have available for
>>> testing at their full resolution simultaneously (2560x1440 plugged
>>> into DisplayPort and 1920x1200 plugged into DVI).
>>>
>>> The card is significantly larger than the Quadro M2000. It takes the
>>> space of two card slots (connects to one, but the heat sink and the
>>> dual fans take the space of another slot). The fans don't appear to
>>> make an audible difference compared to the Quadro M2000.
>>>
>>> On Fri, Oct 27, 2017 at 6:19 PM, Sophana "Soap" Aik 
>>> wrote

Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Muizelaar
The Core i9s are quite a bit cheaper than the Xeon Ws:
https://ark.intel.com/products/series/125035/Intel-Xeon-Processor-W-Family vs
https://ark.intel.com/products/126695

I wouldn't want to trade ECC for 4 cores.

-Jeff

On Tue, Nov 7, 2017 at 3:51 PM, Sophana "Soap" Aik  wrote:
> Kris has touched on the many advantages of having a standard model. From
> what I am seeing with most people's use case scenario, only the GPU is what
> will determine what the machine is used for. IE: VR Research team may end up
> only needing a GPU upgrade.
>
> Fortunately the new W-Series Xeons seem to be equal to or better than the
> Core i9s, but with ECC support. So there's no sacrifice in performance in single
> threaded or multi-threaded workloads.
>
> With all that said, we'll move forward with the evaluation machine and find
> out for sure in real world testing. :)
>
>
>
> On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione 
> wrote:
>>
>> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>>>
>>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>>> wrote:

>>>> Hi All,
>>>>
>>>> I'm in the middle of getting another evaluation machine with a 10-core
>>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>>> speed and performance) but with ECC memory support.
>>>>
>>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>>> possible.
>>>
>>>
>>> What's the advantage of having a "one size fits all" machine? I
>>> imagine there's quite a range of uses and preferences for these
>>> machines. E.g. some people are going to be spending more time waiting
>>> for a single core and so would prefer a smaller core count and higher
>>> clock, other people want a machine that's as wide as possible. Some
>>> people would value performance over correctness and so would likely
>>> not want ECC, etc. I've heard a number of horror stories of people
>>> ending up with hardware that's not well suited to their tasks just
>>> because that was the only hardware on the list.
>>
>>
>> High core count Xeons will divert power from idle cores to increase the
>> clock speed of saturated cores during mostly single-threaded workloads.
>>
>> The advantage of a one-size-fits-all machine is that it means more of us
>> have the same hardware configuration, which means fewer of us running into
>> independent issues, more of us being able to share software configurations
>> that work well, easier purchasing and stocking of upgrades and accessories,
>> ... I own a personal high-end Xeon workstation, and if every developer at
>> the company had to go through the same teething and configuration troubles
>> that I did while breaking it in, we would not be in a good place.
>>
>> And I don't really want to get into the weeds on ECC again, but the
>> performance of load-reduced ECC is quite good, and the additional cost of
>> ECC is very low compared to the cost of developer time over the two years
>> that they're expected to use it.
>
>
>
>
> --
> moz://a
> Sophana "Soap" Aik
> IT Vendor Management Analyst
> IRC/Slack: soap
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Gilbert
If you don't want to get into the weeds on ECC again, please do not
reinitiate discussion. I do not agree that "the additional cost of ECC
is very low compared to the cost of developer time over the two years
that they're expected to use it", but I will restrict my disagreement
to the forked thread that you created. Please repost there.

On Tue, Nov 7, 2017 at 12:30 PM, Kris Maglione  wrote:
> On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:
>>
>> On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik 
>> wrote:
>>>
>>> Hi All,
>>>
>>> I'm in the middle of getting another evaluation machine with a 10-core
>>> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
>>> speed and performance) but with ECC memory support.
>>>
>>> I'm trying to make sure this is a "one size fits all" machine as much as
>>> possible.
>>
>>
>> What's the advantage of having a "one size fits all" machine? I
>> imagine there's quite a range of uses and preferences for these
>> machines. E.g. some people are going to be spending more time waiting
>> for a single core and so would prefer a smaller core count and higher
>> clock, other people want a machine that's as wide as possible. Some
>> people would value performance over correctness and so would likely
>> not want ECC, etc. I've heard a number of horror stories of people
>> ending up with hardware that's not well suited to their tasks just
>> because that was the only hardware on the list.
>
>
> High core count Xeons will divert power from idle cores to increase the
> clock speed of saturated cores during mostly single-threaded workloads.
>
> The advantage of a one-size-fits-all machine is that it means more of us
> have the same hardware configuration, which means fewer of us running into
> independent issues, more of us being able to share software configurations
> that work well, easier purchasing and stocking of upgrades and accessories,
> ... I own a personal high-end Xeon workstation, and if every developer at
> the company had to go through the same teething and configuration troubles
> that I did while breaking it in, we would not be in a good place.
>
> And I don't really want to get into the weeds on ECC again, but the
> performance of load-reduced ECC is quite good, and the additional cost of
> ECC is very low compared to the cost of developer time over the two years
> that they're expected to use it.
>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Kris Maglione

On Tue, Nov 07, 2017 at 03:07:55PM -0500, Jeff Muizelaar wrote:

On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik  wrote:

Hi All,

I'm in the middle of getting another evaluation machine with a 10-core
W-Series Xeon Processor (that is similar to the 7900X in terms of clock
speed and performance) but with ECC memory support.

I'm trying to make sure this is a "one size fits all" machine as much as
possible.


What's the advantage of having a "one size fits all" machine? I
imagine there's quite a range of uses and preferences for these
machines. E.g. some people are going to be spending more time waiting
for a single core and so would prefer a smaller core count and higher
clock, other people want a machine that's as wide as possible. Some
people would value performance over correctness and so would likely
not want ECC, etc. I've heard a number of horror stories of people
ending up with hardware that's not well suited to their tasks just
because that was the only hardware on the list.


High core count Xeons will divert power from idle cores to increase the 
clock speed of saturated cores during mostly single-threaded workloads.


The advantage of a one-size-fits-all machine is that it means more of us 
have the same hardware configuration, which means fewer of us running 
into independent issues, more of us being able to share software 
configurations that work well, easier purchasing and stocking of 
upgrades and accessories, ... I own a personal high-end Xeon 
workstation, and if every developer at the company had to go through the 
same teething and configuration troubles that I did while breaking it 
in, we would not be in a good place.


And I don't really want to get into the weeds on ECC again, but the 
performance of load-reduced ECC is quite good, and the additional cost 
of ECC is very low compared to the cost of developer time over the two 
years that they're expected to use it.



Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Gilbert
Avoid workstation GPUs if you can. At best, they're just more
expensive consumer GPUs. At worst, they may sacrifice performance we
care about in their optimization for CAD and modelling workloads, in
addition to moving us further away from testing what our users use. We
have no need for workstation GPUs, so we should avoid them if we can.

On Mon, Nov 6, 2017 at 10:32 AM, Sophana "Soap" Aik  wrote:
> Hi All,
>
> I'm in the middle of getting another evaluation machine with a 10-core
> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
> speed and performance) but with ECC memory support.
>
> I'm trying to make sure this is a "one size fits all" machine as much as
> possible.
>
> Also there are some AMD Radeon workstation GPUs that look interesting to
> me. The one I was thinking to include was a Radeon Pro WX2100, 2GB, FH
> (5820T) so we can start testing that as well.
>
> Stay tuned...
>
> On Mon, Nov 6, 2017 at 12:46 AM, Henri Sivonen  wrote:
>
>> Thank you for including an AMD card among the ones to be tested.
>>
>> - -
>>
>> The Radeon RX 460 mentioned earlier in this thread arrived. There was
>> again enough weirdness that I think it's worth sharing in case it
>> saves time for someone else:
>>
>> Initially, for multiple rounds of booting with different cable
>> configurations, the Lenovo UEFI consistently displayed nothing if a
>> cable with a powered-on screen was plugged into the DisplayPort
>> connector on the RX 460. To see the boot password prompt or anything
>> else displayed by the Lenovo UEFI, I needed to connect a screen to the
>> DVI port and *not* have a powered-on screen connected to DisplayPort.
>> However, Lenovo UEFI started displaying on a DisplayPort-connected
>> screen (with or without DVI also connected) after one time I had had a
>> powered-on screen connected to DVI and a powered-off screen connected
>> to DisplayPort at the start of the boot and I turned on the
>> DisplayPort screen while the DVI screen was displaying the UEFI
>> password prompt. However, during that same boot, I happened not to
>> have a keyboard connected, because it was connected via the screen
>> that was powered off, and this caused a UEFI error, so I don't know
>> whether it was the DisplayPort device powering on during the UEFI
>> phase or the UEFI going through an error phase due to the missing
>> keyboard that jolted it into using the DisplayPort screen properly
>> subsequently. Weird.
>>
>> On the Linux side, the original Ubuntu 16.04 kernel (4.4) supported
>> only a low resolution fallback mode. Rolling the hardware enablement
>> stack forward (to 4.10 series kernel using the incantation given at
>> https://wiki.ubuntu.com/Kernel/LTSEnablementStack ) fixed this and
>> resulted in Firefox reporting WebGL2 and all. The fix for
>> https://bugzilla.kernel.org/show_bug.cgi?id=191281 hasn't propagated
>> to Ubuntu 16.04's latest HWE stack, which looks distressing during
>> boot, but it seems harmless so far.
>>
>> I got the 4 GB model, since it was available at roughly the same price
>> as the 2 GB model. It supports both screens I have available for
>> testing at their full resolution simultaneously (2560x1440 plugged
>> into DisplayPort and 1920x1200 plugged into DVI).
>>
>> The card is significantly larger than the Quadro M2000. It takes the
>> space of two card slots (connects to one, but the heat sink and the
>> dual fans take the space of another slot). The fans don't appear to
>> make an audible difference compared to the Quadro M2000.
>>
>> On Fri, Oct 27, 2017 at 6:19 PM, Sophana "Soap" Aik 
>> wrote:
>> > Thank you Henri for the feedback.
>> >
>> > How about this, we can order some graphics cards and put them in the
>> > evaluation/test machine that is with Greg, to make sure it has good
>> > compatibility.
>> >
>> > We could do:
>> > Nvidia GTX 1060 3GB
>> > AMD Radeon RX570
>> >
>> > These two options will ensure it can drive multi displays.
>> >
>> > Other suggestions welcomed.
>> >
>> > Greg, is that something you think we should do?
>> >
>> > On Thu, Oct 26, 2017 at 11:33 PM, Henri Sivonen 
>> > wrote:
>> >>
>> >> On Fri, Oct 27, 2017 at 4:48 AM, Sophana "Soap" Aik 
>> >> wrote:
>> >> > Hello everyone, great feedback that I will keep in mind and continue
>> to
>> >> > work
>> >> > with our vendors to find the best solution with. One of the cards
>> that I
>> >> > was
>> >> > looking at is fairly cheap and can at least drive multi-displays (even
>> >> > 4K
>> >> > 60hz) was the Nvidia Quadro P600.
>> >>
>> >> Is that GPU known to be well-supported by Nouveau of Ubuntu 16.04
>> vintage?
>> >>
>> >> I don't want to deny a single-GPU multi-monitor setup to anyone for
>> >> whom that's the priority, but considering how much damage the Quadro
>> >> M2000 has done to my productivity (and from what I've heard from other
>> >> people on the DOM team, I gather I'm not the only one who has had
>> >> trouble with it), the four DisplayPort connectors on it look like very
>> >> bad economics.
>> >>
>> >> I 

Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Jeff Muizelaar
On Mon, Nov 6, 2017 at 1:32 PM, Sophana "Soap" Aik  wrote:
> Hi All,
>
> I'm in the middle of getting another evaluation machine with a 10-core
> W-Series Xeon Processor (that is similar to the 7900X in terms of clock
> speed and performance) but with ECC memory support.
>
> I'm trying to make sure this is a "one size fits all" machine as much as
> possible.

What's the advantage of having a "one size fits all" machine? I
imagine there's quite a range of uses and preferences for these
machines. E.g. some people are going to be spending more time waiting
for a single core and so would prefer a smaller core count and higher
clock, other people want a machine that's as wide as possible. Some
people would value performance over correctness and so would likely
not want ECC, etc. I've heard a number of horror stories of people
ending up with hardware that's not well suited to their tasks just
because that was the only hardware on the list.

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-07 Thread Sophana "Soap" Aik
Hi All,

I'm in the middle of getting another evaluation machine with a 10-core
W-Series Xeon Processor (that is similar to the 7900X in terms of clock
speed and performance) but with ECC memory support.

I'm trying to make sure this is a "one size fits all" machine as much as
possible.

Also there are some AMD Radeon workstation GPUs that look interesting to
me. The one I was thinking to include was a Radeon Pro WX2100, 2GB, FH
(5820T) so we can start testing that as well.

Stay tuned...

On Mon, Nov 6, 2017 at 12:46 AM, Henri Sivonen  wrote:

> Thank you for including an AMD card among the ones to be tested.
>
> - -
>
> The Radeon RX 460 mentioned earlier in this thread arrived. There was
> again enough weirdness that I think it's worth sharing in case it
> saves time for someone else:
>
> Initially, for multiple rounds of booting with different cable
> configurations, the Lenovo UEFI consistently displayed nothing if a
> cable with a powered-on screen was plugged into the DisplayPort
> connector on the RX 460. To see the boot password prompt or anything
> else displayed by the Lenovo UEFI, I needed to connect a screen to the
> DVI port and *not* have a powered-on screen connected to DisplayPort.
> However, Lenovo UEFI started displaying on a DisplayPort-connected
> screen (with or without DVI also connected) after one time I had had a
> powered-on screen connected to DVI and a powered-off screen connected
> to DisplayPort at the start of the boot and I turned on the
> DisplayPort screen while the DVI screen was displaying the UEFI
> password prompt. However, during that same boot, I happened not to
> have a keyboard connected, because it was connected via the screen
> that was powered off, and this caused a UEFI error, so I don't know
> whether it was the DisplayPort device powering on during the UEFI
> phase or the UEFI going through an error phase due to the missing
> keyboard that jolted it into using the DisplayPort screen properly
> subsequently. Weird.
>
> On the Linux side, the original Ubuntu 16.04 kernel (4.4) supported
> only a low resolution fallback mode. Rolling the hardware enablement
> stack forward (to 4.10 series kernel using the incantation given at
> https://wiki.ubuntu.com/Kernel/LTSEnablementStack ) fixed this and
> resulted in Firefox reporting WebGL2 and all. The fix for
> https://bugzilla.kernel.org/show_bug.cgi?id=191281 hasn't propagated
> to Ubuntu 16.04's latest HWE stack, which looks distressing during
> boot, but it seems harmless so far.
>
> I got the 4 GB model, since it was available at roughly the same price
> as the 2 GB model. It supports both screens I have available for
> testing at their full resolution simultaneously (2560x1440 plugged
> into DisplayPort and 1920x1200 plugged into DVI).
>
> The card is significantly larger than the Quadro M2000. It takes the
> space of two card slots (connects to one, but the heat sink and the
> dual fans take the space of another slot). The fans don't appear to
> make an audible difference compared to the Quadro M2000.
>
> On Fri, Oct 27, 2017 at 6:19 PM, Sophana "Soap" Aik 
> wrote:
> > Thank you Henri for the feedback.
> >
> > How about this, we can order some graphics cards and put them in the
> > evaluation/test machine that is with Greg, to make sure it has good
> > compatibility.
> >
> > We could do:
> > Nvidia GTX 1060 3GB
> > AMD Radeon RX570
> >
> > These two options will ensure it can drive multi displays.
> >
> > Other suggestions welcomed.
> >
> > Greg, is that something you think we should do?
> >
> > On Thu, Oct 26, 2017 at 11:33 PM, Henri Sivonen 
> > wrote:
> >>
> >> On Fri, Oct 27, 2017 at 4:48 AM, Sophana "Soap" Aik 
> >> wrote:
> >> > Hello everyone, great feedback that I will keep in mind and continue
> to
> >> > work
> >> > with our vendors to find the best solution with. One of the cards
> that I
> >> > was
> >> > looking at is fairly cheap and can at least drive multi-displays (even
> >> > 4K
> >> > 60hz) was the Nvidia Quadro P600.
> >>
> >> Is that GPU known to be well-supported by Nouveau of Ubuntu 16.04
> vintage?
> >>
> >> I don't want to deny a single-GPU multi-monitor setup to anyone for
> >> whom that's the priority, but considering how much damage the Quadro
> >> M2000 has done to my productivity (and from what I've heard from other
> >> people on the DOM team, I gather I'm not the only one who has had
> >> trouble with it), the four DisplayPort connectors on it look like very
> >> bad economics.
> >>
> >> I suggest these two criteria be considered for developer workstations
> >> in addition to build performance:
> >>  1) The CPU is compatible with rr (at present, this means that the CPU
> >> has to be from Intel and not from AMD)
> >>  2) The GPU offered by default (again, I don't want to deny multiple
> >> DisplayPort connectors on a single GPU to people who request them)
> >> works well in OpenGL mode (i.e. without llvmpipe activating) without
> >> freezes using the Open Source drivers included in Ubuntu LTS and
> >

Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Jeff Gilbert
My understanding of current policy is that ECC is not required. (and
not even an option with MacBook Pros) Given the volume of development
that happens unhindered on our developers' many, many non-ECC
machines, I believe the burden of proof is on the pro-ECC
argument to show that it's likely to be a worthwhile investment for
our use-cases.

As for evidence that the lack of ECC is a non-issue, I call to witness
the vast majority of Firefox development, most applicably that portion
done in the last ten years, and especially all MacOS development
excluding the very few Mac Pros we have.

If we've given developers ECC machines already when non-ECC was an
option, absent a positive request for ECC from the developer, I would
consider this to have been a minor mistake.

On Mon, Nov 6, 2017 at 3:03 PM, Gabriele Svelto  wrote:
> On 06/11/2017 22:44, Jeff Gilbert wrote:
>> Price matters, since every dollar we spend chasing ECC would be a
>> dollar we can't allocate towards perf improvements, hardware refresh
>> rate, or simply more machines for any build clusters we may want.
>
> And every day our developers or IT staff spend chasing apparently random
> issues is a waste of both money and time.
>
>> The paper linked above addresses massive compute clusters, which seems
>> to have limited implications for our use-cases.
>
> The clusters are 6000 and 8500 nodes respectively, quite small by
> today's standards. How many developers do we have? Hundreds for sure, it
> could be a thousand looking at our current headcount so we're in the
> same ballpark.
>
>> Nearly every machine we do development on does not currently use ECC.
>> I don't see why that should change now.
>
> Not true. The current Xeon E5-based ThinkStation P710 available from
> Service Now has ECC memory and so did the previous models in the last
> five years. Having a workstation available w/o ECC would actually be a
> step backwards.
>
>> To me, ECC for desktop compute
>> workloads crosses the line into jumping at shadows, since "restart
>> your machine slightly more often than otherwise" is not onerous.
> Do you have data to prove that this is not an issue?
>
>  Gabriele
>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Gabriele Svelto
On 06/11/2017 22:44, Jeff Gilbert wrote:
> Price matters, since every dollar we spend chasing ECC would be a
> dollar we can't allocate towards perf improvements, hardware refresh
> rate, or simply more machines for any build clusters we may want.

And every day our developers or IT staff spend chasing apparently random
issues is a waste of both money and time.

> The paper linked above addresses massive compute clusters, which seems
> to have limited implications for our use-cases.

The clusters are 6000 and 8500 nodes respectively, quite small by
today's standards. How many developers do we have? Hundreds for sure, it
could be a thousand looking at our current headcount so we're in the
same ballpark.

> Nearly every machine we do development on does not currently use ECC.
> I don't see why that should change now.

Not true. The current Xeon E5-based ThinkStation P710 available from
Service Now has ECC memory and so did the previous models in the last
five years. Having a workstation available w/o ECC would actually be a
step backwards.

> To me, ECC for desktop compute
> workloads crosses the line into jumping at shadows, since "restart
> your machine slightly more often than otherwise" is not onerous.
Do you have data to prove that this is not an issue?

 Gabriele





Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Jeff Gilbert
Price matters, since every dollar we spend chasing ECC would be a
dollar we can't allocate towards perf improvements, hardware refresh
rate, or simply more machines for any build clusters we may want.

The paper linked above addresses massive compute clusters, which seems
to have limited implications for our use-cases.

Nearly every machine we do development on does not currently use ECC.
I don't see why that should change now. To me, ECC for desktop compute
workloads crosses the line into jumping at shadows, since "restart
your machine slightly more often than otherwise" is not onerous.

On Mon, Nov 6, 2017 at 9:19 AM, Gregory Szorc  wrote:
>
>
>> On Nov 6, 2017, at 05:19, Gabriele Svelto  wrote:
>>
>>> On 04/11/2017 01:10, Jeff Gilbert wrote:
>>> Clock speed and core count matter much more than ECC. I wouldn't chase
>>> ECC support for general dev machines.
>>
>> The Xeon-W SKUs I posted in the previous thread all had identical or
>> higher clock speeds than equivalent Core i9 SKUs and ECC support with
>> the sole exception of the i9-7980XE which has slightly higher (100MHz)
>> peak turbo clock than the Xeon W-2195.
>>
>> There is IMHO no performance-related reason to skimp on ECC support
>> especially for machines that will sport a significant amount of memory.
>>
>> Importance of ECC memory is IMHO underestimated mostly because it's not
>> common and thus users do not realize they may be hitting memory errors
>> more frequently than they realize. My main workstation is now 5 years
>> old and has accumulated 24 memory errors; that may not seem like much, but
>> if they happen at a bad time, or in a bad place, they can ruin your day or
>> permanently corrupt your data.
>>
>> As another example of ECC importance my laptop (obviously) doesn't have
>> ECC support and two years ago had a single bit that went bad in the
>> second DIMM. The issue manifested itself as internal compiler errors
>> while building Fennec. The first time I just pulled again from central
>> thinking it was a fluke, the second I updated the build dependencies
>> which I hadn't done in a while thinking that an old GCC might have been
>> the cause. It was not until the third day with a failure that I realized
>> what was happening. A 2-hour-long memory test showed me the second DIMM
>> was bad so I removed it, ordered a new one and went on to check my
>> machine. I had to purge my compilation cache because garbage had
>> accumulated in there, run an hg verify on my repo as well as verifying
>> all the installed packages for errors. Since I didn't have access to my
>> main workstation at the time I had wasted 3 days chasing the issue and
>> my workflow was slowed down by a cold compilation cache and a gimped
>> machine (until I could replace the DIMM).
>>
>> This is not common, but it's not rare either and we now have hundreds of
>> developers within Mozilla so people are going to run into issues that
>> can be easily prevented by having ECC memory.
>>
>> That being said ECC memory also makes machines less susceptible to
>> Rowhammer-like attacks and makes them detectable while they are happening.
>>
>> For a more in-depth reading on the matter I suggest reading "Memory
>> Errors in Modern Systems - The Good, The Bad, and The Ugly" [1] in which
>> the authors analyze memory errors on live systems over two years and
>> argue that SEC-DED ECC (the type of protection you usually get on
>> workstations) is often insufficient and even chipkill ECC (now common on
>> most servers) is not enough to catch all errors happening during real
>> world use.
>>
>> Gabriele
>>
>> [1] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf
>>
>
> The Xeon-W’s are basically the i9’s (both Skylake-X) with support for ECC, 
> more vPRO, and AMT. The Xeon-W’s lack Turbo 3.0 (preferred core). However, 
> Turbo 2.0 apparently reaches the same MHz, so I don’t think it matters much. 
> There are some other differences with regards to PCIe lanes, chipset, etc.
>
> Another big difference is price. The Xeon’s cost a lot more.
>
> For building Firefox, the i9’s and Xeon-W are probably very similar (and is 
> something we should test). It likely comes down to whether you want to pay a 
> premium for ECC and other Xeon-W features. I’m not in a position to answer 
> that.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Gregory Szorc


> On Nov 6, 2017, at 05:19, Gabriele Svelto  wrote:
> 
>> On 04/11/2017 01:10, Jeff Gilbert wrote:
>> Clock speed and core count matter much more than ECC. I wouldn't chase
>> ECC support for general dev machines.
> 
> The Xeon-W SKUs I posted in the previous thread all had identical or
> higher clock speeds than equivalent Core i9 SKUs and ECC support with
> the sole exception of the i9-7980XE which has slightly higher (100MHz)
> peak turbo clock than the Xeon W-2195.
> 
> There is IMHO no performance-related reason to skimp on ECC support
> especially for machines that will sport a significant amount of memory.
> 
> Importance of ECC memory is IMHO underestimated mostly because it's not
> common and thus users do not realize they may be hitting memory errors
> more frequently than they realize. My main workstation is now 5 years
> old and has accumulated 24 memory errors; that may not seem like much, but
> if they happen at a bad time, or in a bad place, they can ruin your day or
> permanently corrupt your data.
> 
> As another example of ECC importance my laptop (obviously) doesn't have
> ECC support and two years ago had a single bit that went bad in the
> second DIMM. The issue manifested itself as internal compiler errors
> while building Fennec. The first time I just pulled again from central
> thinking it was a fluke, the second I updated the build dependencies
> which I hadn't done in a while thinking that an old GCC might have been
> the cause. It was not until the third day with a failure that I realized
> what was happening. A 2-hour-long memory test showed me the second DIMM
> was bad so I removed it, ordered a new one and went on to check my
> machine. I had to purge my compilation cache because garbage had
> accumulated in there, run an hg verify on my repo as well as verifying
> all the installed packages for errors. Since I didn't have access to my
> main workstation at the time I had wasted 3 days chasing the issue and
> my workflow was slowed down by a cold compilation cache and a gimped
> machine (until I could replace the DIMM).
> 
> This is not common, but it's not rare either and we now have hundreds of
> developers within Mozilla so people are going to run into issues that
> can be easily prevented by having ECC memory.
> 
> That being said ECC memory also makes machines less susceptible to
> Rowhammer-like attacks and makes them detectable while they are happening.
> 
> For a more in-depth reading on the matter I suggest reading "Memory
> Errors in Modern Systems - The Good, The Bad, and The Ugly" [1] in which
> the authors analyze memory errors on live systems over two years and
> argue that SEC-DED ECC (the type of protection you usually get on
> workstations) is often insufficient and even chipkill ECC (now common on
> most servers) is not enough to catch all errors happening during real
> world use.
> 
> Gabriele
> 
> [1] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf
> 

The Xeon-W’s are basically the i9’s (both Skylake-X) with support for ECC, more 
vPRO, and AMT. The Xeon-W’s lack Turbo 3.0 (preferred core). However, Turbo 2.0 
apparently reaches the same MHz, so I don’t think it matters much. There are 
some other differences with regards to PCIe lanes, chipset, etc.

Another big difference is price. The Xeon’s cost a lot more.

For building Firefox, the i9’s and Xeon-W are probably very similar (and is 
something we should test). It likely comes down to whether you want to pay a 
premium for ECC and other Xeon-W features. I’m not in a position to answer that.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Gabriele Svelto
On 04/11/2017 01:10, Jeff Gilbert wrote:
> Clock speed and core count matter much more than ECC. I wouldn't chase
> ECC support for general dev machines.

The Xeon-W SKUs I posted in the previous thread all had identical or
higher clock speeds than equivalent Core i9 SKUs and ECC support with
the sole exception of the i9-7980XE which has slightly higher (100MHz)
peak turbo clock than the Xeon W-2195.

There is IMHO no performance-related reason to skimp on ECC support
especially for machines that will sport a significant amount of memory.

The importance of ECC memory is IMHO underestimated, mostly because it's
not common, so users do not realize how frequently they may be hitting
memory errors. My main workstation is now 5 years old and has
accumulated 24 memory errors; that may not seem like much, but an error
at a bad time, or in a bad place, can ruin your day or permanently
corrupt your data.
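To put the numbers above in fleet-wide terms, here is a back-of-the-envelope sketch; every figure in it beyond the anecdote (the fleet size, the 1% "harmful" fraction) is an assumption for illustration, not measured data:

```python
# Scale the single-workstation anecdote (24 errors in 5 years) up to a
# hypothetical developer fleet. All figures are illustrative assumptions.
errors_per_machine_year = 24 / 5                # 4.8 errors/machine/year
developers = 300                                # assumed fleet size
fleet_errors_per_year = errors_per_machine_year * developers  # 1440.0
# Even if only 1% of errors land somewhere harmful (a build artifact, a
# repository, an in-flight patch), that is still about one incident a month.
harmful_per_year = fleet_errors_per_year * 0.01  # 14.4
```

With these assumed numbers, an organization of that size would see on the order of a thousand memory errors a year without ECC, which is why per-machine anecdotes understate the problem.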

As another example of ECC importance my laptop (obviously) doesn't have
ECC support and two years ago had a single bit that went bad in the
second DIMM. The issue manifested itself as internal compiler errors
while building Fennec. The first time I just pulled again from central
thinking it was a fluke, the second I updated the build dependencies
which I hadn't done in a while thinking that an old GCC might have been
the cause. It was not until the third day with a failure that I realized
what was happening. A 2-hour memory test showed me the second DIMM was
bad, so I removed it, ordered a new one, and went on to check my
machine. I had to purge my compilation cache because garbage had
accumulated in there, run an hg verify on my repo as well as verifying
all the installed packages for errors. Since I didn't have access to my
main workstation at the time I had wasted 3 days chasing the issue and
my workflow was slowed down by a cold compilation cache and a gimped
machine (until I could replace the DIMM).

This is not common, but it's not rare either, and with hundreds of
developers within Mozilla, people are going to run into issues that can
easily be prevented by having ECC memory.

That being said, ECC memory also makes machines less susceptible to
Rowhammer-like attacks and makes such attacks detectable while they are
happening.

For more in-depth reading on the matter I suggest "Memory Errors in
Modern Systems - The Good, The Bad, and The Ugly" [1], in which
the authors analyze memory errors on live systems over two years and
argue that SEC-DED ECC (the type of protection you usually get on
workstations) is often insufficient and even chipkill ECC (now common on
most servers) is not enough to catch all errors happening during real
world use.

 Gabriele

[1] https://www.cs.virginia.edu/~gurumurthi/papers/asplos15.pdf
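As a toy illustration of the SEC-DED mechanics the paper analyzes (correct any single-bit error, detect but not correct any double-bit error), here is a minimal extended-Hamming(8,4) sketch. Real DIMMs use a 72/64-bit code, but the principle is the same:

```python
def encode(nibble):
    """Encode 4 data bits as an 8-bit extended-Hamming (SEC-DED) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
    code = [0, p1, p2, d[0], p4, d[1], d[2], d[3]]  # index = bit position
    code[0] = sum(code) % 2          # overall parity: double-error detection
    return code

def decode(code):
    """Return ('ok'|'corrected'|'double', data) for a possibly damaged codeword."""
    syndrome = 0
    for pos in range(1, 8):          # XOR of set positions points at the error
        if code[pos]:
            syndrome ^= pos
    overall = sum(code) % 2
    if syndrome and overall == 0:    # two bits flipped: detectable only
        return 'double', None
    corrected = list(code)
    status = 'ok'
    if syndrome:                     # single-bit error at position `syndrome`
        corrected[syndrome] ^= 1
        status = 'corrected'
    elif overall:                    # the overall parity bit itself flipped
        status = 'corrected'
    data = (corrected[3] | corrected[5] << 1 |
            corrected[6] << 2 | corrected[7] << 3)
    return status, data
```

Flipping any one of the eight bits yields ('corrected', original data); flipping two yields ('double', None), which is exactly the SEC-DED guarantee, and also why the paper argues it is still not enough for multi-bit failure modes.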





Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-06 Thread Henri Sivonen
Thank you for including an AMD card among the ones to be tested.


The Radeon RX 460 mentioned earlier in this thread arrived. There was
again enough weirdness that I think it's worth sharing in case it
saves time for someone else:

Initially, for multiple rounds of booting with different cable
configurations, the Lenovo UEFI consistently displayed nothing if a
cable with a powered-on screen was plugged into the DisplayPort
connector on the RX 460. To see the boot password prompt or anything
else displayed by the Lenovo UEFI, I needed to connect a screen to the
DVI port and *not* have a powered-on screen connected to DisplayPort.
However, the Lenovo UEFI started displaying on a DisplayPort-connected
screen (with or without DVI also connected) after one boot during which
I had a powered-on screen connected to DVI and a powered-off screen
connected to DisplayPort, and I turned on the DisplayPort screen while
the DVI screen was displaying the UEFI password prompt. During that same
boot I happened not to have a keyboard connected, because it was
connected via the screen that was powered off, and that caused a UEFI
error. So I don't know whether it was the DisplayPort device powering on
during the UEFI phase or the UEFI going through an error phase due to
the missing keyboard that jolted it into using the DisplayPort screen
properly afterwards. Weird.

On the Linux side, the original Ubuntu 16.04 kernel (4.4) supported
only a low resolution fallback mode. Rolling the hardware enablement
stack forward (to 4.10 series kernel using the incantation given at
https://wiki.ubuntu.com/Kernel/LTSEnablementStack ) fixed this and
resulted in Firefox reporting WebGL2 and all. The fix for
https://bugzilla.kernel.org/show_bug.cgi?id=191281 hasn't propagated
to Ubuntu 16.04's latest HWE stack, which looks distressing during
boot, but it seems harmless so far.

I got the 4 GB model, since it was available at roughly the same price
as the 2 GB model. It supports both screens I have available for
testing at their full resolution simultaneously (2560x1440 plugged
into DisplayPort and 1920x1200 plugged into DVI).

The card is significantly larger than the Quadro M2000. It takes the
space of two card slots (connects to one, but the heat sink and the
dual fans take the space of another slot). The fans don't appear to
make an audible difference compared to the Quadro M2000.

On Fri, Oct 27, 2017 at 6:19 PM, Sophana "Soap" Aik  wrote:
> Thank you Henri for the feedback.
>
> How about this, we can order some graphics cards and put them in the
> evaluation/test machine that is with Greg, to make sure it has good
> compatibility.
>
> We could do:
> Nvidia GTX 1060 3GB
> AMD Radeon RX570
>
> These two options will ensure it can drive multi displays.
>
> Other suggestions welcomed.
>
> Greg, is that something you think we should do?
>
> On Thu, Oct 26, 2017 at 11:33 PM, Henri Sivonen 
> wrote:
>>
>> On Fri, Oct 27, 2017 at 4:48 AM, Sophana "Soap" Aik 
>> wrote:
>> > Hello everyone, great feedback that I will keep in mind and continue to
>> > work
>> > with our vendors to find the best solution with. One of the cards that I
>> > was
>> > looking at is fairly cheap and can at least drive multi-displays (even
>> > 4K
>> > 60hz) was the Nvidia Quadro P600.
>>
>> Is that GPU known to be well-supported by Nouveau of Ubuntu 16.04 vintage?
>>
>> I don't want to deny a single-GPU multi-monitor setup to anyone for
>> whom that's the priority, but considering how much damage the Quadro
>> M2000 has done to my productivity (and from what I've heard from other
>> people on the DOM team, I gather I'm not the only one who has had
>> trouble with it), the four DisplayPort connectors on it look like very
>> bad economics.
>>
>> I suggest these two criteria be considered for developer workstations
>> in addition to build performance:
>>  1) The CPU is compatible with rr (at present, this means that the CPU
>> has to be from Intel and not from AMD)
>>  2) The GPU offered by default (again, I don't want to deny multiple
>> DisplayPort connectors on a single GPU to people who request them)
>> works well in OpenGL mode (i.e. without llvmpipe activating) without
>> freezes using the Open Source drivers included in Ubuntu LTS and
>> Fedora.
>>
>> On Fri, Oct 27, 2017 at 2:36 AM, Gregory Szorc  wrote:
>> > Host OS matters for finding UI bugs and issues with add-ons (since lots
>> > of
>> > add-on developers are also on Linux or MacOS).
>>
>> I think it's a bad tradeoff to trade off the productivity of
>> developers working on the cross-platform core of Firefox in order to
>> get them to report Windows-specific bugs. We have people in the
>> organization who aren't developing the cross-platform core and who are
>> running Windows anyway. I'd prefer the energy currently put into
>> getting developers of the cross-platform core to use Windows to be put
>> into getting the people who use Windows anyway to use Nightly. (It
>> saddens me to hear fear of Nightly from within Mozilla.)

Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-03 Thread Jeff Gilbert
Clock speed and core count matter much more than ECC. I wouldn't chase
ECC support for general dev machines.

On Thu, Nov 2, 2017 at 6:46 PM, Gregory Szorc  wrote:
> On Thu, Nov 2, 2017 at 3:43 PM, Nico Grunbaum  wrote:
>
>> For rr I have an i7 desktop with a base clock of 4.0 Ghz, and for building
>> I use icecc to distribute the load (or rather I will be again when bug
>> 1412240[0] is closed).  The i9 series has lower base clocks (2.8 Ghz, and
>> 2.6Ghz for the top SKUs)[1], but high boost clocks of 4.2 Ghz.  If I were
>> to switch over to an i9 for everything, would I see a notable difference in
>> performance in rr?
>>
>
> Which i7? You should get better CPU efficiency with newer
> microarchitectures. The i9's we're talking about are based on Skylake-X
> which is based on Skylake which are the i7-6XXX models in the consumer
> lines. It isn't enough to compare MHz: you need to also consider
> microarchitectures, memory, and workload.
>
> https://arstechnica.com/gadgets/2017/09/intel-core-i9-7960x-review/2/ has
> some single-threaded benchmarks. The i7-7700K (Kaby Lake) seems to "win"
> for single-threaded performance. But the i9's aren't far behind. Not far
> enough behind to cancel out the benefits of the extra cores IMO.
>
> This is because the i9's are pretty aggressive about using turbo. More
> aggressive than the Xeons. As long as cooling can keep up, the top-end GHz
> is great and you aren't sacrificing that much perf to have more cores on
> die. You can counter by arguing that the consumer-grade i7's can yield more
> speedups via overclocking. But for enterprise uses, having this all built
> into the chip so it "just works" without voiding warranty is a nice trait :)
>
> FWIW, the choice to go with Xeons always bothered me because we had to make
> an explicit clock vs core trade-off. Building Firefox requires both many
> cores for compiling and fast cores for linking. Since the i9's turbo so
> well, we get the best of both worlds. And at a much lower price. Aside from
> the loss of ECC, it is a pretty easy decision to switch.
>
>
>> -Nico
>>
>> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure in
>> libavutil (missing atomic definitions), when building with clang and icecc
>>
>> [1] https://ark.intel.com/products/series/123588/Intel-Core-X-
>> series-Processors
>>
>> On 10/27/17 7:50 PM, Robert O'Callahan wrote:
>>
>>> BTW can someone forward this entire thread to their friends at AMD so AMD
>>> will fix their CPUs to run rr? They're tantalizingly close :-/.
>>>
>>> Rob
>>>
>>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-02 Thread Gregory Szorc
On Thu, Nov 2, 2017 at 3:43 PM, Nico Grunbaum  wrote:

> For rr I have an i7 desktop with a base clock of 4.0 Ghz, and for building
> I use icecc to distribute the load (or rather I will be again when bug
> 1412240[0] is closed).  The i9 series has lower base clocks (2.8 Ghz, and
> 2.6Ghz for the top SKUs)[1], but high boost clocks of 4.2 Ghz.  If I were
> to switch over to an i9 for everything, would I see a notable difference in
> performance in rr?
>

Which i7? You should get better CPU efficiency with newer
microarchitectures. The i9's we're talking about are based on Skylake-X,
which derives from Skylake, the same microarchitecture as the consumer
i7-6XXX models. It isn't enough to compare MHz: you also need to
consider microarchitecture, memory, and workload.

https://arstechnica.com/gadgets/2017/09/intel-core-i9-7960x-review/2/ has
some single-threaded benchmarks. The i7-7700K (Kaby Lake) seems to "win"
for single-threaded performance. But the i9's aren't far behind. Not far
enough behind to cancel out the benefits of the extra cores IMO.

This is because the i9's are pretty aggressive about using turbo. More
aggressive than the Xeons. As long as cooling can keep up, the top-end GHz
is great and you aren't sacrificing that much perf to have more cores on
die. You can counter by arguing that the consumer-grade i7's can yield more
speedups via overclocking. But for enterprise uses, having this all built
into the chip so it "just works" without voiding warranty is a nice trait :)

FWIW, the choice to go with Xeons always bothered me because we had to make
an explicit clock vs core trade-off. Building Firefox requires both many
cores for compiling and fast cores for linking. Since the i9's turbo so
well, we get the best of both worlds. And at a much lower price. Aside from
the loss of ECC, it is a pretty easy decision to switch.
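That clock-versus-cores trade-off can be made concrete with a crude Amdahl's-law sketch. The serial fraction, work units, core counts, and all-core clocks below are assumptions chosen only to show the shape of the trade-off, not measurements of a Firefox build:

```python
def build_minutes(cores, clock_ghz, work=100.0, serial_frac=0.10):
    """Toy build-time model: the serial tail (linking) runs on one core,
    compilation scales across all cores, and everything scales with clock.
    `work` is total clock-minutes of work at a notional 1 GHz."""
    serial = work * serial_frac
    parallel = work * (1.0 - serial_frac)
    return (serial + parallel / cores) / clock_ghz

# Assumed all-core turbo clocks: an i7-7700K-like 4 cores at 4.5 GHz
# versus an i9-7980XE-like 18 cores at 3.4 GHz.
t_i7 = build_minutes(cores=4, clock_ghz=4.5)    # ~7.2
t_i9 = build_minutes(cores=18, clock_ghz=3.4)   # ~4.4
```

Under these assumptions the many-core chip wins comfortably despite the lower clock; as the serial fraction grows (a long single-threaded link step, say), the gap narrows, which is why turbo behavior matters so much.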


> -Nico
>
> [0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure in
> libavutil (missing atomic definitions), when building with clang and icecc
>
> [1] https://ark.intel.com/products/series/123588/Intel-Core-X-
> series-Processors
>
> On 10/27/17 7:50 PM, Robert O'Callahan wrote:
>
>> BTW can someone forward this entire thread to their friends at AMD so AMD
>> will fix their CPUs to run rr? They're tantalizingly close :-/.
>>
>> Rob
>>
>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-11-02 Thread Nico Grunbaum
For rr I have an i7 desktop with a base clock of 4.0 GHz, and for
building I use icecc to distribute the load (or rather I will be again
when bug 1412240[0] is closed). The i9 series has lower base clocks
(2.8 GHz, and 2.6 GHz for the top SKUs)[1], but high boost clocks of
4.2 GHz. If I were to switch over to an i9 for everything, would I see a
notable difference in performance in rr?


-Nico

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1412240 Build failure 
in libavutil (missing atomic definitions), when building with clang and 
icecc


[1] 
https://ark.intel.com/products/series/123588/Intel-Core-X-series-Processors


On 10/27/17 7:50 PM, Robert O'Callahan wrote:

BTW can someone forward this entire thread to their friends at AMD so AMD
will fix their CPUs to run rr? They're tantalizingly close :-/.

Rob




Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-28 Thread Sophana "Soap" Aik
Thanks Gabriele, that poses a problem for the system build we have in
mind here, as the i9s do not support ECC memory. That may have to be a
separate system with a Xeon.

On Fri, Oct 27, 2017 at 3:58 PM, Gabriele Svelto 
wrote:

> On 27/10/2017 01:02, Gregory Szorc wrote:
> > Sophana (CCd) is working on a new system build right now. It will be
> based
> > on the i9's instead of dual socket Xeons and should be faster and
> cheaper.
>
> ... and lacking ECC memory. Please whatever CPU is chosen make sure it
> has ECC support and the machine comes loaded with ECC memory. Developer
> boxes usually ship with plenty of memory, and they can stay on for days
> without a reboot churning at builds and tests. Memory errors happen and
> they can ruin days of work if they hit you at the wrong time.
>
>  Gabriele
>
>
>


-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-28 Thread Sophana "Soap" Aik
Thank you Henri for the feedback.

How about this, we can order some graphics cards and put them in the
evaluation/test machine that is with Greg, to make sure it has good
compatibility.

We could do:
Nvidia GTX 1060 3GB
AMD Radeon RX570

These two options will ensure it can drive multi displays.

Other suggestions welcomed.

Greg, is that something you think we should do?

On Thu, Oct 26, 2017 at 11:33 PM, Henri Sivonen 
wrote:

> On Fri, Oct 27, 2017 at 4:48 AM, Sophana "Soap" Aik 
> wrote:
> > Hello everyone, great feedback that I will keep in mind and continue to
> work
> > with our vendors to find the best solution with. One of the cards that I
> was
> > looking at is fairly cheap and can at least drive multi-displays (even 4K
> > 60hz) was the Nvidia Quadro P600.
>
> Is that GPU known to be well-supported by Nouveau of Ubuntu 16.04 vintage?
>
> I don't want to deny a single-GPU multi-monitor setup to anyone for
> whom that's the priority, but considering how much damage the Quadro
> M2000 has done to my productivity (and from what I've heard from other
> people on the DOM team, I gather I'm not the only one who has had
> trouble with it), the four DisplayPort connectors on it look like very
> bad economics.
>
> I suggest these two criteria be considered for developer workstations
> in addition to build performance:
>  1) The CPU is compatible with rr (at present, this means that the CPU
> has to be from Intel and not from AMD)
>  2) The GPU offered by default (again, I don't want to deny multiple
> DisplayPort connectors on a single GPU to people who request them)
> works well in OpenGL mode (i.e. without llvmpipe activating) without
> freezes using the Open Source drivers included in Ubuntu LTS and
> Fedora.
>
> On Fri, Oct 27, 2017 at 2:36 AM, Gregory Szorc  wrote:
> > Host OS matters for finding UI bugs and issues with add-ons (since lots
> of
> > add-on developers are also on Linux or MacOS).
>
> I think it's a bad tradeoff to trade off the productivity of
> developers working on the cross-platform core of Firefox in order to
> get them to report Windows-specific bugs. We have people in the
> organization who aren't developing the cross-platform core and who are
> running Windows anyway. I'd prefer the energy currently put into
> getting developers of the cross-platform core to use Windows to be put
> into getting the people who use Windows anyway to use Nightly. (It
> saddens me to hear fear of Nightly from within Mozilla.)
>
> > Unless you have requirements that prohibit using a VM, I encourage using
> this setup.
>
> For some three-four years, I developed in a Linux VM hosted on
> Windows. I'm not too worried about the performance overhead of a VM.
> However, rr is such an awesome tool that it justifies running Linux as
> the host OS.
>
> > I concede that performance testing on i9s and Xeons is not at all
> indicative
> > of the typical user :)
>
> Indeed. Still, we don't need Nvidia professional GPUs for build times,
> so boring well-supported consumer-grade GPUs would also be in the
> interest of "using what our users use" even if paired with a CPU that
> isn't representative of typical users' computers.
>
> On Fri, Oct 27, 2017 at 1:13 AM, Thomas Daede  wrote:
> > I have a RX 460 in a desktop with F26 and can confirm that it works
> > out-of-the-box at 4K with the open source drivers, and will happily run
> > Pathfinder demos at <16ms frame time.* It also seems to run Servo's
> > Webrender just fine.
> >
> > It's been superseded by the RX 560, which is a faster clock of the same
> > chip. It should work just as well, but might need a slightly newer
> > kernel than the 4xx to pick up the pci ids (maybe a problem with LTS
> > ubuntu?) The RX 570 and 580 should be fine too, but require power
> > connectors. The Vega models are waiting on a kernel-side driver rewrite
> > (by AMD) that will land in 4.15 (hopefully with new features and
> > regressions to the RX 5xx series...)
>
> Thank you. I placed an order for an RX 460.
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/
>



-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-28 Thread Sophana "Soap" Aik
Hello everyone, great feedback that I will keep in mind and continue to
work on with our vendors to find the best solution. One of the cards I
was looking at, which is fairly cheap and can at least drive multiple
displays (even 4K at 60 Hz), is the Nvidia Quadro P600. Especially given
the work that Greg has been doing, I feel the processor, storage, and
RAM are more important than graphics, so I will lean towards that type
of build. I will provide an update as soon as we have something more
concrete regarding final specifications, which I hope to have soon. Thanks

On Thu, Oct 26, 2017 at 4:36 PM, Gregory Szorc  wrote:

> On Thu, Oct 26, 2017 at 4:31 PM, Mike Hommey  wrote:
>
>> On Thu, Oct 26, 2017 at 04:02:20PM -0700, Gregory Szorc wrote:
>> > Also, the machines come with Windows by default. That's by design:
>> that's
>> > where the bulk of Firefox users are. We will develop better products if
>> the
>> > machines we use every day resemble what actual users use. I would
>> encourage
>> > developers to keep Windows on the new machines when they are issued.
>>
>> Except actual users are not using i9s or dual xeons. Yes, we have
>> slower reference hardware, but that also makes the argument of using the
>> same thing as actual users less relevant: you can't develop on machines
>> that actually look like what users have. So, as long as you have the
>> slower reference hardware to test, it doesn't seem to me it should
>> matter what OS you're running on your development machine.
>
>
> Host OS matters for finding UI bugs and issues with add-ons (since lots of
> add-on developers are also on Linux or MacOS).
>
> I concede that performance testing on i9s and Xeons is not at all
> indicative of the typical user :)
>



-- 
moz://a
Sophana "Soap" Aik
IT Vendor Management Analyst
IRC/Slack: soap


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Robert O'Callahan
BTW can someone forward this entire thread to their friends at AMD so AMD
will fix their CPUs to run rr? They're tantalizingly close :-/.

Rob


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Gabriele Svelto
On 28/10/2017 01:08, Sophana "Soap" Aik wrote:
> Thanks Gabriele, that poses a problem then for the system build we have
> in mind here as the i9's do not support ECC memory. That may have to be
> a separate system with a Xeon.

Xeon-W processors are identical to the i9s but come with more
workstation/server-oriented features such as ECC memory support; they
are also offered with slightly higher peak clock speeds than the
equivalent i9s. Here's a side-by-side comparison of the top 4 SKUs in
both families:

https://ark.intel.com/compare/123589,126709,123767,126707,125042,123613,126793,126699

 Gabriele





Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Gregory Szorc
Yeah. Only the Xeons and ThreadRipper (as our potential high core count
machines) support ECC. rr, ECC, or reasonable costs: pick at most two :/

On Fri, Oct 27, 2017 at 4:08 PM, Sophana "Soap" Aik 
wrote:

> Thanks Gabriele, that poses a problem then for the system build we have in
> mind here as the i9's do not support ECC memory. That may have to be a
> separate system with a Xeon.
>
> On Fri, Oct 27, 2017 at 3:58 PM, Gabriele Svelto 
> wrote:
>
>> On 27/10/2017 01:02, Gregory Szorc wrote:
>> > Sophana (CCd) is working on a new system build right now. It will be
>> based
>> > on the i9's instead of dual socket Xeons and should be faster and
>> cheaper.
>>
>> ... and lacking ECC memory. Please whatever CPU is chosen make sure it
>> has ECC support and the machine comes loaded with ECC memory. Developer
>> boxes usually ship with plenty of memory, and they can stay on for days
>> without a reboot churning at builds and tests. Memory errors happen and
>> they can ruin days of work if they hit you at the wrong time.
>>
>>  Gabriele
>>
>>
>>
>
>
> --
> moz://a
> Sophana "Soap" Aik
> IT Vendor Management Analyst
> IRC/Slack: soap
>


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Gabriele Svelto
On 27/10/2017 01:02, Gregory Szorc wrote:
> Sophana (CCd) is working on a new system build right now. It will be based
> on the i9's instead of dual socket Xeons and should be faster and cheaper.

... and lacking ECC memory. Whatever CPU is chosen, please make sure it
has ECC support and that the machine comes loaded with ECC memory.
Developer boxes usually ship with plenty of memory, and they can stay on
for days without a reboot, churning through builds and tests. Memory
errors happen, and they can ruin days of work if they hit you at the
wrong time.

 Gabriele






Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Steve Fink
Not necessarily relevant to this specific discussion, but I'm on a 
Lenovo P50 running Linux, and wanted to offer up my setup as a 
datapoint. (It's not quite either a recommendation or a word of warning. 
A combination.)


I use Linux (Fedora 25) as the host OS, with two external monitors plus 
the laptop screen. Windows is installed natively on a separate 
partition. For a while, I used VirtualBox to run the native Windows 
installation in a VM, rebooting into Windows on the rare occasions when 
I needed the extra performance or I wanted to diagnose whether something 
was virtualization-specific. The machine has an Intel HD Graphics P530 
and a Quadro M2000M. One external monitor is hooked up via DP, the other 
HDMI. I have a single desktop spread across all three (as in, I can drag 
windows between them). I use the nouveau driver. Videoconferencing works.


It all works well enough, but there are large caveats and drawbacks. It
took an insane number of configuration attempts to get it to where it
is, and again: there are large caveats and drawbacks.


Whichever monitor is on HDMI is at the wrong resolution (1920x1080 
instead of its native 1920x1200). I am running X11 because Wayland 
doesn't work. (Though I'm fine with that, because I'm old school and I 
run xfce4.) The laptop screen is HiDPI and when I disconnect from the 
external screens, I have to zoom everything in, which only partially 
works (eg Firefox's chrome is still small). I used to use xrandr with 
--scale 0.5x0.5 to expand everything, but that caused too many issues. 
When I turn my external monitors back on in the morning, one of them 
comes up fine and the other does not display anything until I do 
ctrl-alt-f3 alt-f2 to switch to VT3 then back to VT2. When I reconnect 
my monitors, it will often mirror a single image to all 3 displays, and 
I have to turn mirroring on and then back off again then drag my 
monitors back to the right relative positioning.


I use the nouveau driver now. I started out with nouveau and it was 
causing lots of random lockups, so I switched to the proprietary nvidia 
driver. It did not work well when I disconnected and reconnected the 
external monitors. Nor sometimes if I suspended and resumed. I have no 
idea why nouveau has magically become stable; probably some update or other.


My Windows setup broke when I switched from an HDD to an SSD. First, it 
stopped booting natively and I could only run it through the VM. Now it 
hangs on boot even with the VM unless I boot into safe mode. I have sunk 
more time than I'm willing to admit in trying to fix it and failed. My 
plan is to start over with a disk with Windows preinstalled and clone my 
Linux partitions over to it, but I can't muster the energy to dive back 
into the nightmare and I don't really need Windows very often anyway.





Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-27 Thread Robert O'Callahan
On Fri, Oct 27, 2017 at 2:34 AM, Henri Sivonen  wrote:

> And the downsides don't even end there. rr didn't work. Plus other
> stuff not worth mentioning here.
>

Turns out that rr not working with Nvidia on Ubuntu 17.10 was actually an
rr issue triggered by the Ubuntu libc upgrade, not Nvidia's fault. I just
fixed it in rr master. We'll do an rr release soon, because the libc update
required a number of rr fixes.

Rob


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Henri Sivonen
On Fri, Oct 27, 2017 at 4:48 AM, Sophana "Soap" Aik  wrote:
> Hello everyone, great feedback that I will keep in mind and continue to work
> with our vendors to find the best solution with. One of the cards that I was
> looking at is fairly cheap and can at least drive multi-displays (even 4K
> 60hz) was the Nvidia Quadro P600.

Is that GPU known to be well-supported by Nouveau of Ubuntu 16.04 vintage?

I don't want to deny a single-GPU multi-monitor setup to anyone for
whom that's the priority, but considering how much damage the Quadro
M2000 has done to my productivity (and from what I've heard from other
people on the DOM team, I gather I'm not the only one who has had
trouble with it), the four DisplayPort connectors on it look like very
bad economics.

I suggest these two criteria be considered for developer workstations
in addition to build performance:
 1) The CPU is compatible with rr (at present, this means that the CPU
has to be from Intel and not from AMD)
 2) The GPU offered by default (again, I don't want to deny multiple
DisplayPort connectors on a single GPU to people who request them)
works well in OpenGL mode (i.e. without llvmpipe activating) without
freezes using the Open Source drivers included in Ubuntu LTS and
Fedora.

On Fri, Oct 27, 2017 at 2:36 AM, Gregory Szorc  wrote:
> Host OS matters for finding UI bugs and issues with add-ons (since lots of
> add-on developers are also on Linux or MacOS).

I think it's a bad tradeoff to trade off the productivity of
developers working on the cross-platform core of Firefox in order to
get them to report Windows-specific bugs. We have people in the
organization who aren't developing the cross-platform core and who are
running Windows anyway. I'd prefer the energy currently put into
getting developers of the cross-platform core to use Windows to be put
into getting the people who use Windows anyway to use Nightly. (It
saddens me to hear fear of Nightly from within Mozilla.)

> Unless you have requirements that prohibit using a VM, I encourage using this 
> setup.

For some three-four years, I developed in a Linux VM hosted on
Windows. I'm not too worried about the performance overhead of a VM.
However, rr is such an awesome tool that it justifies running Linux as
the host OS.

> I concede that performance testing on i9s and Xeons is not at all indicative
> of the typical user :)

Indeed. Still, we don't need Nvidia professional GPUs for build times,
so boring well-supported consumer-grade GPUs would also be in the
interest of "using what our users use" even if paired with a CPU that
isn't representative of typical users' computers.

On Fri, Oct 27, 2017 at 1:13 AM, Thomas Daede  wrote:
> I have a RX 460 in a desktop with F26 and can confirm that it works
> out-of-the-box at 4K with the open source drivers, and will happily run
> Pathfinder demos at <16ms frame time.* It also seems to run Servo's
> Webrender just fine.
>
> It's been superseded by the RX 560, which is a faster clock of the same
> chip. It should work just as well, but might need a slightly newer
> kernel than the 4xx to pick up the pci ids (maybe a problem with LTS
> ubuntu?) The RX 570 and 580 should be fine too, but require power
> connectors. The Vega models are waiting on a kernel-side driver rewrite
> (by AMD) that will land in 4.15 (hopefully with new features and
> regressions to the RX 5xx series...)

Thank you. I placed an order for an RX 460.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Gregory Szorc
On Thu, Oct 26, 2017 at 4:31 PM, Mike Hommey  wrote:

> On Thu, Oct 26, 2017 at 04:02:20PM -0700, Gregory Szorc wrote:
> > Also, the machines come with Windows by default. That's by design: that's
> > where the bulk of Firefox users are. We will develop better products if
> the
> > machines we use every day resemble what actual users use. I would
> encourage
> > developers to keep Windows on the new machines when they are issued.
>
> Except actual users are not using i9s or dual xeons. Yes, we have
> slower reference hardware, but that also makes the argument of using the
> same thing as actual users less relevant: you can't develop on machines
> that actually look like what users have. So, as long as you have the
> slower reference hardware to test, it doesn't seem to me it should
> matter what OS you're running on your development machine.


Host OS matters for finding UI bugs and issues with add-ons (since lots of
add-on developers are also on Linux or MacOS).

I concede that performance testing on i9s and Xeons is not at all
indicative of the typical user :)


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Mike Hommey
On Thu, Oct 26, 2017 at 04:02:20PM -0700, Gregory Szorc wrote:
> Also, the machines come with Windows by default. That's by design: that's
> where the bulk of Firefox users are. We will develop better products if the
> machines we use every day resemble what actual users use. I would encourage
> developers to keep Windows on the new machines when they are issued.

Except actual users are not using i9s or dual xeons. Yes, we have
slower reference hardware, but that also makes the argument of using the
same thing as actual users less relevant: you can't develop on machines
that actually look like what users have. So, as long as you have the
slower reference hardware to test, it doesn't seem to me it should
matter what OS you're running on your development machine.

Mike


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Jeff Muizelaar
On Thu, Oct 26, 2017 at 7:02 PM, Gregory Szorc  wrote:
> I also share your desire to not issue fancy video cards in these machines
> by default. If there are suggestions for a default video card, now is the
> time to make noise :)

Intel GPUs are the best choice if you want to be like the bulk of our
users. Otherwise any cheap AMD GPU is going to be good enough.
Probably the number and kind of display outputs are what matters most.

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Jeff Muizelaar
On Thu, Oct 26, 2017 at 7:02 PM, Gregory Szorc  wrote:
> Unless you have requirements that prohibit using a
> VM, I encourage using this setup.

rr doesn't work under Hyper-V. AFAIK the only VM software on Windows it works under is VMware.

-Jeff


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Gregory Szorc
On Thu, Oct 26, 2017 at 6:34 AM, Henri Sivonen  wrote:

> On Thu, Oct 26, 2017 at 9:15 AM, Henri Sivonen 
> wrote:
> > There's a huge downside, though:
> > If the screen stops consuming the DisplayPort data stream, the
> > graphical session gets killed! So if you do normal things like turn
> > the screen off or switch input on a multi-input screen, your graphical
> > session is no longer there when you come back and you get a login
> > screen instead! (I haven't yet formed an opinion on whether this
> > behavior can be lived with or not.)
>
> And the downsides don't even end there. rr didn't work. Plus other
> stuff not worth mentioning here.
>
> I guess going back to 16.04.1 is a better deal than 17.10.
>
> > P.S. It would be good for productivity if Mozilla issued slightly less
> > cutting-edge Nvidia GPUs to developers to increase the probability
> > that support in nouveau has had time to bake.
>
> This Mozilla-issued Quadro M2000 has been a very significant harm to
> my productivity. Considering how good rr is, I think it makes sense to
> continue to run Linux to develop Firefox. However, I think it doesn't
> make sense to issue fancy cutting-edge Nvidia GPUs to developers who
> aren't specifically working on Nvidia-specific bugs and, instead, it
> would make sense to issue GPUs that are boring as possible in terms of
> Linux driver support (i.e. Just Works with distro-bundled Free
> Software drivers). Going forward, perhaps Mozilla could issue AMD GPUs
> with computers that don't have Intel GPUs?
>
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?
>

Sophana (CCd) is working on a new system build right now. It will be based
on the i9's instead of dual socket Xeons and should be faster and cheaper.
We can all thank AMD for introducing competition in the CPU market to
enable this to happen :)

I also share your desire to not issue fancy video cards in these machines
by default. If there are suggestions for a default video card, now is the
time to make noise :)

Also, the machines come with Windows by default. That's by design: that's
where the bulk of Firefox users are. We will develop better products if the
machines we use every day resemble what actual users use. I would encourage
developers to keep Windows on the new machines when they are issued.

I concede that developing Firefox on Linux is better than on Windows for a
myriad of reasons. However, that doesn't mean you have to forego Linux. I
use Hyper-V under Windows 10 to run Linux. I do most of my development
(editors, builds, etc) in that local Linux VM. I use an X server for
connecting to graphic Linux applications. The overhead of Hyper-V as
compared to native Linux is negligible. Unless I need fast graphics in
Linux (which is rare), I pretty much get the advantages of Windows *and*
Linux simultaneously. Unless you have requirements that prohibit using a
VM, I encourage using this setup.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Thomas Daede
On 10/26/2017 06:34 AM, Henri Sivonen wrote:
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?
> 

I have a RX 460 in a desktop with F26 and can confirm that it works
out-of-the-box at 4K with the open source drivers, and will happily run
Pathfinder demos at <16ms frame time.* It also seems to run Servo's
Webrender just fine.

It's been superseded by the RX 560, which is a faster clock of the same
chip. It should work just as well, but might need a slightly newer
kernel than the 4xx to pick up the pci ids (maybe a problem with LTS
ubuntu?) The RX 570 and 580 should be fine too, but require power
connectors. The Vega models are waiting on a kernel-side driver rewrite
(by AMD) that will land in 4.15 (hopefully with new features and
regressions to the RX 5xx series...)

Intel graphics are also nice but only available on the E3 xeons AFAIK.
And nouveau is stuck, because new cards require signed firmware that
nVidia is unwilling to distribute.

* While Pathfinder happily renders at 60fps, Firefox draws frames slower
because of its WebGL readback path. That is not the fault of the GPU,
however.


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Nathan Froyd
On Thu, Oct 26, 2017 at 9:34 AM, Henri Sivonen  wrote:
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?

Can't speak to the WebRender compatibility issue, but I have a Radeon
R270 and a Radeon RX 470 in my Linux machine, and Ubuntu 16.04 seems
to be pretty happy with both of them.

-Nathan


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Jeff Muizelaar
Yeah. I'd suggest anyone who's running Linux on these machines just go
out and buy a $100 AMD GPU to replace the Quadro. Even if you don't
expense the new GPU and just throw the Quadro in the trash you'll
probably be happier.

-Jeff

On Thu, Oct 26, 2017 at 9:34 AM, Henri Sivonen  wrote:
> On Thu, Oct 26, 2017 at 9:15 AM, Henri Sivonen  wrote:
>> There's a huge downside, though:
>> If the screen stops consuming the DisplayPort data stream, the
>> graphical session gets killed! So if you do normal things like turn
>> the screen off or switch input on a multi-input screen, your graphical
>> session is no longer there when you come back and you get a login
>> screen instead! (I haven't yet formed an opinion on whether this
>> behavior can be lived with or not.)
>
> And the downsides don't even end there. rr didn't work. Plus other
> stuff not worth mentioning here.
>
> I guess going back to 16.04.1 is a better deal than 17.10.
>
>> P.S. It would be good for productivity if Mozilla issued slightly less
>> cutting-edge Nvidia GPUs to developers to increase the probability
>> that support in nouveau has had time to bake.
>
> This Mozilla-issued Quadro M2000 has been a very significant harm to
> my productivity. Considering how good rr is, I think it makes sense to
> continue to run Linux to develop Firefox. However, I think it doesn't
> make sense to issue fancy cutting-edge Nvidia GPUs to developers who
> aren't specifically working on Nvidia-specific bugs and, instead, it
> would make sense to issue GPUs that are as boring as possible in terms of
> Linux driver support (i.e. Just Works with distro-bundled Free
> Software drivers). Going forward, perhaps Mozilla could issue AMD GPUs
> with computers that don't have Intel GPUs?
>
> As for the computer at hand, I want to put an end to this Nvidia
> obstacle to getting stuff done. It's been suggested to me that Radeon
> RX 560 would be well supported by distro-provided drivers, but the
> "*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
> doesn't look too good. Based on that table it seems one should get
> Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
> Work with Ubuntu 16.04? Is Radeon RX 460 going to be
> WebRender-compatible?
>
> --
> Henri Sivonen
> hsivo...@hsivonen.fi
> https://hsivonen.fi/


Re: More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-26 Thread Henri Sivonen
On Thu, Oct 26, 2017 at 9:15 AM, Henri Sivonen  wrote:
> There's a huge downside, though:
> If the screen stops consuming the DisplayPort data stream, the
> graphical session gets killed! So if you do normal things like turn
> the screen off or switch input on a multi-input screen, your graphical
> session is no longer there when you come back and you get a login
> screen instead! (I haven't yet formed an opinion on whether this
> behavior can be lived with or not.)

And the downsides don't even end there. rr didn't work. Plus other
stuff not worth mentioning here.

I guess going back to 16.04.1 is a better deal than 17.10.

> P.S. It would be good for productivity if Mozilla issued slightly less
> cutting-edge Nvidia GPUs to developers to increase the probability
> that support in nouveau has had time to bake.

This Mozilla-issued Quadro M2000 has been a very significant harm to
my productivity. Considering how good rr is, I think it makes sense to
continue to run Linux to develop Firefox. However, I think it doesn't
make sense to issue fancy cutting-edge Nvidia GPUs to developers who
aren't specifically working on Nvidia-specific bugs and, instead, it
would make sense to issue GPUs that are as boring as possible in terms of
Linux driver support (i.e. Just Works with distro-bundled Free
Software drivers). Going forward, perhaps Mozilla could issue AMD GPUs
with computers that don't have Intel GPUs?

As for the computer at hand, I want to put an end to this Nvidia
obstacle to getting stuff done. It's been suggested to me that Radeon
RX 560 would be well supported by distro-provided drivers, but the
"*2" footnote at https://help.ubuntu.com/community/AMDGPU-Driver
doesn't look too good. Based on that table it seems one should get
Radeon RX 460. Is this the correct conclusion? Does Radeon RX 460 Just
Work with Ubuntu 16.04? Is Radeon RX 460 going to be
WebRender-compatible?

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


More ThinkStation P710 Nvidia tips (was Re: Faster gecko builds with IceCC on Mac and Linux)

2017-10-25 Thread Henri Sivonen
On Thu, Mar 23, 2017 at 3:43 PM, Henri Sivonen  wrote:
> On Wed, Jul 6, 2016 at 2:42 AM, Gregory Szorc  wrote:
>> The Lenovo ThinkStation P710 is a good starting point (
>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>
> To help others who follow the above advice save some time:
>
> Xeons don't have Intel integrated GPUs, so one has to figure how to
> get this up and running with a discrete GPU. In the case of Nvidia
> Quadro M2000, the latest Ubuntu and Fedora install images don't work.
>
> This works:
> Disable or enable the TPM. (By default, it's in a mode where the
> kernel can see it but it doesn't work. It should either be hidden or
> be allowed to work.)
> Disable secure boot. (Nvidia's proprietary drivers don't work with
> secure boot enabled.)
> Use the Ubuntu 16.04.1 install image (i.e. intentionally old
> image--you can upgrade later)
> After installing, edit /etc/default/grub and set
> GRUB_CMDLINE_LINUX_DEFAULT="" (i.e. make the string empty; without
> this, the nvidia proprietary driver conflicts with LUKS pass phrase
> input).
> update-initramfs -u
> update-grub
> apt install nvidia-375
> Then upgrade the rest. Even rolling forward the HWE stack works
> *after* the above steps.
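
The grub edit in the steps above can be scripted. The following sketch exercises the same substitution against a scratch copy rather than the real /etc/default/grub (the sample file contents are illustrative; `nvidia-375` and the follow-up commands are as quoted above):

```shell
#!/bin/sh
# Exercise the GRUB_CMDLINE_LINUX_DEFAULT edit from the steps above on a
# scratch file. On the real machine you would edit /etc/default/grub
# itself, then run `update-initramfs -u`, `update-grub`, and
# `apt install nvidia-375`.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
EOF
# Make the default kernel argument string empty (without this, the nvidia
# proprietary driver conflicts with LUKS pass-phrase input).
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT=""/' "$tmp"
result=$(grep '^GRUB_CMDLINE_LINUX_DEFAULT' "$tmp")
echo "$result"
rm -f "$tmp"
```

Note the `sed -i` form assumes GNU sed, which matches the Ubuntu setup described here.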

Xenial set up according to the above steps managed to make itself
unbootable. I don't know why, but I suspect the nvidia proprietary
driver somehow fell out of use and nouveau froze.

The symptom is that a warning triangle (triangle with an exclamation
mark) shows up in the upper right part of the front panel and the
light of the topmost USB port in the front panel starts blinking.

Turning the computer off isn't enough to get rid of the warning
triangle and the blinking USB port light. To get rid of those,
disconnect the power cord for a while and then plug it back in.

After the warning triangle is gone, it's possible to boot Ubuntu
16.04.1 or 17.10 from USB to mount the root volume and make a backup
of the files onto an external disk.

Ubuntu 17.10 now boots on the hardware with nouveau with 3D enabled
(whereas 16.04.1 was 2D-only and the versions in between were broken).
However, before the boot completes, it seems to hang with the text:
[Firmware Bug]: TSC_DEADLINE disabled due to Errata: please update
microcode to version: 0xb20 (or later)
nouveau :01:00.0: bus: MMIO write of 012c FAULT at 10eb14
[ IBUS ]

Wait for a while. (I didn't time it, but the wait time is on the order
of half a minute to a couple of minutes.) Then the boot resumes.

The BIOS update from 2017-09-05 does not update the microcode to the
version the kernel wants to see. However, once Ubuntu 17.10 has been
installed, the intel-microcode package does. (It's probably a good
idea to update the BIOS for AMT and TPM bug fixes anyway.)

I left the box for installing proprietary drivers during installation
unchecked. I'm not sure whether checking the box would install the nvidia
proprietary drivers, but the point of going with 17.10 instead of
starting with 16.04.1 again is to use nouveau for OpenGL and avoid the
integration problems with the nvidia proprietary drivers.

The wait time during boot repeats with the installed system, but
during the wait, there's no text on the screen by default. Just wait.

On this system, with Ubuntu 17.10, nouveau seems to even qualify for
WebGL2 in Firefox.

There's a huge downside, though:
If the screen stops consuming the DisplayPort data stream, the
graphical session gets killed! So if you do normal things like turn
the screen off or switch input on a multi-input screen, your graphical
session is no longer there when you come back and you get a login
screen instead! (I haven't yet formed an opinion on whether this
behavior can be lived with or not.)

This applies to the live session on the install media, too. Therefore,
it's best to use another virtual console (ctrl-alt-F3) for restoring
backups. (GUI is now some weird dual existence in ctrl-alt-F1 and
ctrl-alt-F2.)

(Fedora 26 still doesn't boot on this hardware. I didn't try Fedora 27 beta.)

P.S. It would be good for productivity if Mozilla issued slightly less
cutting-edge Nvidia GPUs to developers to increase the probability
that support in nouveau has had time to bake.

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-24 Thread Ted Mielczarek
On Fri, Mar 24, 2017, at 12:10 AM, Jeff Muizelaar wrote:
> I have a Ryzen 7 1800X and it does a Windows clobber build in ~20min
> (3 min of that is configure which seems higher than what I've seen on
> other machines). This compares pretty favorably to the Lenovo p710
> machines that people are getting which do 18min clobber builds and
> cost more than twice the price.

Just as a data point, I have one of those Lenovo P710 machines and I get
14-15 minute clobber builds on Windows.

-Ted


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-24 Thread Gabriele Svelto
On 24/03/2017 05:39, Gregory Szorc wrote:
> The introduction of Ryzen has literally changed the landscape
> and the calculus that determines what hardware engineers should have.
> Before I disappeared for ~1 month, I was working with IT and management to
> define an optimal hardware load out for Firefox engineers. I need to resume
> that work and fully evaluate Ryzen...

The fact that with the appropriate motherboard they also support ECC
memory (*) made a lot of Xeon offerings a lot less appealing. Especially
the workstation-oriented ones.

 Gabriele

*) Which is useful to those of us who keep their machines on for weeks
w/o rebooting or just want to have a more reliable setup





Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Gregory Szorc
On Thu, Mar 23, 2017 at 9:10 PM, Jeff Muizelaar 
wrote:

> I have a Ryzen 7 1800X and it does a Windows clobber build in ~20min
> (3 min of that is configure which seems higher than what I've seen on
> other machines).


Make sure your power settings are aggressive. Configure and its single-core
usage is where Xeons and their conservative clocking really slowed down
compared to consumer CPUs (bug 1323106). Also, configure time can vary
significantly depending on page cache hits. So please run multiple times.

On Windows, I measure `mach configure` separately from `mach build` because
configure on Windows is just so slow and skews results. For Gecko
developers, I feel we want to optimize for compile time, so I tend to give
less weight to configure performance.
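
Since a single configure measurement is skewed by page-cache state, a repeated-run loop gives a fairer number. A minimal sketch (CMD is a placeholder; in a real gecko checkout you would point it at `./mach configure`):

```shell
#!/bin/sh
# Time a build step over several runs, since page-cache warmth makes any
# single configure measurement unreliable. CMD is a stand-in command;
# substitute "./mach configure" in a real gecko checkout.
CMD=${CMD:-"sleep 1"}
runs=0
for i in 1 2 3; do
  start=$(date +%s)
  sh -c "$CMD" >/dev/null 2>&1
  end=$(date +%s)
  echo "run $i: $((end - start))s"
  runs=$((runs + 1))
done
```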


> This compares pretty favorably to the Lenovo p710
> machines that people are getting which do 18min clobber builds and
> cost more than twice the price.
>

I assume this is Windows and VS2015?

FWIW, I've been very interested in getting my hands on a Ryzen. I wouldn't
be at all surprised if the Ryzens offered better value than dual-socket
Xeons. The big question is whether it is unquestionably better. For some
people (like remote employees who don't have access to an icecream
cluster), you can probably justify the extreme cost of a dual socket Xeon
over a Ryzen, even if the difference is only like 20%. Of course, the
counterargument is you can probably buy 2 Ryzen machines in place of a dual
socket Xeon. The introduction of Ryzen has literally changed the landscape
and the calculus that determines what hardware engineers should have.
Before I disappeared for ~1 month, I was working with IT and management to
define an optimal hardware load out for Firefox engineers. I need to resume
that work and fully evaluate Ryzen...



>
> -Jeff
>
> On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert 
> wrote:
> > They're basically out of stock now, but if you can find them, old
> > refurbished 2x Intel Xeon E5-2670 (2.6GHz Eight Core) machines were
> > bottoming out under $1000/ea. It happily does GCC builds in 8m, and I
> > have clang builds down to 5.5. As the v2s leave warranty, similar
> > machines may hit the market again.
> >
> > I'm interested to find out how the new Ryzen chips do. It should fit
> > their niche well. I have one at home now, so I'll test when I get a
> > chance.
> >
> > On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
> >  wrote:
> >> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
> >>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
> >>>
> >>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc 
> wrote:
> >>> >
> >>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
> >>> >
> >>> > 24s here. So faster link times and significantly faster clobber
> times. I'm
> >>> > sold!
> >>> >
> >>> > Any motherboard recommendations? If we want developers to use
> machines
> >>> > like this, maintaining a current config in ServiceNow would probably
> >>> > help.
> >>>
> >>>
> >>> Until the ServiceNow catalog is updated...
> >>>
> >>> The Lenovo ThinkStation P710 is a good starting point (
> >>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/
> ).
> >>> From the default config:
> >>>
> >>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
> >>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
> >>> * Under "Non-RAID Hard Drives" select whatever works for you. I
> recommend a
> >>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
> >>>
> >>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
> >>> (plus/minus a few hundred depending on configuration specific).
> >>>
> >>> FWIW, I priced out similar specs for a HP Z640 and the markup on the
> CPUs
> >>> is absurd (costs >$2000 more when fully configured). Lenovo's
> >>> markup/pricing seems reasonable by comparison. Although I'm sure
> someone
> >>> somewhere will sell the same thing for cheaper.
> >>>
> >>> If you don't need the dual socket Xeons, go for an i7-6700K at the
> least. I
> >>> got the
> >>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-
> 750se-windows-7-desktop-p5q80av-aba-1
> >>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM,
> and a
> >>> 512 GB SSD, the price was very reasonable compared to similar
> >>> configurations at Dell, HP, others.
> >>>
> >>> The just-released Broadwell-E processors with 6-10 cores are also nice
> >>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out
> so I
> >>> have no links to share. They should be <$2600 fully configured. That's
> a
> >>> good price point between the i7-6700K and a dual socket Xeon. Although
> if
> >>> you do lots of C++ compiling, you should get the dual socket Xeons
> (unless
> >>> you have access to more cores in an office or a remote machine).
> >>
> >>  The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
>>  hdd for $1525 cad so probably just under $1000 usd.  With just that
>>  machine I can do a 10 minute linux debug build.

Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Muizelaar
On Thu, Mar 23, 2017 at 11:42 PM, Robert O'Callahan
 wrote:
> On Fri, Mar 24, 2017 at 1:12 PM, Ehsan Akhgari  
> wrote:
>> On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert  wrote:
>>
>>> I'm interested to find out how the new Ryzen chips do. It should fit
>>> their niche well. I have one at home now, so I'll test when I get a
>>> chance.
>>>
>>
>> Ryzen currently on Linux implies no rr, so beware of that.
>
> A contributor almost got Piledriver working with rr, but that was
> based on "LWP" features that apparently are not in Ryzen. If anyone
> finds any detailed documentation of the hardware performance counters
> in Ryzen, let us know! All I can find is PR material.

I have NDA access to at least some of the Ryzen documentation and I
haven't been able to find anything more on the performance counters
other than:

AMD64 Architecture Programmer’s Manual
Volume 2: System Programming
3.27 December 2016

This document is already publicly available.

I also have one of the chips so I can test code. If there are specific
questions I can also forward them through our AMD contacts.

-Jeff


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Muizelaar
I have a Ryzen 7 1800X and it does a Windows clobber build in ~20min
(3 min of that is configure which seems higher than what I've seen on
other machines). This compares pretty favorably to the Lenovo p710
machines that people are getting which do 18min clobber builds and
cost more than twice the price.

-Jeff

On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert  wrote:
> They're basically out of stock now, but if you can find them, old
> refurbished 2x Intel Xeon E5-2670 (2.6GHz Eight Core) machines were
> bottoming out under $1000/ea. It happily does GCC builds in 8m, and I
> have clang builds down to 5.5. As the v2s leave warranty, similar
> machines may hit the market again.
>
> I'm interested to find out how the new Ryzen chips do. It should fit
> their niche well. I have one at home now, so I'll test when I get a
> chance.
>
> On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
>  wrote:
>> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
>>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>>
>>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>>> >
>>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>>> >
>>> > 24s here. So faster link times and significantly faster clobber times. I'm
>>> > sold!
>>> >
>>> > Any motherboard recommendations? If we want developers to use machines
>>> > like this, maintaining a current config in ServiceNow would probably
>>> > help.
>>>
>>>
>>> Until the ServiceNow catalog is updated...
>>>
>>> The Lenovo ThinkStation P710 is a good starting point (
>>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>>> From the default config:
>>>
>>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>>> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
>>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>>
>>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>>> (plus/minus a few hundred depending on configuration specific).
>>>
>>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>>> is absurd (costs >$2000 more when fully configured). Lenovo's
>>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>>> somewhere will sell the same thing for cheaper.
>>>
>>> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
>>> got the
>>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>>> 512 GB SSD, the price was very reasonable compared to similar
>>> configurations at Dell, HP, others.
>>>
>>> The just-released Broadwell-E processors with 6-10 cores are also nice
>>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>>> have no links to share. They should be <$2600 fully configured. That's a
>>> good price point between the i7-6700K and a dual socket Xeon. Although if
>>> you do lots of C++ compiling, you should get the dual socket Xeons (unless
>>> you have access to more cores in an office or a remote machine).
>>
>>  The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
>>  hdd for $1525 cad so probably just under $1000 usd.  With just that
>>  machine I can do a 10 minute linux debug build.  For less than the
>>  price of the e3 machine quoted above I can buy 4 of those machines
>>  which I expect would produce build times under 5:00.
>>
>> I believe with 32gb of ram there's enough fs cache disk performance
>> doesn't actually matter, but it might be worth investigating moving a
>> ssd to that machine at some point.
>>
>> So I would tend to conclude Xeons are not a great deal unless you really
>> need to build for windows a lot before someone gets icecc working there.
>>
>> Trev
>>
>>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
>>> out of the box. And, yes, you should use Windows 10 as your primary OS
>>> because that's what our users mostly use. I run Hyper-V under Windows 10
>>> and have at least 1 Linux VM running at all times. With 32 GB in the
>>> system, there's plenty of RAM to go around and Linux performance under the
>>> VM is excellent. It feels like I'm dual booting without the rebooting part.
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Robert O'Callahan
On Fri, Mar 24, 2017 at 1:12 PM, Ehsan Akhgari  wrote:
> On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert  wrote:
>
>> I'm interested to find out how the new Ryzen chips do. It should fit
>> their niche well. I have one at home now, so I'll test when I get a
>> chance.
>>
>
> Ryzen currently on Linux implies no rr, so beware of that.

A contributor almost got Piledriver working with rr, but that was
based on "LWP" features that apparently are not in Ryzen. If anyone
finds any detailed documentation of the hardware performance counters
in Ryzen, let us know! All I can find is PR material.

Rob


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Ehsan Akhgari
On Thu, Mar 23, 2017 at 7:51 PM, Jeff Gilbert  wrote:

> I'm interested to find out how the new Ryzen chips do. It should fit
> their niche well. I have one at home now, so I'll test when I get a
> chance.
>

Ryzen currently on Linux implies no rr, so beware of that.


> On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
>  wrote:
> > On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
> >> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
> >>
> >> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc 
> wrote:
> >> >
> >> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
> >> >
> >> > 24s here. So faster link times and significantly faster clobber
> times. I'm
> >> > sold!
> >> >
> >> > Any motherboard recommendations? If we want developers to use machines
> >> > like this, maintaining a current config in ServiceNow would probably
> >> > help.
> >>
> >>
> >> Until the ServiceNow catalog is updated...
> >>
> >> The Lenovo ThinkStation P710 is a good starting point (
> >> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
> >> From the default config:
> >>
> >> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
> >> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
> >> * Under "Non-RAID Hard Drives" select whatever works for you. I
> recommend a
> >> 512 GB SSD as the primary HD. Throw in more drives if you need them.
> >>
> >> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
> >> (plus/minus a few hundred depending on configuration specific).
> >>
> >> FWIW, I priced out similar specs for a HP Z640 and the markup on the
> CPUs
> >> is absurd (costs >$2000 more when fully configured). Lenovo's
> >> markup/pricing seems reasonable by comparison. Although I'm sure someone
> >> somewhere will sell the same thing for cheaper.
> >>
> >> If you don't need the dual socket Xeons, go for an i7-6700K at the
> least. I
> >> got the
> >> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-
> 750se-windows-7-desktop-p5q80av-aba-1
> >> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and
> a
> >> 512 GB SSD, the price was very reasonable compared to similar
> >> configurations at Dell, HP, others.
> >>
> >> The just-released Broadwell-E processors with 6-10 cores are also nice
> >> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so
> I
> >> have no links to share. They should be <$2600 fully configured. That's a
> >> good price point between the i7-6700K and a dual socket Xeon. Although
> if
> >> you do lots of C++ compiling, you should get the dual socket Xeons
> (unless
> >> you have access to more cores in an office or a remote machine).
> >
> >  The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
> >  hdd for $1525 cad so probably just under $1000 usd.  With just that
> >  machine I can do a 10 minute linux debug build.  For less than the
> >  price of the e3 machine quoted above I can buy 4 of those machines
> >  which I expect would produce build times under 5:00.
> >
> > I believe with 32gb of ram there's enough fs cache disk performance
> > doesn't actually matter, but it might be worth investigating moving a
> > ssd to that machine at some point.
> >
> > So I would tend to conclude Xeons are not a great deal unless you really
> > need to build for windows a lot before someone gets icecc working there.
> >
> > Trev
> >
> >> If you buy a machine today, watch out for Windows 7. The free Windows 10
> >> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro
> license
> >> out of the box. And, yes, you should use Windows 10 as your primary OS
> >> because that's what our users mostly use. I run Hyper-V under Windows 10
> >> and have at least 1 Linux VM running at all times. With 32 GB in the
> >> system, there's plenty of RAM to go around and Linux performance under
> the
> >> VM is excellent. It feels like I'm dual booting without the rebooting
> part.
>



-- 
Ehsan


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Jeff Gilbert
They're basically out of stock now, but if you can find them, old
refurbished 2x Intel Xeon E5-2670 (2.6 GHz, eight-core) machines were
bottoming out under $1000 each. Such a machine happily does GCC builds in
8 minutes, and I have Clang builds down to 5.5. As the v2s leave warranty,
similar machines may hit the market again.

I'm interested to find out how the new Ryzen chips do. It should fit
their niche well. I have one at home now, so I'll test when I get a
chance.

On Wed, Jul 6, 2016 at 12:06 PM, Trevor Saunders
 wrote:
> On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>
>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>> >
>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>> >
>> > 24s here. So faster link times and significantly faster clobber times. I'm
>> > sold!
>> >
>> > Any motherboard recommendations? If we want developers to use machines
>> > like this, maintaining a current config in ServiceNow would probably
>> > help.
>>
>>
>> Until the ServiceNow catalog is updated...
>>
>> The Lenovo ThinkStation P710 is a good starting point (
>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>> From the default config:
>>
>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>
>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>> (plus/minus a few hundred depending on configuration specific).
>>
>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>> is absurd (costs >$2000 more when fully configured). Lenovo's
>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>> somewhere will sell the same thing for cheaper.
>>
>> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
>> got the
>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>> 512 GB SSD, the price was very reasonable compared to similar
>> configurations at Dell, HP, others.
>>
>> The just-released Broadwell-E processors with 6-10 cores are also nice
>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>> have no links to share. They should be <$2600 fully configured. That's a
>> good price point between the i7-6700K and a dual socket Xeon. Although if
>> you do lots of C++ compiling, you should get the dual socket Xeons (unless
>> you have access to more cores in an office or a remote machine).
>
>  The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
>  hdd for $1525 cad so probably just under $1000 usd.  With just that
>  machine I can do a 10 minute linux debug build.  For less than the
>  price of the e3 machine quoted above I can buy 4 of those machines
>  which I expect would produce build times under 5:00.
>
> I believe with 32gb of ram there's enough fs cache disk performance
> doesn't actually matter, but it might be worth investigating moving a
> ssd to that machine at some point.
>
> So I would tend to conclude Xeons are not a great deal unless you really
> need to build for windows a lot before someone gets icecc working there.
>
> Trev
>
>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
>> out of the box. And, yes, you should use Windows 10 as your primary OS
>> because that's what our users mostly use. I run Hyper-V under Windows 10
>> and have at least 1 Linux VM running at all times. With 32 GB in the
>> system, there's plenty of RAM to go around and Linux performance under the
>> VM is excellent. It feels like I'm dual booting without the rebooting part.


Re: Faster gecko builds with IceCC on Mac and Linux

2017-03-23 Thread Henri Sivonen
On Wed, Jul 6, 2016 at 2:42 AM, Gregory Szorc  wrote:
> The Lenovo ThinkStation P710 is a good starting point (
> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).

To help others who follow the above advice save some time:

Xeons don't have Intel integrated GPUs, so one has to figure how to
get this up and running with a discrete GPU. In the case of Nvidia
Quadro M2000, the latest Ubuntu and Fedora install images don't work.

This works:

1. Disable or enable the TPM. (By default, it's in a mode where the
   kernel can see it but it doesn't work. It should either be hidden or
   be allowed to work.)
2. Disable secure boot. (Nvidia's proprietary drivers don't work with
   secure boot enabled.)
3. Use the Ubuntu 16.04.1 install image (i.e. an intentionally old
   image--you can upgrade later).
4. After installing, edit /etc/default/grub and set
   GRUB_CMDLINE_LINUX_DEFAULT="" (i.e. make the string empty; without
   this, the nvidia proprietary driver conflicts with LUKS pass phrase
   input).
5. Run update-initramfs -u, then update-grub.
6. apt install nvidia-375
7. Then upgrade the rest. Even rolling the HWE stack forward works
   *after* the above steps.

(For a Free Software alternative, install Ubuntu 16.04.1, stick to 2D
graphics from nouveau with llvmpipe for 3D and be sure never to roll
the HWE stack forward.)
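For the GRUB step above, the edit amounts to blanking one variable in
/etc/default/grub, followed by update-initramfs -u and update-grub as root.
A minimal sketch of just that text transformation, operating on a string
rather than the real file:

```python
import re

def clear_default_cmdline(grub_text):
    # Replace the whole GRUB_CMDLINE_LINUX_DEFAULT line with an empty value,
    # leaving every other line of the file untouched.
    return re.sub(r'^GRUB_CMDLINE_LINUX_DEFAULT=.*$',
                  'GRUB_CMDLINE_LINUX_DEFAULT=""',
                  grub_text, flags=re.MULTILINE)

sample = 'GRUB_DEFAULT=0\nGRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n'
print(clear_default_cmdline(sample))
# GRUB_DEFAULT=0
# GRUB_CMDLINE_LINUX_DEFAULT=""
```

In the real workflow you would read and write /etc/default/grub itself (as
root) and then run update-initramfs -u and update-grub, per the steps above.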

-- 
Henri Sivonen
hsivo...@hsivonen.fi
https://hsivonen.fi/


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-06 Thread Trevor Saunders
On Tue, Jul 05, 2016 at 04:42:09PM -0700, Gregory Szorc wrote:
> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
> 
> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
> >
> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
> >
> > 24s here. So faster link times and significantly faster clobber times. I'm
> > sold!
> >
> > Any motherboard recommendations? If we want developers to use machines
> > like this, maintaining a current config in ServiceNow would probably
> > help.
> 
> 
> Until the ServiceNow catalog is updated...
> 
> The Lenovo ThinkStation P710 is a good starting point (
> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
> From the default config:
> 
> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
> 512 GB SSD as the primary HD. Throw in more drives if you need them.
> 
> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
> (plus/minus a few hundred depending on configuration specific).
> 
> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
> is absurd (costs >$2000 more when fully configured). Lenovo's
> markup/pricing seems reasonable by comparison. Although I'm sure someone
> somewhere will sell the same thing for cheaper.
> 
> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
> got the
> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
> 512 GB SSD, the price was very reasonable compared to similar
> configurations at Dell, HP, others.
> 
> The just-released Broadwell-E processors with 6-10 cores are also nice
> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
> have no links to share. They should be <$2600 fully configured. That's a
> good price point between the i7-6700K and a dual socket Xeon. Although if
> you do lots of C++ compiling, you should get the dual socket Xeons (unless
> you have access to more cores in an office or a remote machine).

 The other week I built a machine with a 6800k, 32gb of ram, and a 2 tb
 hdd for $1525 cad so probably just under $1000 usd.  With just that
 machine I can do a 10 minute linux debug build.  For less than the
 price of the e3 machine quoted above I can buy 4 of those machines
 which I expect would produce build times under 5:00.
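That scaling estimate can be sanity-checked with a back-of-the-envelope
Amdahl's-law model; the 25% serial fraction below (configure, linking, make
overhead) is an illustrative assumption, not a measured value:

```python
def build_time(single_machine_minutes, machines, serial_fraction=0.25):
    # Serial work (configure, linking) doesn't benefit from extra machines;
    # only the parallel compile portion is divided across the cluster.
    serial = single_machine_minutes * serial_fraction
    parallel = single_machine_minutes * (1 - serial_fraction)
    return serial + parallel / machines

# A 10-minute single-machine build spread over 4 comparable boxes:
print(f"{build_time(10.0, 4):.2f} min")  # 2.5 + 7.5/4 = 4.38 min
```

Even with a fairly generous serial fraction the estimate lands under 5
minutes; in practice the serial link step and network transfer of sources
tend to set the floor.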

I believe with 32gb of ram there's enough fs cache disk performance
doesn't actually matter, but it might be worth investigating moving a
ssd to that machine at some point.

So I would tend to conclude Xeons are not a great deal unless you really
need to build for windows a lot before someone gets icecc working there.

Trev

> If you buy a machine today, watch out for Windows 7. The free Windows 10
> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
> out of the box. And, yes, you should use Windows 10 as your primary OS
> because that's what our users mostly use. I run Hyper-V under Windows 10
> and have at least 1 Linux VM running at all times. With 32 GB in the
> system, there's plenty of RAM to go around and Linux performance under the
> VM is excellent. It feels like I'm dual booting without the rebooting part.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-06 Thread Chris H-C
> We're actively looking into a Telemetry-like system for mach and the
> build system.

I heartily endorse this event or product and would like to subscribe to
your newsletter.

Chris

On Wed, Jul 6, 2016 at 2:41 PM, Gregory Szorc  wrote:

> We're actively looking into a Telemetry-like system for mach and the build
> system.
>
>
> On Wed, Jul 6, 2016 at 9:00 AM, Chris H-C  wrote:
>
>> Are there any scripts for reporting, analysing build times reported by
>> mach? I think this would be really useful data to have, especially to track
>> build system improvements (and regressions) as well as poorly-supported
>> configurations.
>>
>> Chris
>>
>> On Tue, Jul 5, 2016 at 7:42 PM, Gregory Szorc  wrote:
>>
>>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>>
>>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>>> >
>>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>>> >
>>> > 24s here. So faster link times and significantly faster clobber times.
>>> I'm
>>> > sold!
>>> >
>>> > Any motherboard recommendations? If we want developers to use machines
>>> > like this, maintaining a current config in ServiceNow would probably
>>> > help.
>>>
>>>
>>> Until the ServiceNow catalog is updated...
>>>
>>> The Lenovo ThinkStation P710 is a good starting point (
>>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>>> From the default config:
>>>
>>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>>> * Under "Non-RAID Hard Drives" select whatever works for you. I
>>> recommend a
>>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>>
>>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>>> (plus/minus a few hundred depending on configuration specific).
>>>
>>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>>> is absurd (costs >$2000 more when fully configured). Lenovo's
>>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>>> somewhere will sell the same thing for cheaper.
>>>
>>> If you don't need the dual socket Xeons, go for an i7-6700K at the
>>> least. I
>>> got the
>>>
>>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>>> 512 GB SSD, the price was very reasonable compared to similar
>>> configurations at Dell, HP, others.
>>>
>>> The just-released Broadwell-E processors with 6-10 cores are also nice
>>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>>> have no links to share. They should be <$2600 fully configured. That's a
>>> good price point between the i7-6700K and a dual socket Xeon. Although if
>>> you do lots of C++ compiling, you should get the dual socket Xeons
>>> (unless
>>> you have access to more cores in an office or a remote machine).
>>>
>>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro
>>> license
>>> out of the box. And, yes, you should use Windows 10 as your primary OS
>>> because that's what our users mostly use. I run Hyper-V under Windows 10
>>> and have at least 1 Linux VM running at all times. With 32 GB in the
>>> system, there's plenty of RAM to go around and Linux performance under
>>> the
>>> VM is excellent. It feels like I'm dual booting without the rebooting
>>> part.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-06 Thread Gregory Szorc
We're actively looking into a Telemetry-like system for mach and the build
system.

On Wed, Jul 6, 2016 at 9:00 AM, Chris H-C  wrote:

> Are there any scripts for reporting, analysing build times reported by
> mach? I think this would be really useful data to have, especially to track
> build system improvements (and regressions) as well as poorly-supported
> configurations.
>
> Chris
>
> On Tue, Jul 5, 2016 at 7:42 PM, Gregory Szorc  wrote:
>
>> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>>
>> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>> >
>> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>> >
>> > 24s here. So faster link times and significantly faster clobber times.
>> I'm
>> > sold!
>> >
>> > Any motherboard recommendations? If we want developers to use machines
>> > like this, maintaining a current config in ServiceNow would probably
>> > help.
>>
>>
>> Until the ServiceNow catalog is updated...
>>
>> The Lenovo ThinkStation P710 is a good starting point (
>> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
>> From the default config:
>>
>> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
>> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
>> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend
>> a
>> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>>
>> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
>> (plus/minus a few hundred depending on configuration specific).
>>
>> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
>> is absurd (costs >$2000 more when fully configured). Lenovo's
>> markup/pricing seems reasonable by comparison. Although I'm sure someone
>> somewhere will sell the same thing for cheaper.
>>
>> If you don't need the dual socket Xeons, go for an i7-6700K at the least.
>> I
>> got the
>>
>> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
>> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
>> 512 GB SSD, the price was very reasonable compared to similar
>> configurations at Dell, HP, others.
>>
>> The just-released Broadwell-E processors with 6-10 cores are also nice
>> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
>> have no links to share. They should be <$2600 fully configured. That's a
>> good price point between the i7-6700K and a dual socket Xeon. Although if
>> you do lots of C++ compiling, you should get the dual socket Xeons (unless
>> you have access to more cores in an office or a remote machine).
>>
>> If you buy a machine today, watch out for Windows 7. The free Windows 10
>> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
>> out of the box. And, yes, you should use Windows 10 as your primary OS
>> because that's what our users mostly use. I run Hyper-V under Windows 10
>> and have at least 1 Linux VM running at all times. With 32 GB in the
>> system, there's plenty of RAM to go around and Linux performance under the
>> VM is excellent. It feels like I'm dual booting without the rebooting
>> part.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-06 Thread Chris H-C
Are there any scripts for reporting, analysing build times reported by
mach? I think this would be really useful data to have, especially to track
build system improvements (and regressions) as well as poorly-supported
configurations.
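As a starting point, a small script along these lines could aggregate times
scraped from saved build logs; the "M:SS.ss" timestamp format below is an
assumption for illustration, so adapt the regex to whatever your mach output
actually prints:

```python
import re
from statistics import mean

# Hypothetical captured log lines; real `mach build` output may differ.
LOG = """\
 0:07.34 Clobber build
 3:18.02 Clobber build (sccache, warm Rust cache)
 0:22.51 No-op build
"""

def parse_minutes(log):
    # Pull every M:SS.ss timestamp and convert it to fractional minutes.
    return [int(m) + float(s) / 60
            for m, s in re.findall(r'(\d+):(\d+\.\d+)', log)]

times = parse_minutes(LOG)
print(f"{len(times)} builds, mean {mean(times):.2f} min")  # 3 builds, mean 1.27 min
```

The same parsing could feed per-configuration medians or a regression check
in CI.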

Chris

On Tue, Jul 5, 2016 at 7:42 PM, Gregory Szorc  wrote:

> On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:
>
> > On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
> >
> > > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
> >
> > 24s here. So faster link times and significantly faster clobber times.
> I'm
> > sold!
> >
> > Any motherboard recommendations? If we want developers to use machines
> > like this, maintaining a current config in ServiceNow would probably
> > help.
>
>
> Until the ServiceNow catalog is updated...
>
> The Lenovo ThinkStation P710 is a good starting point (
> http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
> From the default config:
>
> * Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
> * Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
> * Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
> 512 GB SSD as the primary HD. Throw in more drives if you need them.
>
> Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
> (plus/minus a few hundred depending on configuration specific).
>
> FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
> is absurd (costs >$2000 more when fully configured). Lenovo's
> markup/pricing seems reasonable by comparison. Although I'm sure someone
> somewhere will sell the same thing for cheaper.
>
> If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
> got the
>
> http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
> a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
> 512 GB SSD, the price was very reasonable compared to similar
> configurations at Dell, HP, others.
>
> The just-released Broadwell-E processors with 6-10 cores are also nice
> (i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
> have no links to share. They should be <$2600 fully configured. That's a
> good price point between the i7-6700K and a dual socket Xeon. Although if
> you do lots of C++ compiling, you should get the dual socket Xeons (unless
> you have access to more cores in an office or a remote machine).
>
> If you buy a machine today, watch out for Windows 7. The free Windows 10
> upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
> out of the box. And, yes, you should use Windows 10 as your primary OS
> because that's what our users mostly use. I run Hyper-V under Windows 10
> and have at least 1 Linux VM running at all times. With 32 GB in the
> system, there's plenty of RAM to go around and Linux performance under the
> VM is excellent. It feels like I'm dual booting without the rebooting part.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 3:58 PM, Ralph Giles  wrote:

> On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>
> > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>
> 24s here. So faster link times and significantly faster clobber times. I'm
> sold!
>
> Any motherboard recommendations? If we want developers to use machines
> like this, maintaining a current config in ServiceNow would probably
> help.


Until the ServiceNow catalog is updated...

The Lenovo ThinkStation P710 is a good starting point (
http://shop.lenovo.com/us/en/workstations/thinkstation/p-series/p710/).
From the default config:

* Choose a 2 x E5-2637v4 or a 2 x E5-2643v4
* Select at least 4 x 8 GB ECC memory sticks (for at least 32 GB)
* Under "Non-RAID Hard Drives" select whatever works for you. I recommend a
512 GB SSD as the primary HD. Throw in more drives if you need them.

Should be ~$4400 for the 2xE5-2637v4 and ~$5600 for the 2xE5-2643v4
(plus/minus a few hundred depending on configuration specifics).

FWIW, I priced out similar specs for a HP Z640 and the markup on the CPUs
is absurd (costs >$2000 more when fully configured). Lenovo's
markup/pricing seems reasonable by comparison. Although I'm sure someone
somewhere will sell the same thing for cheaper.

If you don't need the dual socket Xeons, go for an i7-6700K at the least. I
got the
http://store.hp.com/us/en/pdp/cto-dynamic-kits--1/hp-envy-750se-windows-7-desktop-p5q80av-aba-1
a few months ago and like it. At ~$1500 for an i7-6700K, 32 GB RAM, and a
512 GB SSD, the price was very reasonable compared to similar
configurations at Dell, HP, others.

The just-released Broadwell-E processors with 6-10 cores are also nice
(i7-6850K, i7-6900K). Although I haven't yet priced any of these out so I
have no links to share. They should be <$2600 fully configured. That's a
good price point between the i7-6700K and a dual socket Xeon. Although if
you do lots of C++ compiling, you should get the dual socket Xeons (unless
you have access to more cores in an office or a remote machine).

If you buy a machine today, watch out for Windows 7. The free Windows 10
upgrade from Microsoft is ending soon. Try to get a Windows 10 Pro license
out of the box. And, yes, you should use Windows 10 as your primary OS
because that's what our users mostly use. I run Hyper-V under Windows 10
and have at least 1 Linux VM running at all times. With 32 GB in the
system, there's plenty of RAM to go around and Linux performance under the
VM is excellent. It feels like I'm dual booting without the rebooting part.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Lawrence Mandel
On Tue, Jul 5, 2016 at 6:58 PM, Ralph Giles  wrote:

> On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:
>
> > * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s
>
> 24s here. So faster link times and significantly faster clobber times. I'm
> sold!
>
> Any motherboard recommendations? If we want developers to use machines
> like this, maintaining a current config in ServiceNow would probably
> help.
>

Completely agree. You should not have to figure this out for yourself. We
should provide good recommendations in ServiceNow. I'm looking into
updating the ServiceNow listings with gps.

Lawrence


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Xidorn Quan
On Wed, Jul 6, 2016, at 05:12 AM, Gregory Szorc wrote:
> On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:
> 
> > I work remotely, normally from my laptop, and I have a single (fairly
> > slow) desktop usable as a compile server.
> 
> Gecko developers should have access to 8+ modern cores to compile Gecko.
> Full stop. The cores can be local (from a home office), on a machine in a
> data center you SSH or remote desktop into, or via a compiler farm (like
> IceCC running in an office).

I use my 4-core laptop for building as well... mainly because I found it
inconvenient to maintain a development environment on multiple machines.
I've almost stopped writing patches on my personal MBP due to that.

That said, if I can distribute the build to other machines, I'll happily
buy a new desktop machine and use it as a compiler farm to boost the
build.

- Xidorn


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Ralph Giles
On Tue, Jul 5, 2016 at 3:36 PM, Gregory Szorc  wrote:

> * `mach build binaries` (touch network/dns/DNS.cpp): 14.1s

24s here. So faster link times and significantly faster clobber times. I'm sold!

Any motherboard recommendations? If we want developers to use machines
like this, maintaining a current config in ServiceNow would probably
help.

 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 2:37 PM, Ralph Giles  wrote:

> On Tue, Jul 5, 2016 at 12:12 PM, Gregory Szorc  wrote:
>
> >  I recommend 2x Xeon E5-2637v4 or E5-2643v4.
>
> For comparison's sake, what kind of routine and clobber build times do
> you see on a system like this? How much does the extra cache on Xeon
> help vs something like a 4 GHz i7?
>
> My desktop machine is five years old, but it's still faster than my
> MacBook Pro, so I've never bothered upgrading beyond newer SSDs. If
> there's a substantial improvement available in build times it would be
> easier to justify new hardware.
>
> A nop build on my desktop is 22s currently. Touching a cpp file (so
> re-linking xul) is 46s. A clobber build is something like 17 minutes.
>

Let's put it this way: I've built on AWS c4.8xlarge instances (Xeon E5-2666
v3 with 36 vCPUs) and achieved clobber build times comparable to the best
numbers the Toronto office has reported with icecc (between 3.5 and 4
minutes). That's 36 vCPUs @ 2.9-3.2/3.5GHz (base vs turbo single/all cores
frequency).

I don't have access to a 2xE5-2643v4 machine, but I do have access to a 2 x
E5-2637v4 with 32 GB RAM and an SSD running CentOS 7 (Clang 3.4.2 + gold
linker):

* clobber (minus configure): 368s (6:08)
* `mach build` (no-op): 24s
* `mach build binaries` (no-op): 3.4s
* `mach build binaries` (touch network/dns/DNS.cpp): 14.1s

I'm pretty sure the clobber time would be a little faster with a newer
Clang (also, GCC is generally faster than Clang).

That's 8 physical cores + hyperthreading (16 reported CPUs) @ 3.5 GHz. A
2643v4 would be 12 physical cores @ 3.4 GHz. So 28 GHz vs 40.8 GHz. That
should at least translate to 90s clobber build time savings. So 4-4.5
minutes. Not too shabby. And I'm sure they make good space heaters too.

FWIW, my i7-6700K (4+4 cores @ 4.0 GHz) is currently taking ~840s (~14:00)
for clobber builds (with Ubuntu 16.04 and a different toolchain however).
Those extra cores (even at lower clock speeds) really do matter.
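The aggregate-frequency arithmetic above can be turned into a rough Amdahl-style
estimate. The 90% parallel fraction below is an assumption for illustration
(linking libxul and similar steps stay serial); it is not a number measured in
this thread:

```python
# Back-of-envelope build-time scaling, mirroring the message's arithmetic.
# Assumption (not from the thread): ~90% of a clobber build parallelizes;
# the rest (libxul link, etc.) is serial, so Amdahl's law applies.

def aggregate_ghz(cores: int, clock_ghz: float) -> float:
    """Total aggregate frequency across all physical cores."""
    return cores * clock_ghz

def scaled_time(base_seconds: float, speedup: float,
                parallel_frac: float = 0.9) -> float:
    """Amdahl-style estimate: only the parallel fraction speeds up."""
    return base_seconds * ((1 - parallel_frac) + parallel_frac / speedup)

e5_2637v4 = aggregate_ghz(8, 3.5)    # 28.0 GHz (2 sockets x 4 cores @ 3.5)
e5_2643v4 = aggregate_ghz(12, 3.4)   # 40.8 GHz (2 sockets x 6 cores @ 3.4)
speedup = e5_2643v4 / e5_2637v4      # ~1.46x more aggregate throughput

# 368 s was the clobber time measured on the 2x E5-2637v4 box above.
print(round(scaled_time(368, speedup)))  # ~264 s, i.e. roughly 100 s saved
```

Consistent with the ballpark "at least 90s" savings estimated in the message.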


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 2:27 PM, Chris Pearce  wrote:

> It would be cool if, once distributed compilation is reliable, `./mach
> mercurial-setup` could 1. prompt you to enable using the local network's
> infrastructure for compilation, and 2. prompt you to enable sharing your
> CPUs with the local network for compilation.
>
>
We've already discussed this in build system meetings. There are a number
of optimizations around detection of your build environment that can be
made. Unfortunately I don't think we have any bugs on file yet.


> Distributing a Windows-friendly version inside the MozillaBuild package
> would be nice too.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 2:33 PM, Masatoshi Kimura 
wrote:

> Oh, my laptop has only 4 cores and I won't buy a machine or a compiler
> farm account only to develop Gecko because my machine works perfectly
> for all my other purposes.
>
> This is not the first time you've blamed my poor hardware. Mozilla (you are
> a Mozilla employee, aren't you?) does not want my contribution? Thank
> you very much!
>

My last comment was aimed mostly at Mozilla employees. We still support
building Firefox/Gecko on older machines. Of course, it takes longer unless
you have fast internet to access caches or a modern machine. That's the sad
reality of large software projects. Your contributions are welcome no
matter what machine you use. But having a faster machine should allow you
to contribute more/faster, which is why Mozilla (the company) wants its
employees to have fast machines.

FWIW, Mozilla has been known to send community contributors hardware so
they can have a better development experience. Send an email to
mh...@mozilla.com to inquire.


>
> On 2016/07/06 4:12, Gregory Szorc wrote:
> > On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:
> >
> >> I work remotely, normally from my laptop, and I have a single (fairly
> >> slow) desktop usable as a compile server.
> >>
> >
> > Gecko developers should have access to 8+ modern cores to compile Gecko.
> > Full stop. The cores can be local (from a home office), on a machine in a
> > data center you SSH or remote desktop into, or via a compiler farm (like
> > IceCC running in an office).
> >
> > If you work from home full time, you should probably have a modern and
> > beefy desktop at home. I recommend 2x Xeon E5-2637v4 or E5-2643v4. Go
> with
> > the E5 v4's, as the v3's are already obsolete. If you go with the higher
> > core count Xeons, watch out for clock speed: parts of the build like
> > linking libxul are still bound by the speed of a single core and the
> Xeons
> > with higher core counts tend to drop off in CPU frequency pretty fast.
> That
> > means slower libxul links and slower builds.
> >
> > Yes, dual socket Xeons will be expensive and more than you would pay for
> a
> > personal machine. But the cost is insignificant compared to your cost as
> an
> > employee paid to work on Gecko. So don't let the cost of something that
> > would allow you to do your job better discourage you from asking for
> > something! If you hit resistance buying a dual core Xeon machine, ping
> > Lawrence Mandel, as he possesses jars of developer productivity
> lubrication
> > that have the magic power of unblocking purchase requests.
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Ralph Giles
On Tue, Jul 5, 2016 at 12:12 PM, Gregory Szorc  wrote:

>  I recommend 2x Xeon E5-2637v4 or E5-2643v4.

For comparison's sake, what kind of routine and clobber build times do
you see on a system like this? How much does the extra cache on Xeon
help vs something like a 4 GHz i7?

My desktop machine is five years old, but it's still faster than my
MacBook Pro, so I've never bothered upgrading beyond newer SSDs. If
there's a substantial improvement available in build times it would be
easier to justify new hardware.

A nop build on my desktop is 22s currently. Touching a cpp file (so
re-linking xul) is 46s. A clobber build is something like 17 minutes.

 -r


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Masatoshi Kimura
Oh, my laptop has only 4 cores and I won't buy a machine or a compiler
farm account only to develop Gecko because my machine works perfectly
for all my other purposes.

This is not the first time you've blamed my poor hardware. Mozilla (you are
a Mozilla employee, aren't you?) does not want my contribution? Thank
you very much!

On 2016/07/06 4:12, Gregory Szorc wrote:
> On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:
> 
>> I work remotely, normally from my laptop, and I have a single (fairly
>> slow) desktop usable as a compile server.
>>
> 
> Gecko developers should have access to 8+ modern cores to compile Gecko.
> Full stop. The cores can be local (from a home office), on a machine in a
> data center you SSH or remote desktop into, or via a compiler farm (like
> IceCC running in an office).
> 
> If you work from home full time, you should probably have a modern and
> beefy desktop at home. I recommend 2x Xeon E5-2637v4 or E5-2643v4. Go with
> the E5 v4's, as the v3's are already obsolete. If you go with the higher
> core count Xeons, watch out for clock speed: parts of the build like
> linking libxul are still bound by the speed of a single core and the Xeons
> with higher core counts tend to drop off in CPU frequency pretty fast. That
> means slower libxul links and slower builds.
> 
> Yes, dual socket Xeons will be expensive and more than you would pay for a
> personal machine. But the cost is insignificant compared to your cost as an
> employee paid to work on Gecko. So don't let the cost of something that
> would allow you to do your job better discourage you from asking for
> something! If you hit resistance buying a dual core Xeon machine, ping
> Lawrence Mandel, as he possesses jars of developer productivity lubrication
> that have the magic power of unblocking purchase requests.
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
> 


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Chris Pearce
It would be cool if, once distributed compilation is reliable, `./mach
mercurial-setup` could 1. prompt you to enable using the local network's
infrastructure for compilation, and 2. prompt you to enable sharing your
CPUs with the local network for compilation.

Distributing a Windows-friendly version inside the MozillaBuild package would 
be nice too.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 11:08 AM, Steve Fink  wrote:

> I work remotely, normally from my laptop, and I have a single (fairly
> slow) desktop usable as a compile server.
>

Gecko developers should have access to 8+ modern cores to compile Gecko.
Full stop. The cores can be local (from a home office), on a machine in a
data center you SSH or remote desktop into, or via a compiler farm (like
IceCC running in an office).

If you work from home full time, you should probably have a modern and
beefy desktop at home. I recommend 2x Xeon E5-2637v4 or E5-2643v4. Go with
the E5 v4's, as the v3's are already obsolete. If you go with the higher
core count Xeons, watch out for clock speed: parts of the build like
linking libxul are still bound by the speed of a single core and the Xeons
with higher core counts tend to drop off in CPU frequency pretty fast. That
means slower libxul links and slower builds.

Yes, dual socket Xeons will be expensive and more than you would pay for a
personal machine. But the cost is insignificant compared to your cost as an
employee paid to work on Gecko. So don't let the cost of something that
would allow you to do your job better discourage you from asking for
something! If you hit resistance buying a dual core Xeon machine, ping
Lawrence Mandel, as he possesses jars of developer productivity lubrication
that have the magic power of unblocking purchase requests.


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Steve Fink
I work remotely, normally from my laptop, and I have a single (fairly 
slow) desktop usable as a compile server. (Which I normally leave off, 
but when I'm doing a lot of compiling I'll turn it on. It's old and 
power-hungry.)


I used distcc for a long time, but more recently have switched to icecream.

With distcc, building standalone on the laptop takes longer than building
on the laptop with distcc offloading to the compile server, which in turn
takes longer than building standalone on the compile server itself. (So if
I wanted the fastest builds, I'd ditch the laptop and just do everything
on the compile server.)


I haven't checked, but I would guess it's about the same story with icecc.

Both have given me numerous problems. distcc would fairly often get into 
a state where it would spend far more time sending and receiving data 
than it saved on compiling. I suspect it was some sort of 
bufferbloat-type problem. I poked at it a little, setting queue sizes 
and things, but never satisfactorily resolved it. I would just leave the 
graphical distcc monitor open, and notice when things started to go south.


With icecream, it's much more common to get complete failure -- every 
compile command starts returning weird icecc error messages, and the 
build slows way down because everything has to fail the icecc attempt 
before it falls back to building locally. I've tried digging into it on 
multiple occasions, to no avail, and with some amount of restarting it 
magically resolves itself.


At least mostly -- I still get an occasional failure message here and 
there, but it retries the build locally so it doesn't mess anything up.


I've also attempted to use a machine in the MTV office as an additional 
lower priority compile server, with fairly disastrous results. This was 
with distcc and a much older version of the build system, but it ended 
up slowing down the build substantially.


I've long thought it would be nice to have some magical integration 
between some combination of a distributed compiler, mercurial, and 
ccache. You'd kick off a build, and it would predict object files that 
you'd be needing in the future and download them into your local cache. 
Then when the build got to that part, it would already have that build 
in its cache and use it. If the network transfer were too slow, the 
build would just see a cache miss and rebuild it instead. (The optional 
mercurial portion would be to accelerate knowing which files have and 
have not changed, without needing to checksum them.)
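The cache idea sketched above -- content-addressed objects, eagerly prefetched,
with a local rebuild on any miss -- can be modeled with a toy Python sketch.
Everything here (class name, key scheme, `prefetch` method) is hypothetical
illustration, not ccache's actual design:

```python
# Toy model of a content-addressed object cache with background prefetch.
# Keys hash the source contents plus the compile command, so any change to
# either yields a clean miss and a local rebuild.
import hashlib

class ObjectCache:
    def __init__(self):
        self._store = {}  # key -> object bytes (stand-in for disk/network)

    @staticmethod
    def key(source: bytes, command: str) -> str:
        h = hashlib.sha256()
        h.update(command.encode())
        h.update(source)
        return h.hexdigest()

    def prefetch(self, key: str, obj: bytes) -> None:
        """What the predictive background downloader would do."""
        self._store[key] = obj

    def get_or_build(self, source: bytes, command: str, build):
        """Return the cached object, or fall back to building locally."""
        k = self.key(source, command)
        if k in self._store:
            return self._store[k], "hit"
        obj = build(source)       # slow path: compile locally
        self._store[k] = obj      # populate for next time
        return obj, "miss"

cache = ObjectCache()
src = b"int main() { return 0; }"
compile_stub = lambda s: b"OBJ:" + s  # stands in for running the compiler
obj, status = cache.get_or_build(src, "cc -c main.c", compile_stub)
print(status)  # first request misses and builds locally
obj2, status2 = cache.get_or_build(src, "cc -c main.c", compile_stub)
print(status2)  # second request hits the populated cache
```

A slow network transfer just shows up as a miss, exactly as described above:
the build falls back to compiling locally and nothing breaks.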


All of that is just for gaining some use of remote infrastructure over a 
high latency/low throughput network.


On a related note, I wonder how much of a gain it would be to compile to 
separate debug info files, and then transfer them using a binary diff (a 
la rsync against some older local version) and/or (crazytalk here) 
transfer them in a post-build step that you don't necessarily have to 
wait for before running the binary. Think of it as a remote symbol 
server, locally cached and eagerly populated but in the background.




Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Gregory Szorc
On Tue, Jul 5, 2016 at 7:07 AM, Michael Layzell 
wrote:

> I'm certain it's possible to get a windows build working, the problem is
> that:
>
> a) We would need to modify the client to understand cl-style flags (I don't
> think it does right now)
> b) We would need to create the environment tarball
>

There is a script in-tree to create a self-contained archive containing
MSVC and the Windows SDK. Instructions at
https://gecko.readthedocs.io/en/latest/build/buildsystem/toolchains.html#windows.
You only need MozillaBuild and the resulting archive to build Firefox on a
fresh Windows install.


> c) We would need to make sure everything runs on windows
>
> None of those are insurmountable problems, but this has been a small side
> project which hasn't taken too much of our time. The work to get MSVC
> working is much more substantial than the work to get macOS and linux
> working.
>
> Getting it such that linux distributes to darwin machines, and darwin
> distributes to darwin machines is much easier. It wasn't done by us because
> distributing jobs to people's laptops seems kinda silly, especially because
> they may have a wifi connection, and as far as I know, basically every mac
> in this office is a macbook.
>
> The darwin machines simply need to add an `icecc` user, to run the build
> jobs in, and then darwin-compatible toolchains need to be distributed to
> all building machines.
>
> On Mon, Jul 4, 2016 at 7:26 PM, Xidorn Quan  wrote:
>
> > I hope it could support MSVC one day as well, and support distributing any
> > job to macOS machines as well.
> >
> > In my case, I use Windows as my main development environment, and I have
> > a personally powerful enough MacBook Pro. (Actually I additionally have
> > a retired MBP which should still work.) And if it is possible to
> > distribute Windows builds to Linux machines, I would probably consider
> > purchasing another machine for Linux.
> >
> > I would expect MSVC to be something not too hard to run with wine. When
> > I was in my university, I ran VC6 compiler on Linux to test my homework
> > without much effort. I guess the situation shouldn't be much worse with
> > VS2015. Creating the environment tarball may need some work, though.
> >
> > - Xidorn
> >
> > On Tue, Jul 5, 2016, at 07:36 AM, Benoit Girard wrote:
> > > In my case I'm noticing an improvement with my mac distributing jobs
> to a
> > > single Ubuntu machine but not compiling itself (Right now we don't
> > > support
> > > distributing mac jobs to other macs, primarily because we just want to
> > > maintain one homogeneous cluster).
> > >
> > > On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch
> > > 
> > > wrote:
> > >
> > > > On 04/07/2016 22:06, Benoit Girard wrote:
> > > >
> > > >> So to emphasize, if you compile a lot and only have one or two
> > machines
> > > >> on your 100 Mbps or 1 Gbps LAN you'll still see big benefits.
> > > >>
> > > >
> > > > I don't understand how this benefits anyone with just one machine
> > (that's
> > > > compatible...) - there's no other machines to delegate compile tasks
> > to (or
> > > > to fetch prebuilt blobs from). Can you clarify? Do you just mean "one
> > extra
> > > > machine"? Am I misunderstanding how this works?
> > > >
> > > > ~ Gijs
> > > >
> > > >
> > > >
> > > >> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch <
> > gijskruitbo...@gmail.com
> > > >> >
> > > >> wrote:
> > > >>
> > > >> What about people not lucky enough to (regularly) work in an office,
> > > >>> including but not limited to our large number of volunteers? Do we
> > intend
> > > >>> to set up something public for people to use?
> > > >>>
> > > >>> ~ Gijs
> > > >>>
> > > >>>
> > > >>> On 04/07/2016 20:09, Michael Layzell wrote:
> > > >>>
> > > >>> If you saw the platform lightning talk by Jeff and Ehsan in London,
> > you
> > >  will know that in the Toronto office, we have set up a distributed
> > >  compiler
> > >  called `icecc`, which allows us to perform a clobber build of
> > >  mozilla-central in around 3:45. After some work, we have managed
> to
> > get
> > >  it
> > >  so that macOS computers can also dispatch cross-compiled jobs to
> the
> > >  network, have streamlined the macOS install process, and have
> > refined
> > >  the
> > >  documentation some more.
> > > 
> > >  If you are in the Toronto office, and running a macOS or Linux
> > machine,
> > >  getting started using icecream is as easy as following the
> > instructions
> > >  on
> > >  the wiki:
> > > 
> > > 
> > > 
> >
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
> > > 
> > >  If you are in another office, then I suggest that your office
> > starts an
> > >  icecream cluster! Simply choose one linux desktop in the office,
> > run the
> > >  scheduler on it, and put its IP in the Wiki, then everyone can
> > connect
> > >  to
> > >  the network and get fast builds!
> > > 
> > >  If y

Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-05 Thread Michael Layzell
I'm certain it's possible to get a windows build working, the problem is
that:

a) We would need to modify the client to understand cl-style flags (I don't
think it does right now)
b) We would need to create the environment tarball
c) We would need to make sure everything runs on windows

None of those are insurmountable problems, but this has been a small side
project which hasn't taken too much of our time. The work to get MSVC
working is much more substantial than the work to get macOS and linux
working.

Getting it such that linux distributes to darwin machines, and darwin
distributes to darwin machines is much easier. It wasn't done by us because
distributing jobs to people's laptops seems kinda silly, especially because
they may have a wifi connection, and as far as I know, basically every mac
in this office is a macbook.

The darwin machines simply need to add an `icecc` user, to run the build
jobs in, and then darwin-compatible toolchains need to be distributed to
all building machines.

On Mon, Jul 4, 2016 at 7:26 PM, Xidorn Quan  wrote:

> I hope it could support MSVC one day as well, and support distributing any
> job to macOS machines as well.
>
> In my case, I use Windows as my main development environment, and I have
> a personally powerful enough MacBook Pro. (Actually I additionally have
> a retired MBP which should still work.) And if it is possible to
> distribute Windows builds to Linux machines, I would probably consider
> purchasing another machine for Linux.
>
> I would expect MSVC to be something not too hard to run with wine. When
> I was in my university, I ran VC6 compiler on Linux to test my homework
> without much effort. I guess the situation shouldn't be much worse with
> VS2015. Creating the environment tarball may need some work, though.
>
> - Xidorn
>
> On Tue, Jul 5, 2016, at 07:36 AM, Benoit Girard wrote:
> > In my case I'm noticing an improvement with my mac distributing jobs to a
> > single Ubuntu machine but not compiling itself (Right now we don't
> > support
> > distributing mac jobs to other macs, primarily because we just want to
> > maintain one homogeneous cluster).
> >
> > On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch
> > 
> > wrote:
> >
> > > On 04/07/2016 22:06, Benoit Girard wrote:
> > >
> > >> So to emphasize, if you compile a lot and only have one or two
> machines
> > >> on your 100 Mbps or 1 Gbps LAN you'll still see big benefits.
> > >>
> > >
> > > I don't understand how this benefits anyone with just one machine
> (that's
> > > compatible...) - there's no other machines to delegate compile tasks
> to (or
> > > to fetch prebuilt blobs from). Can you clarify? Do you just mean "one
> extra
> > > machine"? Am I misunderstanding how this works?
> > >
> > > ~ Gijs
> > >
> > >
> > >
> > >> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch <
> gijskruitbo...@gmail.com
> > >> >
> > >> wrote:
> > >>
> > >> What about people not lucky enough to (regularly) work in an office,
> > >>> including but not limited to our large number of volunteers? Do we
> intend
> > >>> to set up something public for people to use?
> > >>>
> > >>> ~ Gijs
> > >>>
> > >>>
> > >>> On 04/07/2016 20:09, Michael Layzell wrote:
> > >>>
> > >>> If you saw the platform lightning talk by Jeff and Ehsan in London,
> you
> >  will know that in the Toronto office, we have set up a distributed
> >  compiler
> >  called `icecc`, which allows us to perform a clobber build of
> >  mozilla-central in around 3:45. After some work, we have managed to
> get
> >  it
> >  so that macOS computers can also dispatch cross-compiled jobs to the
> >  network, have streamlined the macOS install process, and have
> refined
> >  the
> >  documentation some more.
> > 
> >  If you are in the Toronto office, and running a macOS or Linux
> machine,
> >  getting started using icecream is as easy as following the
> instructions
> >  on
> >  the wiki:
> > 
> > 
> > 
> https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
> > 
> >  If you are in another office, then I suggest that your office
> starts an
> >  icecream cluster! Simply choose one linux desktop in the office,
> run the
> >  scheduler on it, and put its IP in the Wiki, then everyone can
> connect
> >  to
> >  the network and get fast builds!
> > 
> >  If you have questions, myself, BenWa, and jeff are probably the
> ones to
> >  talk to.
> > 
> > 
> >  ___
> > >>> dev-platform mailing list
> > >>> dev-platform@lists.mozilla.org
> > >>> https://lists.mozilla.org/listinfo/dev-platform
> > >>>
> > >>>
> > > ___
> > > dev-platform mailing list
> > > dev-platform@lists.mozilla.org
> > > https://lists.mozilla.org/listinfo/dev-platform
> > >
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozi

Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Xidorn Quan
I hope it could support MSVC one day as well, and support distributing any
job to macOS machines as well.

In my case, I use Windows as my main development environment, and I have
a personally powerful enough MacBook Pro. (Actually I additionally have
a retired MBP which should still work.) And if it is possible to
distribute Windows builds to Linux machines, I would probably consider
purchasing another machine for Linux.

I would expect MSVC to be something not too hard to run with wine. When
I was in my university, I ran VC6 compiler on Linux to test my homework
without much effort. I guess the situation shouldn't be much worse with
VS2015. Creating the environment tarball may need some work, though.

- Xidorn

On Tue, Jul 5, 2016, at 07:36 AM, Benoit Girard wrote:
> In my case I'm noticing an improvement with my mac distributing jobs to a
> single Ubuntu machine but not compiling itself (Right now we don't
> support
> distributing mac jobs to other macs, primarily because we just want to
> maintain one homogeneous cluster).
> 
> On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch
> 
> wrote:
> 
> > On 04/07/2016 22:06, Benoit Girard wrote:
> >
> >> So to emphasize, if you compile a lot and only have one or two machines
> >> on your 100 Mbps or 1 Gbps LAN you'll still see big benefits.
> >>
> >
> > I don't understand how this benefits anyone with just one machine (that's
> > compatible...) - there's no other machines to delegate compile tasks to (or
> > to fetch prebuilt blobs from). Can you clarify? Do you just mean "one extra
> > machine"? Am I misunderstanding how this works?
> >
> > ~ Gijs
> >
> >
> >
> >> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch  >> >
> >> wrote:
> >>
> >> What about people not lucky enough to (regularly) work in an office,
> >>> including but not limited to our large number of volunteers? Do we intend
> >>> to set up something public for people to use?
> >>>
> >>> ~ Gijs
> >>>
> >>>
> >>> On 04/07/2016 20:09, Michael Layzell wrote:
> >>>
> >>> If you saw the platform lightning talk by Jeff and Ehsan in London, you
>  will know that in the Toronto office, we have set up a distributed
>  compiler
>  called `icecc`, which allows us to perform a clobber build of
>  mozilla-central in around 3:45. After some work, we have managed to get
>  it
>  so that macOS computers can also dispatch cross-compiled jobs to the
>  network, have streamlined the macOS install process, and have refined
>  the
>  documentation some more.
> 
>  If you are in the Toronto office, and running a macOS or Linux machine,
>  getting started using icecream is as easy as following the instructions
>  on
>  the wiki:
> 
> 
>  https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream
> 
>  If you are in another office, then I suggest that your office starts an
>  icecream cluster! Simply choose one linux desktop in the office, run the
>  scheduler on it, and put its IP in the Wiki, then everyone can connect
>  to
>  the network and get fast builds!
> 
>  If you have questions, myself, BenWa, and jeff are probably the ones to
>  talk to.
> 
> 
>  ___
> >>> dev-platform mailing list
> >>> dev-platform@lists.mozilla.org
> >>> https://lists.mozilla.org/listinfo/dev-platform
> >>>
> >>>
> > ___
> > dev-platform mailing list
> > dev-platform@lists.mozilla.org
> > https://lists.mozilla.org/listinfo/dev-platform
> >
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Benoit Girard
In my case I'm noticing an improvement with my mac distributing jobs to a
single Ubuntu machine but not compiling itself (Right now we don't support
distributing mac jobs to other macs, primarily because we just want to
maintain one homogeneous cluster).

On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch 
wrote:

> On 04/07/2016 22:06, Benoit Girard wrote:
>
>> So to emphasize, if you compile a lot and only have one or two machines
>> on your 100 Mbps or 1 Gbps LAN you'll still see big benefits.
>>
>
> I don't understand how this benefits anyone with just one machine (that's
> compatible...) - there's no other machines to delegate compile tasks to (or
> to fetch prebuilt blobs from). Can you clarify? Do you just mean "one extra
> machine"? Am I misunderstanding how this works?
>
> ~ Gijs
>
>
>
>> On Mon, Jul 4, 2016 at 4:39 PM, Gijs Kruitbosch > >
>> wrote:
>>
>> What about people not lucky enough to (regularly) work in an office,
>>> including but not limited to our large number of volunteers? Do we intend
>>> to set up something public for people to use?
>>>
>>> ~ Gijs
>>>
>>>
>>> On 04/07/2016 20:09, Michael Layzell wrote:
>>>
>>> If you saw the platform lightning talk by Jeff and Ehsan in London, you
 will know that in the Toronto office, we have set up a distributed
 compiler
 called `icecc`, which allows us to perform a clobber build of
 mozilla-central in around 3:45. After some work, we have managed to get
 it
 so that macOS computers can also dispatch cross-compiled jobs to the
 network, have streamlined the macOS install process, and have refined
 the
 documentation some more.

 If you are in the Toronto office, and running a macOS or Linux machine,
 getting started using icecream is as easy as following the instructions
 on
 the wiki:


 https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream

 If you are in another office, then I suggest that your office starts an
 icecream cluster! Simply choose one linux desktop in the office, run the
 scheduler on it, and put its IP in the Wiki, then everyone can connect
 to
 the network and get fast builds!

 If you have questions, myself, BenWa, and jeff are probably the ones to
 talk to.


 ___
>>> dev-platform mailing list
>>> dev-platform@lists.mozilla.org
>>> https://lists.mozilla.org/listinfo/dev-platform
>>>
>>>
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Michael Layzell
I'm pretty sure he means one extra machine. For example, if you have a
laptop and a desktop, just adding the desktop into the network at home will
still dramatically improve build times (I think).

On Mon, Jul 4, 2016 at 5:12 PM, Gijs Kruitbosch 
wrote:

> On 04/07/2016 22:06, Benoit Girard wrote:
>
>> So to emphasize, if you compile a lot and only have one or two machines
>> on your 100 Mbps or 1 Gbps LAN you'll still see big benefits.
>>
>
> I don't understand how this benefits anyone with just one machine (that's
> compatible...) - there's no other machines to delegate compile tasks to (or
> to fetch prebuilt blobs from). Can you clarify? Do you just mean "one extra
> machine"? Am I misunderstanding how this works?
>
> ~ Gijs


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Gijs Kruitbosch

On 04/07/2016 22:06, Benoit Girard wrote:

So to emphasize, if you compile a lot and only have one or two machines on your
100 Mbps or 1 Gbps LAN you'll still see big benefits.


I don't understand how this benefits anyone with just one machine 
(that's compatible...) - there's no other machines to delegate compile 
tasks to (or to fetch prebuilt blobs from). Can you clarify? Do you just 
mean "one extra machine"? Am I misunderstanding how this works?


~ Gijs






Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Benoit Girard
This barely works in an office with a 10 MB/sec wireless uplink. Ideally you
want machines to be accessible on a gigabit LAN. It's more about bandwidth
throughput than latency AFAIK, i.e. can you *upload* dozens of 2-4 MB
compressed preprocessed files faster than you can compile them locally? I'd
imagine that unless you can get a reliable 50 MB/sec of upload throughput, you
probably won't benefit from connecting to a remote cluster.

However, the good news is that you can see a lot of benefit from having a
network of just one machine! In my case, my Linux desktop can compile a Mac
build faster than my top-of-the-line 2013 MacBook Pro, and with a network
of two machines it's drastically faster. A cluster of 12 machines is nice,
but you're getting diminishing returns on that until the build system gets
better.

I'd imagine distributed object caching will have a similar bandwidth
problem, however users tend to have better download speeds than upload
speeds.

So to emphasize, if you compile a lot and only have one or two machines on
your 100 Mbps or 1 Gbps LAN, you'll still see big benefits.
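The break-even argument above can be sketched as a quick back-of-envelope calculation. The numbers below are hypothetical, not measurements from this thread, and the model ignores scheduling overhead and queueing; it only compares upload time plus remote compile time against local compile time:

```python
# Back-of-envelope check: is it worth shipping a compressed preprocessed
# translation unit to a remote builder instead of compiling it locally?

def worth_distributing(file_mb, upload_mbps, local_compile_s, remote_compile_s):
    """Return True if upload + remote compile beats compiling locally.

    file_mb:          size of the compressed preprocessed source, in megabytes
    upload_mbps:      available upload throughput, in megabits per second
    local_compile_s:  time to compile the file locally, in seconds
    remote_compile_s: time to compile it on the remote builder, in seconds
    """
    upload_s = file_mb * 8 / upload_mbps  # megabytes -> megabits, then / rate
    return upload_s + remote_compile_s < local_compile_s

# A 3 MB preprocessed file over a 10 Mbit/s uplink takes ~2.4 s just to
# upload; if it compiles locally in ~2 s, distributing it is a loss.
print(worth_distributing(3, 10, 2.0, 1.0))    # slow office uplink: False
# On a gigabit LAN the upload costs ~0.024 s, so the remote builder wins.
print(worth_distributing(3, 1000, 2.0, 1.0))  # gigabit LAN: True
```

This is why the thread keeps stressing LAN-class throughput: the per-file upload cost has to be small relative to the compile time saved.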



Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread David Burns
Yes!

As part of the build project work that I regularly email this list about[1],
we have it on our roadmap to make the same distributed cache that we use in
automation available to engineers who are working on C++ code. We have
completed our rewrite and will be putting the initial work through try over
the next fortnight to make sure we haven't regressed anything. After that, we
will work towards making it available to engineers before the end of Q3 (at
least on one platform).

David


[1]
https://groups.google.com/forum/#!topicsearchin/mozilla.dev.platform/Build$20System$20Project$20



Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Ralph Giles
On Mon, Jul 4, 2016 at 1:39 PM, Gijs Kruitbosch
 wrote:

> What about people not lucky enough to (regularly) work in an office,
> including but not limited to our large number of volunteers? Do we intend to
> set up something public for people to use?

By all accounts, the available distributed compilers aren't very good
at hiding latency. The build servers need to be on the local LAN to
help much.

More generally, we have artifact builds for developers who don't need
to change C++ code, and there are experiments happening to see whether
the build can pull smaller pieces from the S3 build cache for those
who do.

 -r
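For reference, artifact builds are enabled with a one-line mozconfig entry (`--enable-artifact-builds` is the documented option; this is a minimal sketch of a mozconfig, not a complete one):

```
# .mozconfig — skip local C++/Rust compilation and download prebuilt
# compiled artifacts from automation instead (front-end work only).
ac_add_options --enable-artifact-builds
```

With this set, `mach build` fetches compiled binaries matching your tree's base revision rather than invoking the compiler.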


Re: Faster gecko builds with IceCC on Mac and Linux

2016-07-04 Thread Gijs Kruitbosch
What about people not lucky enough to (regularly) work in an office, 
including but not limited to our large number of volunteers? Do we 
intend to set up something public for people to use?


~ Gijs

On 04/07/2016 20:09, Michael Layzell wrote:

If you saw the platform lightning talk by Jeff and Ehsan in London, you
will know that in the Toronto office, we have set up a distributed compiler
called `icecc`, which allows us to perform a clobber build of
mozilla-central in around 3:45. After some work, we have managed to get it
so that macOS computers can also dispatch cross-compiled jobs to the
network, have streamlined the macOS install process, and have refined the
documentation some more.

If you are in the Toronto office, and running a macOS or Linux machine,
getting started using icecream is as easy as following the instructions on
the wiki:
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Using_Icecream

If you are in another office, then I suggest that your office starts an
icecream cluster! Simply choose one Linux desktop in the office, run the
scheduler on it, and put its IP in the wiki, then everyone can connect to
the network and get fast builds!

If you have questions, myself, BenWa, and jeff are probably the ones to
talk to.
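The per-office setup described above can be sketched roughly as follows. This assumes the icecream packages are already installed; `icecc-scheduler` and `iceccd` are the daemons shipped by the icecc project, but the exact flags may vary between versions and distributions, so treat this as a sketch rather than a recipe:

```
# On the one machine chosen as the office scheduler:
icecc-scheduler -d                 # run the scheduler in the background

# On every machine joining the cluster (pointing at the scheduler's IP):
iceccd -d -s <scheduler-ip>

# Then route compilations through the icecc wrapper, e.g. in a mozconfig:
export CC="icecc gcc"
export CXX="icecc g++"
```

The MDN page linked above covers the Mozilla-specific details, including cross-compiling for macOS clients.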





  1   2   >