I work remotely, normally from my laptop, and I have a single (fairly
slow) desktop usable as a compile server. (I normally leave it off, but
when I'm doing a lot of compiling I'll turn it on. It's old and
power-hungry.)
I used distcc for a long time, but more recently have switched to icecream.
With distcc, the time to build standalone on the laptop > time to build
on the laptop using distcc with the compile server > time to build
standalone on the compile server. (So if I wanted the fastest builds,
I'd ditch the laptop and just do everything on the compile server.)
I haven't checked, but I would guess it's about the same story with icecc.
Both have given me numerous problems. distcc would fairly often get into
a state where it would spend far more time sending and receiving data
than it saved on compiling. I suspect it was some sort of
bufferbloat-type problem. I poked at it a little, setting queue sizes
and things, but never satisfactorily resolved it. I would just leave the
graphical distcc monitor open, and notice when things started to go south.
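For reference, the knobs I poked at live mostly in DISTCC_HOSTS: a
per-host job limit and lzo compression, the latter of which is supposed
to help on high-latency/low-throughput links. A sketch (the hostname is
a placeholder, and the numbers are guesses, not tuned values):

```shell
# Cap the remote at 4 concurrent jobs and compress traffic to it (lzo);
# keep a couple of local slots as well. "server" is a made-up hostname.
export DISTCC_HOSTS="server/4,lzo localhost/2"

# Keep -j at (or just above) the total slot count so distcc doesn't
# queue more work than the hosts will accept.
make -j6 CC="distcc gcc"
```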
With icecream, it's much more common to get complete failure -- every
compile command starts returning weird icecc error messages, and the
build slows way down because everything has to fail the icecc attempt
before it falls back to building locally. I've tried digging into it on
multiple occasions, to no avail; after some amount of restarting, it
magically resolves itself.
At least mostly -- I still get an occasional failure message here and
there, but it retries the build locally so it doesn't mess anything up.
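For the record, the restarting in question is just bouncing the daemons;
the service names below are what a systemd-based setup might use, and
vary by distro:

```shell
# Restart the local compile daemon, and (on whichever machine runs it)
# the scheduler. Service names are distro-dependent.
sudo systemctl restart iceccd
sudo systemctl restart icecc-scheduler

# To take icecc out of the picture entirely for one build, the compiler
# wrapper honors ICECC=no and just execs the real compiler locally.
ICECC=no make -j4
```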
I've also attempted to use a machine in the MTV office as an additional
lower priority compile server, with fairly disastrous results. This was
with distcc and a much older version of the build system, but it ended
up slowing down the build substantially.
I've long thought it would be nice to have some magical integration
between some combination of a distributed compiler, mercurial, and
ccache. You'd kick off a build, and it would predict object files that
you'd be needing in the future and download them into your local cache.
Then when the build got to that part, it would already have that object
file in its cache and use it. If the network transfer were too slow, the
build would just see a cache miss and rebuild it instead. (The optional
mercurial portion would be to accelerate knowing which files have and
have not changed, without needing to checksum them.)
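The lookup a build step would do under this scheme might look like the
sketch below. Everything here is made up for illustration -- "compile"
stands in for the real compiler invocation, and the cache layout is not
any real ccache format; the point is just that a prefetcher and the
build only meet at a local directory, so a slow network can only cause
cache misses, never stalls:

```shell
# Local object cache keyed by a hash of the source. A background
# prefetcher would drop predicted objects in here under the same keys.
objcache="$HOME/.objcache"
mkdir -p "$objcache"

build_object() {
    src="$1"; out="$2"
    # Key on source contents (a real version would hash the
    # preprocessed source plus flags).
    key=$(sha256sum < "$src" | cut -d' ' -f1)
    if [ -e "$objcache/$key" ]; then
        cp "$objcache/$key" "$out"    # hit: prefetched or prior build
    else
        compile "$src" -o "$out"      # miss: just build locally
        cp "$out" "$objcache/$key"    # populate the cache
    fi
}
```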
All of that is just for gaining some use of remote infrastructure over a
high latency/low throughput network.
On a related note, I wonder how much of a gain it would be to compile to
separate debug info files, and then transfer them using a binary diff (à
la rsync against some older local version) and/or (crazytalk here)
transfer them in a post-build step that you don't necessarily have to
wait for before running the binary. Think of it as a remote symbol
server, locally cached and eagerly populated but in the background.
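The splitting half of that already exists in stock binutils; only the
background-transfer half is speculative. A sketch, with made-up file and
host names:

```shell
# Split the debug info out of the binary and leave a debuglink behind.
# "firefox" and "buildhost" are placeholder names; the objcopy flags are
# standard binutils.
objcopy --only-keep-debug firefox firefox.debug
objcopy --strip-debug firefox
objcopy --add-gnu-debuglink=firefox.debug firefox

# Later -- possibly after the binary is already running -- fetch the
# debug info, letting rsync's delta transfer diff against whatever
# older copy is already present locally.
rsync -z buildhost:obj/firefox.debug ~/.symbols/firefox.debug
```

gdb will follow the .gnu_debuglink on its own, as long as the .debug
file ends up somewhere in its debug-file search path.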
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform