Re: prebuilt libraries?

2014-11-28 Thread Neil

Gregory Szorc wrote:


Please read http://www.conifersystems.com/whitepapers/gnu-make/.


after a command fails, |make| does not delete the partially built 
output file


.DELETE_ON_ERROR was added to address this.
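
For reference, a minimal sketch of the special target in a Makefile
(file names are illustrative; recipe lines start with a tab):

.DELETE_ON_ERROR:

# Without the line above, a recipe that dies halfway leaves a partial
# out.o behind, and the next run considers it up to date. With it,
# make deletes the target whenever its recipe exits non-zero.
out.o: out.c
	$(CC) -c -o $@ $<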

--
Warning: May contain traces of nuts.


Re: prebuilt libraries?

2014-11-28 Thread Thomas Zimmermann
Hi Gregory

 
 Please read http://www.conifersystems.com/whitepapers/gnu-make/. That is
 one of my go-to articles for explaining why make sucks.

I would not point people to this article, as it is flawed. I won't go
through every point it makes: some are relevant, others aren't, and some
probably depend on the user's expectations.

What I really criticize is that the authors are often simply ignorant of
prior work. There are several examples of this, but the worst is their
treatment of recursive make: they cite 'Recursive Make Considered
Harmful', yet insist on using recursive make and then complain about the
problems it leads to, ignoring the solutions RMCH itself provides.
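
For readers who haven't seen it, the fix RMCH actually proposes is a
single whole-project Makefile assembled from per-directory fragments, so
one make process sees the complete DAG. A minimal sketch, with
hypothetical file and module names:

# Top-level Makefile: one make process, one complete DAG.
MODULES := lib app
include $(patsubst %,%/module.mk,$(MODULES))

.DEFAULT_GOAL := all
all: $(TARGETS)

# lib/module.mk contributes rules with top-relative paths, e.g.:
#   TARGETS += lib/libfoo.a
#   lib/libfoo.a: lib/foo.o
#   	$(AR) rcs $@ $^
#
# app/module.mk can then name lib's output as a real prerequisite:
#   TARGETS += app/prog
#   app/prog: app/main.o lib/libfoo.a
#   	$(CC) -o $@ $^

Cross-directory dependencies stay visible to make, which is exactly what
recursive make throws away.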

Another point worth mentioning is that Conifer Systems sells a competing
build system. They have a financial interest in making make look bad,
rather than in merely improving the state of the art.

Best regards
Thomas

p.s. I'd like to stop this discussion soon, because it's getting quite
off-topic at this point.


Re: prebuilt libraries?

2014-11-27 Thread Thomas Zimmermann
Hi Michael,

Thank you for providing more information on the topic.

 [1] http://gittup.org/tup/build_system_rules_and_algorithms.pdf

That was an interesting read and the numbers are quite impressive.

However, I'm skeptical of the overall approach, as it seems to require a
considerable amount of manual maintenance: file-change lists, 'inverse
dependency' tracking, sorted file lists (if I understand correctly). It
looks like tup is solving a much simpler problem than make, while make
solves the whole problem.

 [2] 
 http://gittup.org/blog/2014/03/6-clobber-builds-part-1---missing-dependencies
 [3] 
 http://gittup.org/blog/2014/05/7-clobber-builds-part-2---fixing-missing-dependencies
 [4] 
 http://gittup.org/blog/2014/06/8-clobber-builds-part-3---other-clobber-causes
 
 (I haven't finished part 4 of the clobber series).
 
 Make is fine and we're not the only project of this size that uses make.
 The Linux kernel also does and achieves way better results here. The
 problem is in our build scripts.
 
 I agree the Linux kernel results are better, but for me it is still 
 insufficient: a no-op build of the linux tree takes 7.7s. In 
 comparison, a no-op build with tup takes 0.002s [5]. I'm not saying tup is 
 perfect, or the only way forward. But to say that we just need to spend a 
 little more time writing better Makefiles is a myth that needs to be 
 dispelled. Although there are certainly improvements that can be had with 
 that approach, there is no end to it. You will always have slow incremental 
 build times for a project of this size. You will always have cases where you 
 need to clobber and start over from scratch. And you will always try to find 
 clever hacks and work-arounds to avoid these issues (like cd'ing into a 
 subdirectory and running make there. Why is it that you can figure this out 
 faster than a machine?). That's why I outlined these rules clearly in the 
 paper, as well as the algorithms necessary to accomplish them.
 
 Anyway, this is probably getting off-topic for dev-platform. If you still 
 believe that we just need to add some extra goop to our Makefiles and we'd 
 all be happy, I'd love to meet up with you in Portland and hear why.

As I mentioned elsewhere, I don't know the build scripts in detail, so I
probably can't contribute much besides complaining. ;) If there's a
build-system session in Portland, I'd be interested in attending, though.

Best regards
Thomas

 
 -Mike
 
 [5] The comparison isn't entirely valid, since with make I'm only building 
 linux, but with tup I'm also building alsa, binutils, busybox, libfuse, gcc, 
 mplayer, ssh, uClibc, and others.
 



Re: prebuilt libraries?

2014-11-27 Thread Gregory Szorc

On 11/27/14 1:05 AM, Thomas Zimmermann wrote:

Hi Michael,

Thank you for providing more information on the topic.


[1] http://gittup.org/tup/build_system_rules_and_algorithms.pdf


That was an interesting read and the numbers are quite impressive.

However, I'm skeptical of the overall approach, as it seems to require a
considerable amount of manual maintenance: file-change lists, 'inverse
dependency' tracking, sorted file lists (if I understand correctly). It
looks like tup is solving a much simpler problem than make, while make
solves the whole problem.


No, make does not solve the whole problem. Tup solves more than make.

Please read http://www.conifersystems.com/whitepapers/gnu-make/. That is 
one of my go-to articles for explaining why make sucks. Mike Shal's 
writing is also on the short list.




Re: prebuilt libraries?

2014-11-26 Thread Gregory Szorc

On 11/26/14 10:58 AM, Thomas Zimmermann wrote:

Hi

On 26.11.2014 at 17:35, Michael Shal wrote:

Would it make sense to check in some of the libraries we build that we very
rarely change, and that don’t have a lot of configure dependencies people
twiddle with? (icu, pixman, cairo, vp8, vp9). This could speed up build
times in our infrastructure and for developers. This doesn’t have to be in
mozilla-central. mach could pick up a matching binary for the current
configuration from github or similar. Has anyone looked into this?


If the code for the library isn't changing, it's the build system's 
responsibility to ensure that nothing is done. One of the problems is that the 
build system we use (make) is so broken that we have to clobber frequently.


That's not true. We use CLOBBER because the build scripts are broken,
not make or the concepts behind make.

I once worked on a piece of software with similar requirements: tons of
auto-generated code and a make-based build system. The Makefiles were
badly written and didn't track dependencies correctly. Consequently, we
ran into exactly the same problems as with Gecko: we sometimes had to
clean up dependency files manually and often rebuilt too many files.

Once we fixed the Makefiles, building got fast and we never again had to
fix any dependencies by hand.
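
For what it's worth, the usual fix is to let the compiler generate the
dependency files instead of maintaining them by hand. A minimal sketch
of the standard GNU make pattern (file names are illustrative):

# Each compile also emits a .d file listing every header the
# translation unit actually includes; -MP adds phony targets so a
# deleted header doesn't break the build.
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

%.o: %.c
	$(CC) -MMD -MP -c -o $@ $<

# Pull in the generated dependency files; '-' silences the error on
# the first build, when they don't exist yet.
-include $(DEPS)

With this in place, editing a header rebuilds exactly the objects that
include it, with no hand-maintained dependency lists.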



For non-clobber builds, at least in our infra, caching can still help by 
sharing objects among machines (eg: for a newly spun up AWS instance with no 
previous objdir). However, caching still doesn't prevent make from doing lots 
of unnecessary work (reading Makefiles, building a DAG and stat()ing files) for 
things that haven't changed. In other words, if icu hasn't changed, the ideal 
incremental build time for that component is zero, but with make it will always 
be more than that.


This seems like it would speed up first-build and clobber build times, but
at least for me, it's incremental build performance I care about.


gps/glandium have some more fixes in the works, but unfortunately make wasn't 
designed to scale to projects of this size.


Make is fine and we're not the only project of this size that uses make.
The Linux kernel also does and achieves way better results here. The
problem is in our build scripts.


No, make is not fine. make is not capable of handling a single DAG the 
size of a large project like Firefox or Linux. That is a fact and not up 
for debate. Mike Shal can show you numbers.


Large projects hack around the scaling limitations of make by 
establishing multiple make contexts / DAGs. This is what Linux and 
Firefox do. Count how many invocations of `make` there are in both 
projects (hint: hundreds).


Any time you split the DAG, you need to manually reconstruct those lost 
dependencies through custom traversal order. Again, this is what Firefox 
and other large projects do.
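
To make that concrete, a minimal sketch of the split-DAG pattern
(directory names are hypothetical):

# Top-level Makefile: three make processes, three disjoint DAGs.
SUBDIRS := libs app tests

all:
	for d in $(SUBDIRS); do $(MAKE) -C $$d || exit 1; done

# The fact that 'app' links against output from 'libs' is not an edge
# any of the three DAGs contains; it exists only as the ordering of
# SUBDIRS above. When that implicit edge changes, nothing forces the
# right rebuild - which is one way you end up needing a clobber.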


We have many inefficiencies in the way we do traversal. But as long as 
there are separate DAGs and we are using a build system that doesn't 
know to clean up orphaned artifacts (make doesn't), we run into the 
possibility of clobbers being required.


Modern build tools like Tup do not have these limitations. They can 
handle an insanely large DAG just fine. And they can clean up orphaned 
artifacts and integrate artifact caching into their build process.


Make is not fine and anyone who thinks otherwise is fortunate to have 
never had to maintain a large or complex project while supporting 
anything resembling a modern and productive workflow.




Re: prebuilt libraries?

2014-11-26 Thread Gregory Szorc
On 11/26/14 2:36 PM, Mike Hommey wrote:
 On Wed, Nov 26, 2014 at 08:48:05AM -0800, Gregory Szorc wrote:
 In the high-level approach, you recognize what the final output is and jump
 straight to fetching that. e.g. if all you really need is libxul, you'll
 fetch libxul.so directly - none of this intermediate .o file business.

 Different audiences benefit from the different approaches.

 Firefox desktop, Fennec, and FxOS developers benefit mostly from a
 high-level approach, as they don't normally care about changing C++. They
 can jump straight to the end without paying a penalty of dealing with
 intermediaries.

 Gecko/C++ developers care about the low-level approach, as they'll be
 changing C++ things that invalidate the final output, so they'll be fetching
 intermediate objects out of necessity.

 Implementing an effective cache either way relies on several factors:

 * For a high-level cache, a build system capable of skipping intermediates
 to fetch the final entity (notably *not* make).
 * Consistent build environments across release automation and developer
 machines (otherwise the binaries are different and you sacrifice cache hit
 rate or accuracy).
 * People having fast internet connections to the cache (so that round trips
 don't take longer than building locally).
 * Fixing C++ header dependency hell so when C++ developers change something
 locally, it doesn't invalidate the world, causing excessive cache misses and
 local computation.
 * Writing to a globally distributed cache that is also read by release
 automation has some fun security challenges.
 * Having a database to correlate source tree state with build artifacts *or*
 a build system that is able to compute the equivalent DAG to formulate a
 cache key (something we can't do today).
 
 Does the audience that needs a high-level cache actually need a cache
 /that/ accurate? I wager using nightly builds would work for the vast
 majority of cases. We just need a build mode that doesn't build code,
 which we kinda sorta have, but it's broken at the moment (bug 1063880).

I agree: the last Nightly is probably sufficient for most use cases. I would
happily build this mode into the build system if I had time.
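
(To make the cache-key idea from the quoted list concrete: if the build
system could enumerate every input of a target, a content hash of those
inputs could serve as the key. A purely hypothetical sketch - the
variable names and file layout are made up, and make cannot actually
produce such an input list across its split DAGs, which is part of the
point above:

# Hypothetical: hash the exact inputs the DAG says libxul depends on.
LIBXUL_INPUTS := $(sort $(wildcard gfx/*.cpp layout/*.cpp))
CACHE_KEY := $(shell cat /dev/null $(LIBXUL_INPUTS) | sha256sum | cut -c1-16)

libxul-from-cache:
	@echo "would fetch the artifact for key $(CACHE_KEY)"

The same key computed in release automation and on a developer machine
would only match given the consistent build environments mentioned in
the list.)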


Re: prebuilt libraries?

2014-11-26 Thread Philip Chee
On 27/11/2014 00:03, Gregory Szorc wrote:

 Yes, people on this list generally care about C++. However, there is
 a very large group - most of the Firefox Team and a large number of
 Firefox OS developers - who don't. To them, C++, libxul, and other libs
 are 10+ minutes of CPU wall time before they can get to the things they
 care about (JS, CSS, XUL, etc.).

If all you care about is JS/CSS/XUL/XBL, then you only need to build once
with the flat chrome format; after that you can edit your JS/CSS/etc.
repeatedly without having to do another build.
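
For anyone who wants to try this, a sketch of the relevant mozconfig
line - I'm going from memory that --enable-chrome-format still accepts
'flat' on current trunk, so double-check before relying on it:

# mozconfig: package chrome as plain files instead of omni.ja, so
# front-end edits take effect without another build.
ac_add_options --enable-chrome-format=flat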

There is a parallel discussion somewhere about shipping non-omni-jar
builds so that volunteer contributors don't need a build environment
before they start hacking the front end.

Phil

-- 
Philip Chee phi...@aleytys.pc.my, philip.c...@gmail.com
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.


Re: prebuilt libraries?

2014-11-26 Thread Mark Finkle
- Original Message -

 On 11/26/14 6:55 PM, Philip Chee wrote:
  On 27/11/2014 00:03, Gregory Szorc wrote:
 
  Yes, people on this list generally care about C++. However, there is
  a very large group - most of the Firefox Team and a large number of
  Firefox OS developers - who don't. To them, C++, libxul, and other libs
  are 10+ minutes of CPU wall time before they can get to the things they
  care about (JS, CSS, XUL, etc.).
 
  If all you care about is JS/CSS/XUL/XBL, then you only need to build once
  with the flat chrome format; after that you can edit your JS/CSS/etc.
  repeatedly without having to do another build.
This doesn't work for Android either. 