On 2020/02/23 16:29:20, dak wrote:
> On 2020/02/23 16:23:34, hanwenn wrote:
> > On 2020/02/23 16:05:08, dak wrote:
> > > On 2020/02/23 15:54:54, hanwenn wrote:
> > > > I think this is worth it because it simplifies the build system, and
> > > > puts the locking in the place where we actually access the resource.
> > >
> > > Is there any indication that letting Make run multiple instances of
> > > lilypond-book, with every instance except one locking up at a time, is
> > > going to be a net win for performance?
> >
> > input/regression/lilypond-book:
> >
> >   rm -rf out-tst; time make out=tst local-test -j4 CPU_COUNT=4
> >
> > before
> >   real 1m16.588s
> >
> > after
> >   real 0m25.224s
>
> So the idea is not so much to run parallel instances of lilypond-book, but
> rather to let lilypond-book itself do the serialization.
>
> The net result will be that Make counts lilypond-book's use of 4 CPUs as
> just a single CPU, so unless the parallel makes run into a locking instance
> of lilypond-book, this will now result in a maximum of 7 jobs in parallel,
> right?
Correct.

I have another, separate plan, which is to do

  cat $(find . \( -name '*.itely' -o -name '*.tely' \) | grep -Ev '(out|out-www)') > concat.itely

(note the second -name and grep -E, which the sketch above needs to actually work), and then have a special rule run that through lilypond-book as a whole. That should also get rid of a bunch of inefficiencies.

https://codereview.appspot.com/555360043/
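For reference, the serialization idea discussed above — lilypond-book taking an exclusive lock around the shared resource instead of the build system serializing it — can be sketched in Python roughly as follows. This is a minimal illustration, not the actual lilypond-book code; the lock-file path and the `with_lock` helper are hypothetical names:

```python
# Sketch: serialize a critical section across concurrent processes with an
# exclusive file lock. Competing instances block in flock() and run one at
# a time, which is the queuing behavior described in the thread.
# LOCK_PATH and with_lock are hypothetical, for illustration only.
import fcntl

LOCK_PATH = "/tmp/lilypond-book.lock"  # hypothetical lock-file location

def with_lock(func, *args, **kwargs):
    """Run func while holding an exclusive lock on LOCK_PATH.

    fcntl.flock with LOCK_EX blocks until the lock is free; the lock is
    released when we unlock (or when the file handle is closed, e.g. if
    the process dies), so stale locks cannot wedge the build.
    """
    with open(LOCK_PATH, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)   # blocks until acquired
        try:
            return func(*args, **kwargs)
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)

# usage sketch: with_lock(process_snippets, snippet_list)
```

The point of putting the lock here rather than in the Makefiles is exactly what the first message says: the locking lives where the resource is actually accessed, so Make no longer needs special-case serialization rules.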