On Wed, 2014-09-10 at 11:19 -0700, Mike Stump wrote:
> On Sep 9, 2014, at 8:14 AM, VandeVondele Joost
> <joost.vandevond...@mat.ethz.ch> wrote:
> > Attached is a further revision of the patch, now dealing with
> > check-c++.
>
> So when I last played in this area, I wanted a command-line tool that
> would bin-pack from the command line. I would grab the seconds for
> each .exp, and bin-pack into a fixed N, where N was the core count or
> something related to it (like N+1, N*1.1+1, N*2, ceil(N*1.1)). Then I
> would just have 60-100 bins, and that -j64 run would be nicer. The
> only reason I didn't push that patch up was that I didn't know of any
> such program. :-( I mention this in case someone knows of such a tool
> that is open source, hopefully GNU software. The idea being, if users
> have 64 cores or want the .exp files to be better balanced on their
> target, they can be bothered to download the tool; if they don't have
> it, they get something a little more static.
>
> Another way is to just make the buckets 60 seconds apiece. That way,
> on a nice box, it takes 60 seconds to test; otherwise, the test time
> is at most 1 minute unbalanced.
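For what it's worth, the bin-packing Mike describes is easy to sketch as a longest-processing-time greedy: sort the .exp files by measured runtime, then repeatedly drop the heaviest remaining file into the currently lightest of the N bins. The per-.exp timings below are made-up illustrative numbers, not real harvested data:

```python
def bin_pack(times, n_bins):
    """Greedy LPT packing: assign each (.exp name, seconds) pair to the
    currently lightest of n_bins bins, heaviest files first."""
    bins = [{"total": 0.0, "files": []} for _ in range(n_bins)]
    for name, secs in sorted(times.items(), key=lambda kv: -kv[1]):
        target = min(bins, key=lambda b: b["total"])  # lightest bin so far
        target["files"].append(name)
        target["total"] += secs
    return bins

# Hypothetical per-.exp timings (seconds) harvested from a prior run.
times = {"dg.exp": 900, "compile.exp": 600, "execute.exp": 450,
         "vect.exp": 300, "lto.exp": 150, "tree-ssa.exp": 120}
for b in bin_pack(times, 3):
    print(round(b["total"]), sorted(b["files"]))
```

With N tied to the core count this gives roughly balanced bins without needing an exact (NP-hard) packing.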
Perhaps this is a silly question, but has anyone tried going the whole
way and not having buckets at all, i.e. an extremely fine-grained
approach? Split the DejaGnu work into three phases:

(A) test discovery: write out a fine-grained Makefile in which *every*
testcase is its own make target (to the extreme limit of
parallelizability, e.g. at the per-input-file level)

(B) invoke the Makefile with -jN; each make target invokes DejaGnu for
an individual testcase and gets its own .log file

(C) combine the results

That way all the parallelization in (B) relies on "make" to do the
right thing in terms of the total number of running jobs, available
cores, load average etc., albeit with a performance hit for all of the
extra reinvocations of "expect" (and a reordering of the results, but
we could impose a stable sort in phase (C), I guess).

Has anyone tried this?

Hope this is constructive
Dave
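To make phase (A) concrete, here is a rough sketch of a generator that emits one make target per test input, so that plain "make -jN" does all the scheduling in phase (B). The test filenames are invented, and the runtest invocation is illustrative rather than GCC's real harness plumbing:

```python
import os

def write_makefile(test_files, makefile="check.mk", logdir="logs"):
    """Phase (A): emit a Makefile in which every testcase is its own
    make target, so a plain 'make -jN' does the scheduling in (B).
    Target naming and the runtest command line are illustrative."""
    logs = [os.path.join(logdir, t.replace("/", "_") + ".log")
            for t in test_files]
    with open(makefile, "w") as mk:
        mk.write(".PHONY: all\n")
        mk.write("all: " + " ".join(logs) + "\n\n")
        for t, log in zip(test_files, logs):
            mk.write(f"{log}:\n")
            # one DejaGnu/expect invocation per testcase, own .log file
            mk.write(f"\truntest --outdir {logdir} {t}\n\n")

write_makefile(["gcc.dg/pr12345.c", "gcc.dg/pr67890.c"])
print(open("check.mk").read())
```

Phase (C) would then just concatenate the per-test .log/.sum files from logdir in a stable order.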