On Thu, 2026-04-16 at 00:17 +0200, Buggy Bob wrote:
> The bad thing is that once things do settle down, it immediately
> starts as many jobs as allowed by the --jobs parameter, which on a
> moderately loaded machine results in oscillations: suddenly starting
> a bunch of jobs pushes the load way above the limit, then new jobs
> get held back, so the machine idles waiting for the load estimate to
> go down, and then we go pedal to the metal, overwhelming it again and
> restarting the cycle. I mean, I can see why someone wanted to make it
> better.
> 
> Maybe the old algorithm you mentioned just needs a rate limit on
> adding new parallel jobs instead of a whole new, better load
> estimation though?

That is already not supposed to happen.  There is an adjustment that
prevents too many jobs from being started per second, by trying to
guess what the load average would be at this instant based on the last
value plus any jobs we've started.  The goal is to prevent the
"stampeding horses" scenario especially when GNU Make starts.
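Roughly, the idea is something like this (a Python sketch of the concept only, not GNU Make's actual C code; the function and parameter names here are illustrative):

```python
import os

def estimated_load(last_sample, jobs_started_since_sample):
    # Guess the instantaneous load: the last observed 1-minute
    # average plus every job we have started since that sample
    # was taken, since those jobs aren't reflected in it yet.
    return last_sample + jobs_started_since_sample

def may_start_job(max_load, jobs_started_since_sample):
    # os.getloadavg() returns the (1-min, 5-min, 15-min) averages.
    one_minute, _, _ = os.getloadavg()
    return estimated_load(one_minute, jobs_started_since_sample) < max_load
```

Because every newly started job raises the estimate immediately, this throttles job starts between load-average samples instead of waiting for the kernel's average to catch up.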

Obviously we can't take into account any jobs that were started
elsewhere, including other instances of GNU Make that may be running in
parallel, so if your build system uses the old-school recursive make
invocation model instead of the more modern "single make" method, the
load average computation will be much less accurate.

This does assume that the getloadavg() 1 minute average is being
updated at least once per second.  If that's not happening then the
current algorithm is not sufficient.
