On 12/09/12 12:58 PM, Zac Medico wrote:
> On 09/12/2012 09:33 AM, Hans de Graaff wrote:
>> On Wed, 2012-09-12 at 08:58 -0400, Ian Stakenvicius wrote:
>> 
>>> So essentially what you're saying here is that it might be
>>> worthwhile to look into parallelism as a whole and possibly
>>> come up with a solution that combines 'emerge --jobs' and
>>> build-system parallelism together to maximum benefit?
>> 
>> Forget about jobs and load average, and just keep starting jobs
>> all around until there is only 20% (or whatever tuneable amount)
>> free memory left. As far as I can tell this is always the real
>> bottleneck in the end. Once you hit swap overall throughput has
>> to go down quite a bit.
> 
> Well, I think it's still good to limit the number of jobs at
> least, since otherwise you could become overloaded with processes
> that don't consume a lot of memory at first but by the time they
> complete they have consumed much more memory than desired (using
> swap).

I think this would need to be dealt with by having the parent emerge
process monitor all children and block individual processes (i.e.
'make', 'ld', etc.) whenever resources are exhausted, resuming them
once resources become available again.  The big processes might still
hit swap, but at least they wouldn't keep running while swapped out.

I don't have a solution to the potential 'thrashing' issue, though,
which I expect would be a problem even if there's enough memory.
