>  i spoke with dr stallman a couple of weeks ago and confirmed that in
> the original version of ld that he wrote, he very very specifically
> made sure that it ONLY allocated memory up to the maximum *physical*
> resident available amount (i.e. only went into swap as an absolute
> last resort), and secondly that the number of object files loaded into
> memory was kept, again, to the minimum that the amount of spare
> resident RAM could handle.

How did ld back then determine how much physical memory was available,
and how might a modern reimplementation do it?

Perhaps one would use sysconf(_SC_PHYS_PAGES) or sysconf(_SC_AVPHYS_PAGES).
But which? I have often been annoyed by how "make -j" may attempt
several huge linking phases in parallel; if each link sizes itself
against the whole machine, several running at once will still push it
into swap.
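
For reference, both of those names are available at least with glibc
(they are extensions rather than strict POSIX), alongside _SC_PAGESIZE,
so a quick probe is easy to write. A minimal sketch:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size   = sysconf(_SC_PAGESIZE);
        long phys_pages  = sysconf(_SC_PHYS_PAGES);   /* total physical pages */
        long avail_pages = sysconf(_SC_AVPHYS_PAGES); /* pages currently free */

        if (page_size < 0 || phys_pages < 0 || avail_pages < 0) {
            perror("sysconf");
            return 1;
        }

        printf("total physical RAM: %lld MiB\n",
               (long long)phys_pages * page_size / (1024 * 1024));
        printf("available RAM:      %lld MiB\n",
               (long long)avail_pages * page_size / (1024 * 1024));
        return 0;
    }

As far as I can tell, _SC_AVPHYS_PAGES reports memory that is completely
free, not counting reclaimable page cache, so it tends to understate what
a linker could actually use.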

Would it be possible to put together a small script that demonstrates
ld's inefficient use of memory? It is easy enough to generate a big
object file from a tiny source file, and there are no doubt easy ways
of measuring how much memory a process uses, so it may be possible to
provide a more convenient test case than "please try building Firefox
and watch/listen as your SSD/HDD gets t(h)rashed". For example:

    /* Each array is 10,000,000 pointers: ~80 MB of initialized
       data on a 64-bit target, from a few lines of source. */
    extern void *a[], *b[];
    void *c[10000000] = { &a };
    void *d[10000000] = { &b };
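
On the measurement side, GNU time's "/usr/bin/time -v" already reports
"Maximum resident set size", which may be all we need. Failing that,
here is a rough sketch of a wrapper (my own, nothing ld itself provides)
that runs a command as a child process and reports its peak RSS via
getrusage():

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Usage: ./peakrss ld -r -o combined.o big1.o big2.o ... */
    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            execvp(argv[1], &argv[1]);
            perror("execvp");
            _exit(127);
        }

        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return 1;
        }

        /* Only the one waited-for child has terminated, so this is
           its usage; on Linux ru_maxrss is in kilobytes. */
        struct rusage ru;
        if (getrusage(RUSAGE_CHILDREN, &ru) < 0) {
            perror("getrusage");
            return 1;
        }
        fprintf(stderr, "peak RSS: %ld KiB\n", ru.ru_maxrss);
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }

With a couple of object files generated from sources along the lines
above, something like "./peakrss ld -r -o combined.o big1.o big2.o"
(and the equivalent gold and LLD invocations) would give a first,
crude comparison.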

If we had an easy test case, we could compare GNU ld, GNU gold, and LLD.

Edmund
