https://sourceware.org/bugzilla/show_bug.cgi?id=22831
--- Comment #35 from Luke Kenneth Casson Leighton <lkcl at lkcl dot net> ---

On Sat, Jul 23, 2022 at 3:04 PM amodra at gmail dot com
<sourceware-bugzi...@sourceware.org> wrote:

> And "new algorithm needed" is really saying "rewrite the linker".

i mention this very early on in this bug report: back in the early 90s it
was indeed rewritten, to remove Dr Stallman's algorithms, on the flawed
assumption "640k^H^H^H^H 4GB should be enough for anybody".

> That's low priority. Also, there are other linkers, eg. gold and lld,
> that are much newer than ld.bfd.

gold suffers from similar problems - i was able to make it keel over just
as easily. i've not heard of lld before: if it likewise makes the same
flawed assumption that going into swap is acceptable, it will likewise
result in the exact same problem.

> They don't do much better at memory usage, do they?

if Dr Stallman's carefully-crafted original algorithms had been left in
place - which, just as in gcc, made *really certain* to only use *resident*
RAM - we would not be having this conversation, as this bug report would
not have needed to be raised.

the fundamentally flawed assumption is that it is "ok to use swap". the
sheer overwhelming amount of cross-referencing required in a linker *100%
guarantees* that even 10 kbytes over resident RAM will result in thrashing.
any rewrite or redesign that does not take that into account is 100%
guaranteed to be problematic.

this is just how it is: it's basic fundamental computer science that a
linker *has* to jump around across the entirety of *all* of the objects it
is trying to link. this makes the "Working Set" *equal* to 100% of the
available Swap, which is unfortunately the very definition of "thrash
conditions".

-- 
You are receiving this mail because:
You are on the CC list for the bug.
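
[editor's note: the working-set argument above can be illustrated with a toy
simulation. this is a hedged sketch, not part of any linker: it models an
LRU page cache with the hypothetical sizes RESIDENT and TOTAL chosen purely
for illustration, and compares the miss (fault) rate of accesses scattered
across a region 10x larger than resident RAM - the cross-referencing pattern
described above - against accesses confined to what fits in RAM.]

```python
from collections import OrderedDict
import random

def fault_rate(accesses, resident_pages):
    """Simulate an LRU page cache of `resident_pages` slots and return
    the fraction of accesses that miss (i.e. would have to hit swap)."""
    cache = OrderedDict()
    faults = 0
    for page in accesses:
        if page in cache:
            cache.move_to_end(page)        # recently used: protect from eviction
        else:
            faults += 1                    # page not resident: a fault
            cache[page] = None
            if len(cache) > resident_pages:
                cache.popitem(last=False)  # evict least-recently-used page
    return faults / len(accesses)

random.seed(0)
RESIDENT = 1_000   # hypothetical pages of resident RAM
TOTAL = 10_000     # hypothetical working set, 10x resident RAM

# cross-referencing pattern: any access may touch any object, anywhere
thrash = fault_rate([random.randrange(TOTAL) for _ in range(100_000)], RESIDENT)

# resident-bounded pattern: accesses confined to what fits in RAM
ok = fault_rate([random.randrange(RESIDENT) for _ in range(100_000)], RESIDENT)

print(f"working set 10x RAM:  fault rate ~ {thrash:.2f}")
print(f"working set fits RAM: fault rate ~ {ok:.3f}")
```

with these toy numbers the scattered pattern faults on roughly 90% of
accesses (hit probability is only RESIDENT/TOTAL), while the bounded pattern
faults only on first touch - the order-of-magnitude gap is the thrashing
being described.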