And who are you, and what have you done to test and then prove
your thesis?

Absolutely nothing, I must assume.

(BTW, your argument is weak and would be stronger if you tied it into
the faked moon landings.)

> I may be getting a little off-topic here, and I'd suggest this
> important topic be discussed in a separate thread some day, but come
> to think of it: the overhead induced by dynamic linking, symbol
> versioning and the crazy workarounds needed to make the dynamic
> linker remotely safe nowadays completely destroys the once good
> reasons for dynamic libraries.
> Static linking eliminates all the issues around symbol versions and
> around wrecking your system with a library update, and I'd even go
> as far as saying that static binaries are more or less independent
> of the distribution. Memory usage is no longer a strong argument
> either: we have shared tables nowadays, and in many cases statically
> linked programs need less RAM than dynamically linked ones.
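> 
> (To make this concrete, a minimal sketch -- the file names and the
> build commands in the comment are illustrative, not prescriptive:)
> 
>     /* hello.c -- build it both ways and compare:
>      *
>      *   cc -o hello-dyn hello.c             (dynamic, the default)
>      *   cc -static -o hello-static hello.c
>      *
>      * "ldd hello-dyn" lists the shared objects the loader has to
>      * resolve at run time; "ldd hello-static" reports "not a
>      * dynamic executable" -- there is nothing left to resolve. */
>     #include <stdio.h>
>     
>     int main(void)
>     {
>         puts("hello");
>         return 0;
>     }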
> 
> So, what are the remaining arguments against static linking?
> I agree there are programs which do not run well statically linked
> (the X server, for instance), but that's more or less a matter of
> design and can be worked around in most cases.
> 
> One specific point often raised is that if you have an update for a
> given library (a security update, for instance), you just need to
> rebuild that library and all programs depending on it will behave
> correctly.
> With static linking, on the other hand, you would have to relink
> all binaries and superset libraries depending on this library for
> the security fix to be effective.
> This point is losing significance for the following reasons:
> 
> 1) hot-swapping a library for a security fix implies that the ABI
> doesn't change, i.e. that the binaries on your system using this
> library can still find and call its functions exactly where they
> were told they would be.
> In many cases nowadays, bugs are fixed concurrently with version
> bumps (major & minor), which means all binaries have to be manually
> updated and recompiled anyway (see the sketch below).
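> 
> (A minimal sketch of such an ABI break -- the struct and field
> names are made up for illustration. The "fix" adds one field: the
> API is unchanged, but every size and offset a v1 binary hard-coded
> at compile time is now wrong, so just swapping in the new .so would
> corrupt memory:)
> 
>     #include <stdio.h>
>     #include <stddef.h>
>     
>     /* the same library struct, before and after a "minor" fix */
>     struct foo_v1 { int a; };
>     struct foo_v2 { int refcount; int a; };
>     
>     int main(void)
>     {
>         printf("v1: sizeof=%zu offsetof(a)=%zu\n",
>                sizeof(struct foo_v1), offsetof(struct foo_v1, a));
>         printf("v2: sizeof=%zu offsetof(a)=%zu\n",
>                sizeof(struct foo_v2), offsetof(struct foo_v2, a));
>         return 0;
>     }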
> 
> 2) compiling is not expensive any more (in most cases).
> On my Gentoo-based system, it takes just 2 hours to recompile the
> entire operating system including all user-space applications.
> Moore's law will decrease this time significantly over the years.
> Imagine it took just 5 minutes: would there still be a reason to
> have a hand-crafted dynamic linker carefully dissecting libraries
> and binaries, imposing a run-time cost and lots of security
> considerations?
> I'm not talking about beasts like LibreOffice, Chromium and others.
> There are better alternatives around, and where there aren't, there
> will be in the future. For huge packages, it should be simple enough
> to design the package manager to serve static binaries and, in case
> of a library fix, tell all clients to redownload the current
> version. So the only real worry here is keeping a clean build
> environment on the build servers (maintained by experts), instead of
> wasting hundreds of man-hours designing systems to cope with the
> DLL hell almost all Un*xes have become on the client side.
> 
> Why is Linux/BSD not popular on the desktop? Because of
> fragmentation. And one reason for fragmentation is that you can't
> use Debian packages on Ubuntu, mostly because of library
> incompatibilities. Another reason is the lack of good software, but
> that's just a matter of time. And if we can get more developers to
> work on useful things instead of worrying about library versioning,
> we could reach that goal sooner.
> 
> It may be a little far-fetched, but I'm sure it would be possible
> to have one package manager for all distributions if there were
> just the motivation to distribute statically linked binaries and
> not fuck things up with distribution-specific folder structures.
> 
> 3) security
> The issues with dynamic linking have been stated often enough
> [0][1][2][3][4].
> As far as I understand, the initial motivation of the OpenBSD
> project was to favor security over speed. It just puzzles me that
> issues like dynamic linking have not been discussed broadly or dealt
> with in the last few years, given these obvious negative
> implications.
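> 
> (One recurring issue as a minimal sketch -- the file name and the
> commands are illustrative: any symbol the loader resolves at run
> time can be interposed, an attack surface a statically linked
> binary simply does not have:)
> 
>     /* evil.c -- interpose libc's getuid() via the dynamic linker:
>      *
>      *   cc -shared -fPIC -o evil.so evil.c
>      *   LD_PRELOAD=./evil.so id
>      *
>      * "id" now reports uid 0, because the loader resolved getuid()
>      * to this copy before libc's. */
>     #include <sys/types.h>
>     
>     uid_t getuid(void)
>     {
>         return 0;
>     }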
> 
> Please let me know what you think.
> 
> Cheers
> 
> FRIGN
> 
> [0]: http://www.catonmat.net/blog/ldd-arbitrary-code-execution/
> [1]: http://benpfaff.org/papers/asrandom.pdf
> [2]: http://web.archive.org/web/20120509105723/http://teddziuba.com/2008/09/a-web-os-are-you-dense.html
> [3]: https://www.nth-dimension.org.uk/pub/BTL.pdf
> [4]: http://harmful.cat-v.org/software/dynamic-linking/versioned-symbols
> 
> -- 
> FRIGN <d...@frign.de>
