On 17 Aug 1999 [EMAIL PROTECTED] wrote:

> * static binaries are bigger (disk and RAM, which is a problem)
Could you explain the RAM part? I don't see how it affects RAM usage,
given my understanding of the way the linker works.

> * Dynamic linking gives one point of failure, but also one
>   point to repair and not 50 bins to revert to the previous
>   version

I can only assume you aren't maintaining production servers. Any option
that reduces failure points is a very valuable option, and the cost
would have to be quite large to justify not taking it. Upgrades to core
binaries aren't needed very often, and when they are, they are handled
by the package system, just like every other binary. It's not "50 bins";
it's however many packages are affected, certainly not one package per
file.

> * static binaries must be updated more often and are bigger,
>   increasing bandwidth and download costs
>
> The only benefit from static linking is that removing libc doesn't
> break the system anymore. And that's not worth the space, RAM, time,
> bandwidth, and download costs. If the lib is broken, all statically
> linked binaries will break with the update too.

It absolutely is. Not for a desktop, but definitely for a
remotely-administered machine. There are lots of people who operate
machines hosted by some company on the other side of the country.

I also don't understand your point about "if the lib is broken...".
Why do you think this?

> No script using /bin/sh may use any bashisms, and if they use
> /bin/bash, having sash won't help.

So they should use /bin/sh and skip the bashisms.
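To illustrate "skip the bashisms": a script whose shebang is /bin/sh
should stick to POSIX constructs so it runs under any sh, including a
minimal one like sash's. A small sketch (the specific examples are my
own, not from this thread):

```shell
#!/bin/sh
# Bashism: [[ ... ]] pattern tests are not POSIX.
#   if [[ $name == web* ]]; then ...
# Portable equivalent: case with shell patterns.
name="webserver01"
case "$name" in
    web*) echo "matches" ;;     # prints "matches"
    *)    echo "no match" ;;
esac

# Bashism: ${var//pattern/replacement} substitution is not POSIX.
#   echo "${path//\//:}"
# Portable equivalent: pipe through sed.
path="/usr/local/bin"
echo "$path" | sed 's|/|:|g'    # prints ":usr:local:bin"
```

Running the script with dash or another strict sh (rather than bash) is
an easy way to catch such constructs before they bite.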

