On 01/12/2010 12:14 PM, retard wrote:
Tue, 12 Jan 2010 05:34:49 -0500, Nick Sabalausky wrote:

"retard"<r...@tard.com.invalid>  wrote in message
news:hihgbe$qt...@digitalmars.com...
Mon, 11 Jan 2010 19:24:06 -0800, Walter Bright wrote:

dsimcha wrote:
Vote++.  I'm convinced that there's just a subset of programmers out
there that will not use any high-level programming model, no matter
how much easier it makes life, unless they're convinced it has
**zero** overhead compared to the crufty old C way.  Not negligible
overhead, not practically insignificant overhead for their use case,
not zero overhead in terms of whatever their most constrained
resource is but nonzero overhead in terms of other resources, but
zero overhead, period.

Then there are those who won't make any tradeoff in terms of safety,
encapsulation, readability, modularity, maintainability, etc., even
if it means their program runs 15x slower.  Why can't more
programmers take a more pragmatic attitude towards efficiency (among
other things)?
Yes, no one wants to just gratuitously squander massive resources, but is a binary that's a few hundred kilobytes larger (fine, even a few megabytes, given how cheap bandwidth and storage are nowadays) really going to make or break your app, especially if you get it working faster and/or with fewer bugs than you would have using some cruftier, older, lower-level language that produces smaller binaries?


I agree that a lot of the concerns are based on obsolete notions.
First off, I just bought another terabyte drive for $90. The first
hard drive I bought was $600 for 10 MB. A couple of years earlier I used
a 10 MB drive that cost $5000. If I look at what eats space on my lovely
terabyte drive, it ain't executables. It's music and pictures. I'd be
very surprised if I had a whole CD's worth of exe files.

A 1 TB spinning hard disk doesn't represent the current
state-of-the-art. I have Intel SSDs, and those are damn expensive
if you start to build e.g. a safe RAID 1+0 setup. For the price of
1000 GB of spinning disk, an SSD comes with 8..16 GB. Suddenly application
size starts to matter. For instance, my root partition seems to contain
9 GB worth of files, even though I've only installed a fairly minimal
graphical Linux environment to write some modern end-user applications.

Not that other OSes don't have their own forms of bloat, but from what
I've seen of Linux, an enormous amount of the system is stored as raw text
files. I wouldn't be surprised if converting those to sensible (i.e.
non-over-engineered) binary formats, or even just storing them all in a
run-of-the-mill zip format, would noticeably cut down on that footprint.
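
As a rough sanity check, you can estimate how much of that plain-text footprint is just redundancy by piping it through an ordinary compressor and comparing; the paths are box-specific and you may need root to read everything:

$ du -sh /etc
$ tar cf - /etc 2>/dev/null | gzip -9 | wc -c

The second number is roughly what the same data would take up in a run-of-the-mill compressed container.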

At least on Linux this is solved at the filesystem level. There are e.g.
read-only filesystems with lzma/xz support. Unfortunately stable
read-write filesystems don't utilize compression.
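
squashfs is the usual read-only example; something along these lines (assuming a squashfs-tools build with xz support, and root for the mount) packs a directory tree into a compressed image and mounts it read-only:

$ mksquashfs /usr/share usr-share.sqfs -comp xz
$ mount -o loop -t squashfs usr-share.sqfs /mnt

It's not a drop-in fix for a read-write root partition, which is exactly the limitation above.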

What's actually happening with configuration files is that parts of Linux
are moving to an XML-based configuration system. Seen stuff like hal or
policykit? Not only does XML consume more space, the decades-old, rock-solid
and stable Unix configuration-reading libraries aren't used anymore,
since we now have these over-hyped, slow, and buggy XML parsers written in
slow dynamic languages.
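
To make the comparison concrete, here is the same lookup done the old way against a key=value file and the new way against an XML file (example.conf and example.xml are made-up names):

$ grep '^timeout=' example.conf | cut -d= -f2
$ xmllint --xpath 'string(/config/timeout)' example.xml

Both work, but the second one drags an entire XML toolchain into what used to be a one-line grep.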

That sucks. I find editing .conf files sometimes arcane, but at least they are very readable and easy once you know (or look up) what is what. Compare that to the Windows registry... The great thing is that when all else fails, all you need is pico/nano/vi to change the settings. I would hate to have to do that with XML. I myself am also a buggy XML parser ;)

OTOH the configuration file ecosystem isn't that big. On two of my
systems 'du -sh /etc/' gives 9.8M and 27M. I doubt the files in the
hidden folders and .*rc files in my home directory are much larger. My
Windows 7 profile (it came preinstalled on my laptop) is already 1.5 GB
- I have absolutely no idea what's inside that binary blob, and I have
almost nothing installed.
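
For anyone who wants to check rather than guess, the hidden directories and .*rc files in $HOME can be totalled with GNU du; the glob just skips '.' and '..':

$ du -shc ~/.[!.]* 2>/dev/null | tail -1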

I seem to have an 11 GB setup (excluding home, of course). Here are some stats:

$ du /usr/sbin /usr/bin /usr/lib -s
30M     /usr/sbin
284M    /usr/bin (almost 3000 files)
2,4G    /usr/lib

I installed an enormous number of apps, but it seems the executables don't consume that much at all. It is all in the libraries. That suggests to me that it would matter more if shared libraries for D were better supported (which also means distributed by distros!).
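
A quick way to see why that matters for D in particular (hello.d and hello.c being hypothetical hello-world programs; exact sizes depend on the compiler and version): the D executable statically links druntime and Phobos, while the C one leans on the distro's shared libc:

$ dmd hello.d && ls -lh hello
$ gcc hello.c && ls -lh a.out

With properly supported and distro-shipped shared D libraries, that per-executable overhead would mostly move into /usr/lib, where it's paid once.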
