[EMAIL PROTECTED] wrote:
> This thread really does show the unfortunate direction that software 
> development has taken even in open source: The simplest package is a rube 
> goldberg-like conglomeration of pre-packaged code and requires 50 and 100 
> other packages, each one recursively depended on it's own set of libs and 
> scripts and packages!!!
>   

That's called "not reinventing the wheel."  Often it's a good thing.

One example: The zlib bug.  You may remember this one -- a bug in a
decompression routine that created a security hole.  A lot of packages
had zlib as a dependency.  While they were all affected by the bug,
fixing them was just a matter of replacing one shared library.

Other packages simplified things, in the way you suggest, by simply
including the zlib source code inside their own code.  This meant they
didn't have zlib as a dependency.  But it also meant that every one of
these packages had to be tracked down and fixed individually.

I'd argue that nine times out of ten, using a pre-packaged library is
both simpler and more reliable than rolling your own.  It also saves
space.  Why should every package carry around all the code needed to,
say, draw a window, when it can link to a single library that does it?
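Here's a minimal sketch of the shared-dependency idea, using Python's stdlib zlib module (which on most systems wraps the same shared libz). Every program that imports it picks up a fixed library automatically, instead of shipping its own copy of the decompression code:

```python
import zlib


def roundtrip(data: bytes) -> bytes:
    # Compress and decompress via the shared zlib implementation.
    return zlib.decompress(zlib.compress(data))


payload = b"the same libz serves every caller"
assert roundtrip(payload) == payload

# Every importer sees the one installed zlib; replace it once,
# and they are all fixed at once.
print(zlib.ZLIB_VERSION)
```

That's the whole point of the shared-library fix: patch libz on disk once, and everything dynamically linked against it is repaired, no per-package rebuilds needed.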

> Creating a darned index should definitely take less time than solving 500,000 
> equations with 500,000 unknowns about 100 times over, updating the silly 
> thing should be almost instantaneous!!!

Indexing is I/O-heavy, unlike equation-solving; it isn't a matter of
CPU power.  Until someone invents a mass storage medium where every
location can be read instantly, indexing is going to be time-consuming,
because most of the time is spent waiting for data to be read off disk.
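To make that concrete, here's a toy inverted index (a sketch, with made-up filenames). The per-word bookkeeping is trivial CPU work; nearly all the wall-clock time on a real corpus goes into the file reads:

```python
import os
import tempfile
from collections import defaultdict


def build_index(paths):
    """Map each word to the set of files containing it."""
    index = defaultdict(set)
    for path in paths:
        with open(path) as f:              # the slow part: disk reads
            for word in f.read().split():  # the fast part: CPU work
                index[word].add(os.path.basename(path))
    return index


# Tiny demo corpus written to a temp directory.
tmp = tempfile.mkdtemp()
docs = {"a.txt": "fast disk read", "b.txt": "slow disk seek"}
for name, text in docs.items():
    with open(os.path.join(tmp, name), "w") as f:
        f.write(text)

index = build_index(os.path.join(tmp, n) for n in docs)
assert index["disk"] == {"a.txt", "b.txt"}
assert index["seek"] == {"b.txt"}
```

Scale that loop up to a whole filesystem and the bottleneck is obvious: you can't index a byte you haven't read yet, and reading is what disks are slow at.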
