On Fri, Jun 11, 2010 at 12:59:45PM -0400, Alex Puterbaugh wrote:
>Kris, Kris, Kris...
>
>So no one in the world ever reads files bigger than 2GB?  That's a
>silly notion.  You can't design an API based on what you think a
>programmer is _most likely_ to need, without consideration to other
>scenarios.  At least not if you want it to be scalable enough to be
>relevant in a few years.  The UNIX people understand that, and that's
>why UNIX-like operating systems are still in use after decades.
>
>As for the OP:  People have given a few good reasons why stderr is
>useful, and that's why it's around.  Couldn't have said it better
>myself.

Of course people read *files* bigger than 2GB. That's why file offsets are 64-bit values. It's the size of a single *read* that's limited to 2GB, and that's not a major obstacle. Even if it were commonplace for a program to read that much data into memory at once (which it most certainly is not), it would not be a major issue to split it into multiple reads.
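
For what it's worth, a minimal sketch of the "split it into multiple reads" point (my illustration, not part of the original exchange): loop over read(2) with a modest buffer so no single read ever approaches 2GB. The 1 MiB buffer size and the error handling are arbitrary choices.

	/* Sketch only: read an arbitrarily large file in chunks.
	 * Each read(2) transfers at most 1 MiB, so the file size
	 * can exceed 2GB without any single read doing so. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(int argc, char *argv[]) {
		char *buf;
		ssize_t n;
		int fd;

		if (argc != 2) {
			fprintf(stderr, "usage: %s file\n", argv[0]);
			return 1;
		}
		if ((fd = open(argv[1], O_RDONLY)) < 0) {
			perror("open");
			return 1;
		}
		if (!(buf = malloc(1 << 20))) {
			perror("malloc");
			return 1;
		}
		while ((n = read(fd, buf, 1 << 20)) > 0)
			;	/* process the n bytes in buf here */
		if (n < 0)
			perror("read");
		free(buf);
		close(fd);
		return 0;
	}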

--
Kris Maglione

If the programmer can simulate a construct faster than a compiler can
implement the construct itself, then the compiler writer has blown it
badly.
        --Guy Steele
