On Sat, Jun 12, 2010 at 03:03:27AM +0100, Connor Lane Smith wrote:
> On 12 June 2010 02:42, Kris Maglione <maglion...@gmail.com> wrote:
>> Which is, of course, entirely relevant. This is a non-issue. It's not a
>> practical limitation. It's already possible to read as much of a file into
>> memory as your memory will hold. The only limitation is that if you want
>> more than 2GB, you need to make multiple calls.
>
> My point was that that is false.

Your point was nothing. The sizes of size_t and ssize_t are irrelevant when you're talking about a function that isn't guaranteed or expected to return any specific amount of data on a given call. If you want to read any amount of data, you have to call read repeatedly, with values no larger than SSIZE_MAX, until you've read what you want or read returns 0 bytes or an error. It doesn't matter whether you want to read 5B or 5TB, or whether ssize_t is a 32-bit or a 128-bit value. The point is that the size of ssize_t has no practical bearing on the situation.
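
For the record, here's a minimal sketch of the loop I'm describing (the
readall name and the (size_t)-1 error convention are just for illustration,
not part of any existing interface):

#include <errno.h>
#include <limits.h>
#include <unistd.h>

/*
 * Read up to len bytes from fd into buf by calling read(2) in a loop,
 * asking for at most SSIZE_MAX bytes per call. Stops at end of file.
 * Returns the number of bytes read, or (size_t)-1 on error. len and
 * the return value may both exceed SSIZE_MAX; only the per-call
 * request is bounded by it.
 */
size_t
readall(int fd, void *buf, size_t len)
{
	char *p = buf;
	size_t total = 0;
	size_t want;
	ssize_t n;

	while (total < len) {
		want = len - total;
		if (want > SSIZE_MAX)
			want = SSIZE_MAX;
		n = read(fd, p + total, want);
		if (n == 0)                     /* end of file */
			break;
		if (n < 0) {
			if (errno == EINTR)
				continue;       /* interrupted, retry */
			return (size_t)-1;      /* read error */
		}
		total += (size_t)n;
	}
	return total;
}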

--
Kris Maglione

Correctness is clearly the prime quality.  If a system does not do
what it is supposed to do, then everything else about it matters
little.
        --Bertrand Meyer
