On Sat, Jun 12, 2010 at 02:15:07AM +0100, Connor Lane Smith wrote:
> On 11 June 2010 21:15, Anders Andersson <pipat...@gmail.com> wrote:
>> Think before posting or blaming. 2GB might be silly now, much as 2MB
>> was silly 20 years ago. I can't see why it would be extraordinarily
>> silly to read in/map 2GB from a file 10 years from now. It takes 10
>> seconds at most *today*. And to limit your application because people
>> still use a broken processor architecture sounds a bit windows-y I
>> think..
>
> The posix spec states that the nbyte argument is of type size_t, which
> could be extended to 64 bit, making 2GB well within reach.

Which is, of course, entirely relevant. This is a non-issue, not a practical limitation. It's already possible to read as much of a file into memory as your memory will hold; the only restriction is that if you want more than 2GB, you have to make multiple calls. And that is made even more irrelevant by the fact that reads generally need to occur in a loop anyway, to deal with incomplete and interrupted reads -- hence utility functions like readn.
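For reference, a readn-style helper is typically written along these lines -- a minimal sketch, not the exact code from any particular codebase (the name and signature vary): loop until the requested count is satisfied, retry on EINTR, and stop at EOF. Reading more than 2GB is then just a matter of calling it (or the loop inside it) more than once.

```c
#include <errno.h>
#include <unistd.h>

/* Read up to nbyte bytes into buf, looping over short reads.
   Returns the number of bytes actually read (may be less than
   nbyte at EOF), or -1 on error. Illustrative sketch only. */
ssize_t
readn(int fd, void *buf, size_t nbyte)
{
	char *p = buf;
	size_t left = nbyte;

	while (left > 0) {
		ssize_t n = read(fd, p, left);
		if (n < 0) {
			if (errno == EINTR)
				continue;	/* interrupted: retry */
			return -1;	/* real error */
		}
		if (n == 0)
			break;		/* EOF: return what we have */
		p += n;
		left -= n;
	}
	return (ssize_t)(nbyte - left);
}
```

The same loop is what hides any per-call size cap: each iteration consumes whatever the kernel hands back, so the caller never cares whether one read() returned everything or only part of it.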

--
Kris Maglione

Complexity kills.  It sucks the life out of developers, it makes
products difficult to plan, build and test, it introduces security
challenges and it causes end-user and administrator frustration.
        --Ray Ozzie
