On 09/03/2013 02:30, Jonathan M Davis wrote:
<snip>
> In general, ranges should work just fine for I/O as long as they have an
> efficient implementation which buffers underneath (and preferably makes
> them forward ranges). Aside from how it's implemented internally, there's
> no real difference between operating on a range over a file and any other
> range. The trick is making it efficient internally. Doing something like
> reading a character at a time from a file every time that popFront is
> called would be horrible, but with buffering, it should be just fine.

If examining one byte at a time is what you want, that is. I mean this at the program logic level, not just the implementation level. The fact remains that most applications want to look at bigger portions of the file at a time. Suppose file is an input range over the file's bytes:

    ubyte[] data;
    data.length = 100;
    foreach (ref b; data) { b = file.front; file.popFront(); }

Even with buffering, a block memory copy is likely to be more efficient than transferring each byte individually.
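
For comparison, here's a minimal sketch using std.stdio's rawRead, which fills the whole buffer in one block read rather than byte by byte (the file name here is just a placeholder):

    import std.stdio;

    void main()
    {
        auto file = File("input.dat", "rb"); // placeholder file name
        ubyte[] data = new ubyte[100];
        // rawRead fills as much of the buffer as it can in one block
        // read and returns the slice that was actually filled.
        ubyte[] got = file.rawRead(data);
        writefln("read %s bytes", got.length);
    }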

You could provide direct memory access to the buffer, but this creates further complications if you want to read variable-size chunks. Other variables that affect the best approach include whether you want to keep hold of previously read chunks and whether you want to modify the read-in data in place.
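
std.stdio's byChunk illustrates one point in this design space: it hands out chunks of a size you choose, but it reuses its internal buffer between iterations, so a caller who wants to keep hold of previously read chunks has to dup them. A rough sketch (again, the file name is a placeholder):

    import std.stdio;

    void main()
    {
        auto file = File("input.dat", "rb"); // placeholder file name
        ubyte[][] kept;
        foreach (chunk; file.byChunk(4096))
        {
            // byChunk reuses its buffer between iterations, so the
            // chunk must be copied if it is to outlive this pass.
            kept ~= chunk.dup;
        }
        writefln("kept %s chunks", kept.length);
    }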

> Now, you're not going to get a random-access range that way, but it
> should work fine as a forward range, and std.mmfile will probably give
> you what you want if an RA range is what you really need (and that, we
> have already).

Yes, random-access file I/O is another thing. I was thinking primarily of cases where you want to just read the file through and process it as you go. I imagine that most word processors, graphics editors, etc. read the file in and then generate it afresh when you save, rather than just writing the changes back to the file.

And then there are web browsers, which read files of various types both from the user's local file storage and over an HTTP connection.

Stewart.
