On Sunday, 8 November 2015 at 14:41:11 UTC, Spacen Jasset wrote:
> But it doesn't seem efficient and strays off the conceptual path. In other words, why chunk things up, join them back, to get a stream?

`.byChunk` buffers the file in fixed-size pieces and `.joiner` hides that buffering behind a flat range interface. Thanks to lazy evaluation, both operations happen incrementally under the hood as the resulting input range is consumed, so if your file is big, you are assured that only one slice of N bytes (1024 in my example) is loaded in memory at a time (unless you accumulate the whole thing later). This matches a "file stream" concept well, at least for reading.
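
A minimal sketch of that pipeline (the file name "input.dat" and the chunk size are placeholders):

```d
import std.stdio : File, writeln;
import std.algorithm.iteration : joiner;

void main()
{
    // "input.dat" is a hypothetical file name, assumed to exist.
    auto bytes = File("input.dat").byChunk(1024) // lazy range of ubyte[] buffers
                                  .joiner;       // flattened into a range of ubyte
    // Lazy evaluation: each 1024-byte chunk is read from disk only as
    // the range is consumed, so one buffer is live at a time.
    size_t n;
    foreach (b; bytes)
        ++n;
    writeln(n, " bytes read");
}
```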

But as said in Jonathan M Davis's answer, you can also read the whole file into a string or a ubyte[].
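
For instance, with std.file (eager versions; both file names here are assumptions):

```d
import std.file : read, readText;
import std.stdio : writeln;

void main()
{
    // Eager alternatives: the whole file lands in memory at once.
    string text = readText("input.txt");           // decoded, UTF-validated text
    auto data   = cast(ubyte[]) read("input.dat"); // raw bytes (read returns void[])
    writeln(text.length, " chars, ", data.length, " bytes");
}
```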

> Perhaps the problem is that File is missing a .range() function?

Yes, but this is a bit how Phobos ranges and algorithms work: you get many independent low-level building blocks that you compose yourself, rather than big classes that wrap everything. The whole standard library is organized around this.
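
To illustrate the composition, here is a small sketch that counts newline bytes by chaining the same blocks with a generic algorithm ("input.txt" is again a placeholder):

```d
import std.stdio : File, writeln;
import std.algorithm.iteration : joiner;
import std.algorithm.searching : count;

void main()
{
    // I/O block (byChunk) + flattening block (joiner) + generic
    // algorithm (count): each piece knows nothing about the others.
    auto newlines = File("input.txt").byChunk(4096)
                                     .joiner
                                     .count('\n');
    writeln(newlines, " newlines");
}
```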

std.stream was not compliant with this design, and this is why "they" deprecated it (at least that is how I understood it).
