On Friday, 28 March 2014 at 16:59:05 UTC, Johannes Pfau wrote:
It 'works' with streams but it's way too slow. You don't want to read byte-per-byte. Of course you can always implement ranges on top of streams. Usually these will not provide byte-per-byte access but efficient higher-level abstractions (byLine, byChunk, decodeText).

The point is you can implement ranges on streams easily, but you can't use ranges as the generic primitive for raw data. What's the element type of a data range?

ubyte - performance sucks
ubyte[n], ubyte[] - now you have a range of ranges; most algorithms won't work as expected (find, count, ...).

(The call empty/don't call empty discussion is completely unrelated to this, btw. You can implement ranges on streams either way, but again, using ranges for raw data streams is not a good idea.)
I think a key is to offer something that gives you chunks at a time right at the top, and then use .joiner on that. I read files this way currently.
auto fileByteRange = File("something").byChunk(chunkSize).joiner;
I believe this is a very good way to get performance without losing the functionality of std.algorithm.
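Spelled out a little more (the file name and chunk size here are only illustrative), a minimal sketch of the pattern:

```d
import std.stdio : File, writeln;
import std.algorithm : joiner, count;

void main()
{
    enum chunkSize = 4096; // illustrative; tune to your workload

    // byChunk yields ubyte[] buffers, i.e. a range of ranges.
    // joiner flattens that into a single range of ubyte, so
    // element-wise algorithms like count and find work as expected,
    // while I/O still happens a whole buffer at a time.
    auto bytes = File("something").byChunk(chunkSize).joiner;
    writeln(bytes.count('\n'));
}
```

This keeps the per-element range interface for std.algorithm while amortizing each read over chunkSize bytes.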