On 2011-02-05 10:02:47 -0500, Andrei Alexandrescu <seewebsiteforem...@erdani.org> said:

Generally when one says "I want the stream to copy data straight into my buffers" this is the same as "I want the stream to be unbuffered". So if you want to provide your own buffers to be filled, we need to discuss refining the design of unbuffered input - for example by adding an optional routine for bulk transfer to input ranges.

You're right, this is a different thing.

My major gripe with ranges at this time is that it's almost impossible to design an algorithm that can take slices *or* make copies depending on whether the range supports slicing or not, and whether the slices are stable (not going to be mutated when popping elements from the range). At least not without writing two implementations of it.
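
To make the two-implementation problem concrete, here's a rough sketch (the name grab and the exact constraints are mine, purely illustrative, not anything from Phobos or from your proposal):

import std.range : isInputRange, hasSlicing, hasLength, ElementType;

// Return the first n elements of r and advance r past them.
// One code path slices, the other copies; today you more or less have
// to write both by hand for every algorithm of this kind.
auto grab(R)(ref R r, size_t n)
    if (isInputRange!R)
{
    static if (hasSlicing!R && hasLength!R)
    {
        // Sliceable input: hand back a view, no copy. Whether that view
        // stays valid after further pops is the "stable slice" question.
        auto result = r[0 .. n];
        r = r[n .. r.length];
        return result;
    }
    else
    {
        // Plain input range: copy element by element.
        ElementType!R[] result;
        foreach (i; 0 .. n)     // assumes at least n elements remain
        {
            result ~= r.front;
            r.popFront();
        }
        return result;
    }
}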

I reread your initial post to get a clearer idea of what it meant. It seems to me that your buffered range design could be made to fix that hole. If the data you want to parse is all in memory, the buffered range could simply use the original array as its buffer; shiftFront would simply slice the array to remove the first n elements, while appendToFront would do nothing (as the buffer already contains all of the content). And if the data is immutable, then it's safe to just take a slice of it to preserve it instead of making a copy. You can't really be more efficient than that; it's just great.
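
Something like this, assuming the shiftFront/appendToFront primitives from your proposal (the struct name and the rest of the details are just my guess at it):

// An in-memory "buffered range": the buffer *is* the original array.
struct MemoryBuffer(T)
{
    private T[] data;           // the whole input, already in memory

    this(T[] input) { data = input; }

    // The buffer is simply whatever remains of the original array.
    @property T[] front() { return data; }
    @property bool empty() const { return data.length == 0; }

    // Discard the first n elements: just advance the slice, no copying.
    void shiftFront(size_t n) { data = data[n .. $]; }

    // Everything is already buffered, so growing the buffer is a no-op.
    void appendToFront(size_t n) { }
}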

As for getting the data in bulk directly so you can avoid needless copies... I think the same optimization is possible with a buffered range. All you need is a buffered range that doesn't reuse the buffer, presumably one of immutable(T)[]. With it, you can slice at will without fear of the data being overwritten at a later time.
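
In code, the kind of thing I have in mind (again just a sketch on top of the same shiftFront/appendToFront interface; readHeader is a made-up example):

// Because the elements are immutable and the buffer is never recycled,
// the parser can keep the returned slice around indefinitely.
immutable(ubyte)[] readHeader(R)(ref R input, size_t headerSize)
{
    input.appendToFront(headerSize);   // make sure enough data is buffered
    auto header = input.front[0 .. headerSize];
    input.shiftFront(headerSize);      // consume it; header stays valid
    return header;                     // no copy was ever made
}

With the MemoryBuffer sketch above instantiated as MemoryBuffer!(immutable ubyte), that slice is just a view into the original immutable array.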

So my rereading of your proposal convinced me. Go ahead, I can't wait to use it. :-)

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/
