On 2/6/11 12:42 PM, spir wrote:
On 02/06/2011 04:25 PM, Andrei Alexandrescu wrote:
Say the buffer has 1000 elements of which the last 100 contain data (and the other 900 are past data that's not used). Then say this request comes:

stream.appendToFront(150);

At this point the stream may go two ways:

1. Append to the internal in-memory buffer, making it larger:

_buf.length += 150;
... read into _buf[$ - 150 .. $] ...

Now we have a buffer that has 1150 elements, of which the last 250 are used.

2. Move the data to the beginning of the buffer and then read 150 more elements starting at position 100:

_buf[0 .. 100] = _buf[$ - 100 .. $];
... read into _buf[100 .. 250] ...

Now _buf still has 1000 elements, of which the first 250 contain data.

How does the stream decide between 1 and 2? Clearly it's undesirable to grow the buffer too much, and it's also undesirable to copy too much data. A simple approach is to establish a bound on losses, for example to copy data only if the size to copy is < 5% of the entire buffer.
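For concreteness, here is a minimal sketch of that policy in D. The names (Stream, _buf, _beg, _end) and the return-a-slice interface are assumptions made for illustration, not the actual stream API:

import core.stdc.string : memmove;
import std.stdio;

struct Stream
{
    ubyte[] _buf;       // backing buffer
    size_t _beg, _end;  // live data occupies _buf[_beg .. _end]

    // Make room for n more elements of lookahead and return the slice
    // the caller should fill from the underlying source.
    ubyte[] appendToFront(size_t n)
    {
        immutable live = _end - _beg;
        if (_buf.length - _end < n)  // no room left after the live data
        {
            if (live < _buf.length / 20 && live + n <= _buf.length)
            {
                // Copy is bounded (< 5% of the buffer): slide the live
                // data to the front and reuse the tail (option 2).
                // memmove handles the (rare) overlapping case.
                memmove(_buf.ptr, _buf.ptr + _beg, live);
                _beg = 0;
                _end = live;
            }
            else
            {
                // Copy would be too costly: grow the buffer (option 1).
                _buf.length = _end + n;
            }
        }
        auto dest = _buf[_end .. _end + n];
        _end += n;
        return dest;  // caller reads from the source into this slice
    }
}

void main()
{
    auto s = Stream(new ubyte[](1000), 900, 1000); // last 100 are live
    auto dest = s.appendToFront(150); // 100/1000 >= 5%, so the buffer grows
    writeln(s._buf.length, " ", dest.length); // prints "1150 150"
}

Returning the destination slice keeps the grow-vs-copy policy separate from the actual I/O, which is one way (though not the only way) to structure such an interface.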

Isn't absolute size also relevant, if not more so? I mean, as long as the buffer is small in absolute terms (if not insignificant) compared to, e.g., the CPU cache or available RAM, might it not be a good strategy to grow it regardless of the relative growth in proportion to its current size?

Of course. But mathematically you want to find bounds as a fraction of the input size.

Also: could a (truly) circular buffer help and solve the above copy problem, concretely?

Not if you want infinite lookahead, which I think is what any modern buffering system should offer.
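
To illustrate the limitation with a hypothetical sketch (not anything from an actual library): a fixed-capacity ring can absorb new data only until it is full; past that point it must either overwrite unconsumed data or fall back to reallocating and copying, which reintroduces exactly the trade-off above.

struct Ring
{
    ubyte[] data;        // fixed storage, never reallocated
    size_t head, count;  // index of oldest element, number of live elements

    // Fails once the ring is full: accepting more lookahead would mean
    // overwriting unconsumed data or reallocating (and copying) anyway.
    bool push(ubyte b)
    {
        if (count == data.length) return false;
        data[(head + count) % data.length] = b;
        ++count;
        return true;
    }
}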


Andrei
