On 02/06/2011 04:25 PM, Andrei Alexandrescu wrote:
Say the buffer has 1000 elements of which the last 100 contain data (and the
other 900 are past data that's not used). Then say this request comes:

stream.appendToFront(150);

At this point the stream may go two ways:

1. Append to the internal in-memory buffer, making it larger:

_buf.length += 150;
... read into _buf[$ - 150 .. $] ...

Now we have a buffer that has 1150 elements, of which the last 250 are used.

2. Move the data to the beginning of the buffer and then read 150 more elements
starting at position 100:

_buf[0 .. 100] = _buf[$ - 100 .. $];
... read into _buf[100 .. 250] ...

Now _buf still has 1000 elements, of which the first 250 contain data.

How does the stream decide between 1 and 2? Clearly it's undesirable to grow
the buffer too much, and it's also undesirable to copy too much data. A simple
approach is to establish a bound on the losses, for example: copy the data only
if the size to copy is < 5% of the entire buffer.
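
Putting the two strategies and that bound together, a minimal sketch could look
like the code below. The Stream struct, the start/end bookkeeping and the room
checks are assumptions made for illustration, not the proposed stream interface:

import std.stdio;

// Toy model of the buffer above: the live data occupies _buf[start .. end].
struct Stream
{
    ubyte[] _buf;
    size_t start, end;              // live data lives in _buf[start .. end]

    void appendToFront(size_t n)
    {
        immutable used = end - start;
        if (end + n > _buf.length)  // no room left after the live data
        {
            if (used * 20 < _buf.length && used + n <= _buf.length)
            {
                // Strategy 2: the data to copy is under 5% of the buffer,
                // so shift it to the front and reuse the existing space.
                foreach (i; 0 .. used)   // forward copy, element by element
                    _buf[i] = _buf[start + i];
                start = 0;
                end = used;
            }
            else
            {
                // Strategy 1: otherwise grow the buffer; the data stays put.
                _buf.length = end + n;
            }
        }
        // ... here the stream would read n elements into _buf[end .. end + n] ...
        end += n;
    }
}

void main()
{
    auto s = Stream(new ubyte[1000], 900, 1000);   // last 100 of 1000 hold data
    s.appendToFront(150);                          // grows: 100 is 10% of the buffer
    writefln("length=%s, live data in [%s, %s)", s._buf.length, s.start, s.end);
}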

Isn't absolute size also relevant, if not more? I mean, as long as the buffer is small (if not insignificant) in absolute terms compared to, e.g., the CPU cache or available RAM, wouldn't it be a good strategy to grow it regardless of how large the growth is relative to its current size?
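
To make the question concrete, I am thinking of something like the following,
where the absolute limit (64 KB here) is only a guessed figure, not a measured
one:

enum size_t smallBufferLimit = 64 * 1024;   // assumed absolute threshold

bool shouldGrow(size_t bufLen, size_t usedLen)
{
    return bufLen < smallBufferLimit    // small in absolute terms: just grow
        || usedLen * 20 >= bufLen;      // or copying would move >= 5% of it
}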

Also: could a (truly) circular buffer concretely help here and solve the copy problem above?
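
Concretely, I picture something like the sketch below: nothing is ever moved,
but the price is that the newly reserved space (and the data itself) can come
back as two chunks rather than one contiguous slice. All the names are
hypothetical:

import std.algorithm : min;

// Live data occupies `count` elements starting at `head`, wrapping around the
// end of _buf; making room for new data never copies anything.
struct RingBuffer
{
    ubyte[] _buf;
    size_t head;    // index of the first live element
    size_t count;   // number of live elements

    // Reserve n slots after the live data and return the slice(s) the caller
    // would read into; the second chunk may be empty.
    ubyte[][2] reserve(size_t n)
    {
        assert(count + n <= _buf.length, "would still need to grow");
        immutable tail  = (head + count) % _buf.length;
        immutable first = min(n, _buf.length - tail);
        ubyte[][2] chunks = [_buf[tail .. tail + first],
                             _buf[0 .. n - first]];
        count += n;
        return chunks;
    }
}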


Denis
--
_________________
vita es estrany
spir.wikidot.com
