On Mon, 19 Sep 2011 11:46:44 -0400, Robert Jacques <sandf...@jhu.edu>
wrote:
On Mon, 19 Sep 2011 10:08:32 -0400, Andrei Alexandrescu
<seewebsiteforem...@erdani.org> wrote:
On 9/19/11 6:25 AM, Steven Schveighoffer wrote:
On Sun, 18 Sep 2011 15:34:16 -0400, Timon Gehr <timon.g...@gmx.ch>
wrote:
On 09/18/2011 08:28 PM, Andrei Alexandrescu wrote:
That would allow us to e.g. switch from the
pointer+length representation to the arguably better pointer+pointer
representation with ease.
In what way is that representation better?
I agree; I don't see why that representation is inherently better. Some
operations become faster (e.g. popFront), and some become slower
(e.g. length). Most of the others are a wash.
That's where frequency of use comes into play. I'm thinking popFront
would be used most often, and it touches two words.
Andrei
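
For concreteness, here is a rough sketch of the two layouts under
discussion. The struct and member names are purely illustrative (not
druntime's actual definitions); the point is that popFront on a
pointer+length slice writes two words (the pointer and the length),
while on a pointer+pointer slice it writes one, and length turns into a
pointer subtraction, i.e. a subtract plus a division by T.sizeof in the
generated code:

// Illustrative layouts only; not druntime's actual array structs.
struct PtrLengthSlice(T)
{
    T* ptr;
    size_t length;

    bool empty() const { return length == 0; }

    // Advancing touches two words: the pointer and the length.
    void popFront() { ptr++; length--; }
}

struct PtrPtrSlice(T)
{
    T* ptrFront;
    T* ptrBack; // one past the last element

    bool empty() const { return ptrFront == ptrBack; }

    // Advancing touches a single word.
    void popFront() { ptrFront++; }

    // length is no longer stored: a subtraction, which the compiler
    // implements as a subtract plus a division by T.sizeof.
    size_t length() const { return cast(size_t)(ptrBack - ptrFront); }
}
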
The elephant in the room, of course, is that length now requires a
division and that popFront is actually implemented using slicing:
a = a[i .. $];
which translates into:
auto begin = i;
auto end = length;
if(end - begin >= 0 && length - end >= 0) {
    ptr = ptr + T.sizeof * begin;
    length = end - begin;
}
vs
auto length = (ptrBack - ptrFront) / T.sizeof;
auto begin = ptrFront + T.sizeof * i;
auto end = ptrFront + T.sizeof * length;
if(end - begin >= 0 && ptrBack - end >= 0) {
    ptrFront = begin;
    ptrBack = end;
}
I would hope something like this would be optimized by the compiler:
auto begin = ptrFront + T.sizeof * i;
if(ptrBack - begin >= 0)
    ptrFront = begin;
If not, popFront could optimize it.
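
For illustration, a hand-written popFront against a dual-pointer slice
could reduce to exactly those lines. A minimal sketch, assuming the
ptrFront/ptrBack names from the pseudocode above (note that D pointer
arithmetic is already in elements, so the explicit T.sizeof scaling in
the pseudocode is only the byte-level view):

// Minimal sketch of popFront for a dual-pointer slice; names are
// illustrative and the error handling is only a placeholder.
void popFront(T)(ref T* ptrFront, T* ptrBack)
{
    auto begin = ptrFront + 1;   // popFront is the i == 1 case of a[i .. $]
    if (ptrBack - begin >= 0)    // the single remaining check
        ptrFront = begin;        // one word written, no length to update
    // (a real implementation would assert on an empty slice instead)
}

unittest
{
    int[] data = [1, 2, 3];
    auto front = data.ptr;
    auto back = data.ptr + data.length;
    popFront(front, back);
    assert(*front == 2);
}
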
Certainly, to say popFront is going to perform *worse* using a
dual-pointer representation is false. Only one calculation is needed.
-Steve