On Thursday, 14 August 2014 at 19:10:18 UTC, Jonathan M Davis
wrote:
I've never really tried to benchmark it, but it was my
understanding that the idea behind Appender was to use it to
create the array when you do that via a lot of appending, and
then you use it as a normal array and stop using Appender. It
sounds like you're trying to use it as a way to manage reusing
the array, and I have no idea how it works for that. But then
again, I've never actually benchmarked it for just creating
arrays via appending. I'd just assumed that it was faster than
just using ~=, because that's what it's supposedly for. But
maybe I just completely misunderstood what the point of
Appender was.
- Jonathan M Davis
I too have trouble understanding what Appender does that
supposedly makes it faster (at least from the documentation). My
old, naive mental model was that it was something like a linked
list of fixed-size arrays, so that appends didn't have to move
existing elements until you were done appending, at which point
it would bake them into a regular dynamic array, moving each
element only once (a std::deque with a copy into a vector, in
C++ terms). Looking at the code, it appears to be nothing like that.
Skimming the code, it appears to be focused on the much more
basic "~= always reallocates" performance problem. It seems to
boil down to essentially this (someone feel free to correct me),
in the form of an output range:
auto a = /* some array */;
auto b = a;
a = a.array();
for (...)
    b.assumeSafeAppend() ~= /* element */;
(assumeSafeAppend's documentation doesn't say whether appending
will still reallocate once capacity is exhausted; I assume it does.)