I didn't mean anything negative.  You asked whether the idea was common
knowledge.  I've seen it used as a homework problem e.g. after the
introduction of Fibonacci heaps. Last time for sure was 1984.  Usually
homework problems fall in the domain of "common knowledge."

Googling "linked list of arrays" provides

http://en.wikipedia.org/wiki/VList

"... the VList is a persistent data structure designed by Phil Bagwell
in 2002 that combines the fast indexing of arrays with the easy
extension of cons-based (or singly-linked) linked lists ... "

I guess yours aren't persistent, but otherwise the idea seems to match.

Certainly you've picked the benchmark cases where the vlist has
advantages over a pointer to a single array.  I can't see how the vlist
would intrinsically use less memory, unless you mean it may cause less
heap fragmentation, which I can believe.

As you say, applications tend to iterate over vectors.  Indexing VLists
will require more instructions than indexing a single vector, which
your benchmark shows.  This was the basis for my "marginally better"
guess: what it saves while pushing can easily be lost again while
traversing.
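
To make the instruction-count point concrete, here is a minimal sketch in
C++.  It uses fixed-size blocks for simplicity (Bagwell's actual VList
uses geometrically growing blocks, so its walk is logarithmic rather than
linear), and the names Block, index_blocked, and index_flat are my own
invention; either way, indexing costs more than a single subscript:

    #include <cstddef>
    #include <vector>

    struct Block {
        static const std::size_t SIZE = 64;
        int data[SIZE];
        Block* next;
    };

    // Indexing the block list: find the block number, then chase
    // pointers to reach it before the final subscript.
    int index_blocked(Block* head, std::size_t i) {
        for (std::size_t b = i / Block::SIZE; b > 0; --b)
            head = head->next;              // one pointer chase per block
        return head->data[i % Block::SIZE];
    }

    // Indexing a plain vector: one address computation, no chasing.
    int index_flat(const std::vector<int>& v, std::size_t i) {
        return v[i];
    }
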

To be fair about comparing with the usual STL implementation, you ought
to point out that when many items are to be pushed onto a vector, most
libraries let you pre-allocate the space.  E.g. in C++ you can say
v.reserve(1000) before pushing a thousand new elements onto v.  In this
way you avoid a lot of copying overhead.
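
Something like the following (illustrative only; the 1000 is just the
example figure from above):

    #include <vector>

    int main() {
        std::vector<int> v;
        v.reserve(1000);         // one allocation up front
        for (int i = 0; i < 1000; ++i)
            v.push_back(i);      // no reallocation or copying while pushing
    }
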

Best regards!
