Josef Hook wrote:

> 
>>- can handle sparse arrays, saving many MBs for very sparse arrays
>>
> 
> What is the complexity of the algorithm?


Here is a documentation snippet from my recent changes, not yet checked in:

  [ get_chunk ... locate List_chunk, calc index in chunk ]
....

  * The scheme of operations is:
  *
  *   if all_chunks_are_MAX_ITEMS
  *      chunk = chunk_list[ idx / MAX_ITEMS ]
  *      idx =   idx % MAX_ITEMS
  *      done.
  *
  *   chunk = first
  *   repeat
  *      if (idx < chunk->items)
  *          done.
  *
  *      if (idx >= items_in_chunk_block)
  *          idx -= items_in_chunk_block
  *          chunk += chunks_in_chunk_block
  *          continue
  *
  *      calc chunk and idx in this block
  *      done.
  *
  *  One chunk_block consists of chunks of the same type: fixed, growing
  *  or other. So the time to look up a chunk doesn't depend on the array
  *  length, but on the complexity of the array. rebuild_chunk_list tries
  *  to reduce the complexity, but may fail if you e.g. do a prime sieve
  *  by actually list_delete-ing the non-prime numbers.
  *
  *  The complexity of the array is the number of distinct chunk_blocks.
  *  They come from:
  *  - initially fixed: 1
  *  - initially growing: 2
  *  - first unshift: 1 except for initially fixed arrays
  *  - insert: 1 - 3
  *  - delete: 1 - 2
  *  - sparse hole: 3 (could be 2, code assumes access at either end now)
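The scheme above can be sketched in C. This is a simplified, hypothetical version: the structure and function names (`chunk_t`, `list_t`, `get_chunk`) are illustrative, and the slow path scans chunk by chunk, whereas the real code skips whole chunk_blocks at a time. It does show the two cases: the O(1) fast path when all chunks are uniform, and the linear scan whose cost depends on how fragmented the array is, not on its length.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ITEMS 16            /* items per uniform chunk (example value) */

typedef struct chunk {
    size_t        items;        /* number of items held by this chunk */
    struct chunk *next;
} chunk_t;

typedef struct {
    chunk_t **chunk_list;       /* flat chunk index, valid if all_max_items */
    chunk_t  *first;            /* head of the chunk list */
    int       all_max_items;    /* true if every chunk holds MAX_ITEMS */
} list_t;

/* Locate the chunk holding index *idx_p and rewrite *idx_p to the
 * offset within that chunk.  Returns NULL if the index is out of range. */
static chunk_t *
get_chunk(list_t *list, size_t *idx_p)
{
    size_t   idx = *idx_p;
    chunk_t *chunk;

    if (list->all_max_items) {              /* fast path: uniform chunks */
        *idx_p = idx % MAX_ITEMS;
        return list->chunk_list[idx / MAX_ITEMS];
    }
    for (chunk = list->first; chunk; chunk = chunk->next) {
        if (idx < chunk->items) {           /* index falls in this chunk */
            *idx_p = idx;
            return chunk;
        }
        idx -= chunk->items;                /* skip past this chunk */
    }
    return NULL;
}
```

The point of the fast path is that a freshly built (or successfully rebuilt) array costs one division and one modulo per lookup; only arrays whose chunk layout could not be normalized pay the scan.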

I hope that answers the question.
In all tests it's as fast as intlist, or much (10x) faster on sparse arrays.

A lookup of 10^6 items took ~0.3s on my Athlon 800.


> /Josef


leo
