On Wed, Mar 17, 2010 at 5:40 PM, Ivan Novick <[email protected]> wrote:
> Regarding the amount of memory allocated for a large hashtable:
>
> It seems the overhead is limited to 2X. For example, growing a table from
> 1 entry to 1024 entries would allocate arrays of size 16, 32, 64, 128,
> 256, 512, and 1024, for roughly 2048 entries allocated in total instead
> of 1024.
>
> Also, this overhead is just for the pointers. If each item in the table
> is 4KB, for example, the overall overhead of the data structure is an
> extra 100-200 bytes per entry, whereas the data itself is much larger
> than that.
>
> Am I missing something that makes the situation much worse?
No, that's accurate. If the number of hash table entries is small relative to the size of each entry, it isn't a serious problem.

Neil
