On 07/31/2013 10:31 AM, Mircea Markus wrote:
> On 30 Jul 2013, at 20:03, Shane Johnson <shjoh...@redhat.com> wrote:
>
>> One option might be to use a fixed key set size and simply increment the 
>> value for each key by X every time it is written. Sort of like an object 
>> with a collection: every time a nested object is added to the collection, 
>> the parent object is written to the cache.
> In this example the aggregated objects should hold a foreign key to the 
> aggregator; otherwise the parent object would grow indefinitely, causing OOMs.
> But good point nevertheless: if the size of every object varies from 
> 1k,2k,3k..Nk circularly, then the total disk capacity allocated for storing 
> that entry is 1k + 2k + ... + Nk = ((1+N)*N)/2 kilobytes.
> So for storing 100MB of data you'd end up with a file of roughly 5GB. On top 
> of that, memory consumption grows proportionally, as we keep in memory the 
> metadata for every segment allocated on disk.
>
>
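To make the quoted arithmetic concrete: with a first-fit free list, every 
rewrite at size i*1k frees the old block, but no previously freed block is 
large enough for the new, bigger value, so a fresh block is appended each 
time. A minimal simulation of that pattern (hypothetical code, nothing from 
the Infinispan tree):

import java.util.TreeMap;

// Simulates a value rewritten at sizes 1k, 2k, ..., Nk against a
// first-fit free list: freed blocks are never big enough to reuse.
public class GrowthSimulation {
    public static void main(String[] args) {
        int n = 100;                        // value cycles through 1k..100k
        long fileSize = 0;                  // bytes ever appended to the file
        TreeMap<Long, Integer> freeBySize = new TreeMap<>(); // free block count per size
        long current = 0;                   // size of the block holding the value now
        for (int i = 1; i <= n; i++) {
            long request = i * 1024L;
            if (current > 0)                // rewrite: free the old, smaller block
                freeBySize.merge(current, 1, Integer::sum);
            Long fit = freeBySize.ceilingKey(request); // smallest free block that fits
            if (fit != null) {              // reuse a free block
                int left = freeBySize.get(fit) - 1;
                if (left == 0) freeBySize.remove(fit); else freeBySize.put(fit, left);
            } else {
                fileSize += request;        // nothing fits: grow the file
            }
            current = request;
        }
        System.out.printf("live data: %dk, file: %dk%n", n, fileSize / 1024);
        // prints "live data: 100k, file: 5050k", i.e. (1+N)*N/2 kilobytes
    }
}

For N around 100 the file is roughly 50x the live data, which is the 
100MB -> 5GB figure above.
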
I am afraid that with single-file storage you cannot avoid this without 
coalescing and splitting free blocks, which results in fragmentation of its 
own. That would mean either reading the record on each side of a freed block 
to check whether it is free (a slowdown) or keeping all blocks sorted by 
offset in memory (another memory requirement). There may be other options, 
such as a buddy-block system, but I think those have degenerate allocation 
patterns as well.
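
For reference, the offset-sorted variant could look roughly like this (a 
sketch with names of my own, not code from the Infinispan tree): free blocks 
kept in a TreeMap keyed by file offset, so a freed block can be merged with 
its neighbours without touching the disk:

import java.util.Map;
import java.util.TreeMap;

// Sketch of the "all blocks sorted by offset in memory" option:
// neighbours of a freed block are found in O(log n) on the heap,
// at the cost of one map entry per free block.
class OffsetSortedFreeList {
    private final TreeMap<Long, Long> free = new TreeMap<>(); // offset -> length, free blocks only

    // First-fit allocation; puts any remainder back into the free list.
    long allocate(long length) {
        for (Map.Entry<Long, Long> e : free.entrySet()) {
            if (e.getValue() >= length) {
                long off = e.getKey();
                long rest = e.getValue() - length;
                free.remove(off);
                if (rest > 0) free.put(off + length, rest);
                return off;
            }
        }
        return -1; // nothing fits: caller appends at the end of the file
    }

    // Return a block to the free list, coalescing with free neighbours.
    void release(long offset, long length) {
        Map.Entry<Long, Long> prev = free.floorEntry(offset - 1);
        if (prev != null && prev.getKey() + prev.getValue() == offset) {
            offset = prev.getKey();         // left neighbour is free: merge
            length += prev.getValue();
            free.remove(offset);
        }
        Long nextLen = free.remove(offset + length);
        if (nextLen != null)                // right neighbour is free: merge
            length += nextLen;
        free.put(offset, length);
    }
}

The coalescing itself is cheap; the map is exactly the extra memory 
requirement mentioned above, one heap entry per free block.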

It's a pity there is no generic space-allocation structure in the Java 
runtime, especially since Doug Lea (author of dlmalloc, the long-used C 
malloc implementation) wrote much of the concurrency utilities in the Java 
class libraries...

Radim
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
