I suggest making the defragmentation strategy another configurable aspect, and keeping the first option in any case, because it's both the simplest to implement and the one with the least runtime overhead. I would call it the aggressive strategy ;-)
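A minimal sketch of what a pluggable strategy could look like, assuming Java. The interface and class names here are made up for illustration; the "aggressive" strategy is simply the no-compaction option described above:

```java
// Hypothetical interface for a configurable defragmentation strategy.
// Names are illustrative only, not from any existing codebase.
interface DefragStrategy {
    /** Decide whether the store should run a compaction pass now. */
    boolean shouldDefragment(long freeBytes, long largestFreeBlock);
}

// The "aggressive" strategy: never compact. With fixed-size slices,
// freed space is immediately reusable, so this has zero runtime overhead.
final class AggressiveStrategy implements DefragStrategy {
    @Override
    public boolean shouldDefragment(long freeBytes, long largestFreeBlock) {
        return false;
    }
}
```

Other strategies (e.g. compact when the largest free block falls below some threshold) could then be swapped in through configuration without touching the allocator itself.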
On Sunday, October 16, 2011, Ashish <[email protected]> wrote:
> On Sun, Oct 16, 2011 at 5:57 PM, Daniel Manzke
> <[email protected]> wrote:
>> How about the idea Hadoop uses: make the size of the slices
>> configurable and assume that the values fit into one. Small values
>> could share a slice, but those slices get marked as droppable :)
>>
>> Bye,
>> Daniel
>
> I think this is what memcached does. This would be important, as for a
> small number of entries people might not use off-heap storage at all.
> They would go for it to store millions of entries without having to
> worry about GC, and that's where our Memory Manager implementation
> would matter a lot.
>
> We can also take ideas from HFile, the way HBase stores key-value
> pairs. The important distinction is that our implementation would
> expose a Map view, as we won't be using scans to retrieve data and may
> never store keys in lexicographical order.
>
> cheers
> ashish
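To make the fixed-size-slice idea from the quoted thread concrete, here is a rough sketch, assuming Java and a single length-prefixed value per slice. The class and method names are invented for illustration; a real allocator would also need the droppable-slice marking, eviction, and concurrency handling discussed above:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical fixed-size-slice allocator over one off-heap buffer.
// Each value is assumed to fit into a single slice (4-byte length
// header + payload). Freed slices go straight back to the free list,
// so no compaction is ever required.
class SliceAllocator {
    private final ByteBuffer buffer;   // the single off-heap region
    private final int sliceSize;       // configurable slice size
    private final Deque<Integer> free = new ArrayDeque<>(); // free slice offsets

    SliceAllocator(int totalBytes, int sliceSize) {
        this.buffer = ByteBuffer.allocateDirect(totalBytes);
        this.sliceSize = sliceSize;
        for (int off = 0; off + sliceSize <= totalBytes; off += sliceSize) {
            free.push(off);
        }
    }

    /** Store a value in one free slice; returns its offset, or -1 if full / too big. */
    int put(byte[] value) {
        if (value.length + 4 > sliceSize || free.isEmpty()) return -1;
        int off = free.pop();
        ByteBuffer slice = buffer.duplicate(); // independent position, shared memory
        slice.position(off);
        slice.putInt(value.length);
        slice.put(value);
        return off;
    }

    /** Read the value stored at a slice offset previously returned by put(). */
    byte[] get(int off) {
        ByteBuffer slice = buffer.duplicate();
        slice.position(off);
        byte[] out = new byte[slice.getInt()];
        slice.get(out);
        return out;
    }

    /** Freeing is just returning the slice to the free list. */
    void remove(int off) {
        free.push(off);
    }
}
```

This mirrors memcached's slab approach in miniature: because every slice has the same size, removal never fragments the region, which is exactly why the "aggressive" (no-defragmentation) strategy works with it.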
