Along the lines of Terracotta BigMemory: apparently what they are actually
doing is just using the DirectByteBuffer class (see this forum post:
http://forums.terracotta.org/forums/posts/list/4304.page), which is basically
the same as using malloc - it gives you non-GC access to a giant pool of
memory that you can allocate as you please.
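
For a rough idea, something like the sketch below carves slices out of one
big direct allocation by hand. This is just a sketch - the sizes and offsets
are made up, and you have to track what is free yourself:

    import java.nio.ByteBuffer;

    public class OffHeapSketch {
        public static void main(String[] args) {
            // One big allocation outside the Java heap; the GC never scans or moves it.
            ByteBuffer pool = ByteBuffer.allocateDirect(1024 * 1024 * 1024);  // 1 GB

            // "malloc" a 4 KB chunk by hand: carve a slice at an offset you track yourself.
            int offset = 0;
            int chunkSize = 4096;
            pool.position(offset);
            pool.limit(offset + chunkSize);
            ByteBuffer chunk = pool.slice();  // view over [offset, offset + chunkSize)
            pool.clear();                     // restore position/limit of the backing buffer
            System.out.println("chunk capacity = " + chunk.capacity());
        }
    }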

Using DirectByteBuffer directly might be even better than using BigMemory,
since BigMemory appears to use Java object serialization to translate
between its "special" memory and regular Java objects, which is probably
just another unnecessary layer.
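
In other words, instead of pushing whole objects through serialization into
the off-heap region, you just read and write the fields at known offsets
yourself. A rough sketch (the record layout here is invented):

    import java.nio.ByteBuffer;

    public class NoSerializationSketch {
        public static void main(String[] args) {
            ByteBuffer block = ByteBuffer.allocateDirect(4096);

            // Write a record as raw fields at fixed offsets - no serialization step.
            long recordId = 42L;
            byte[] payload = "some cached value".getBytes();
            block.putLong(0, recordId);          // bytes 0-7: id
            block.putInt(8, payload.length);     // bytes 8-11: payload length
            block.position(12);
            block.put(payload);                  // bytes 12..: the payload itself

            // Reading it back is just the reverse.
            long id = block.getLong(0);
            byte[] copy = new byte[block.getInt(8)];
            block.position(12);
            block.get(copy);
            System.out.println(id + " -> " + new String(copy));
        }
    }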

On Wed, Dec 15, 2010 at 3:27 PM, Vladimir Rodionov
<vrodio...@carrieriq.com> wrote:

> Why don't you use off-heap memory for this purpose? If it's a block cache
> (all blocks are of equal size), the alloc/free algorithm is pretty simple
> - you don't have to re-implement malloc in Java.
>
> I think something like an open source version of Terracotta BigMemory is a
> good candidate for an Apache project. I see at least several large Hadoop
> components - HBase, HDFS DataNodes, TaskTrackers and the NameNode - that
> suffer a lot from GC timeouts.
>
>
> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: vrodio...@carrieriq.com
>
>
>
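
Re the point above about equal-sized blocks: agreed, the allocator becomes
close to trivial - basically one direct buffer carved into N slots plus a
free list of slot indexes. A very rough sketch (class and method names are
made up for illustration):

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Rough sketch of a fixed-size block pool over off-heap memory.
    // Note: blockSize * blockCount must fit in an int for a single buffer.
    public class OffHeapBlockPool {
        private final int blockSize;
        private final ByteBuffer pool;                 // one big direct allocation
        private final Deque<Integer> freeSlots = new ArrayDeque<Integer>();

        public OffHeapBlockPool(int blockSize, int blockCount) {
            this.blockSize = blockSize;
            this.pool = ByteBuffer.allocateDirect(blockSize * blockCount);
            for (int i = 0; i < blockCount; i++) {
                freeSlots.push(i);                     // every slot starts out free
            }
        }

        // "malloc": grab a free slot index, or -1 if the pool is full.
        public synchronized int allocate() {
            Integer slot = freeSlots.poll();
            return slot == null ? -1 : slot;
        }

        // View over the bytes of one slot.
        public synchronized ByteBuffer block(int slot) {
            pool.position(slot * blockSize);
            pool.limit((slot + 1) * blockSize);
            ByteBuffer view = pool.slice();
            pool.clear();                              // restore the backing buffer's bounds
            return view;
        }

        // "free": just push the slot index back onto the free list.
        public synchronized void free(int slot) {
            freeSlots.push(slot);
        }
    }

The nice property is that the cache then holds slot indexes instead of
object references, so the GC only ever sees a handful of buffer objects no
matter how much data is cached.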
