Hey JM,

  I suspect they are referring to the DN process only.  It is important in
these discussions to talk about memory usage per component.  In my
experience, most HBase clusters only need 1-2 GB of heap for the DN
process.  I am not a MapReduce expert, but typically the TT process itself
only needs 1 GB of heap; you control everything else through the maximum
slot counts and the child task heap.  What is your current block count per
DN?
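As a rough sketch of what that looks like in practice on an MRv1-era setup
(the specific values below are only illustrative, not recommendations --
tune them for your own hardware):

  # hadoop-env.sh: cap the per-daemon heaps
  export HADOOP_DATANODE_OPTS="-Xmx2g $HADOOP_DATANODE_OPTS"
  export HADOOP_TASKTRACKER_OPTS="-Xmx1g $HADOOP_TASKTRACKER_OPTS"

  <!-- mapred-site.xml: the MR footprint per node is roughly
       (map slots + reduce slots) x child heap -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1g</value>
  </property>

With those illustrative numbers, 4 map + 2 reduce slots at 1 GB each can
peak around 6 GB on top of the 2 GB DN and 1 GB TT heaps, and whatever is
left after the OS is what you can give the RegionServer.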

On Sun, Jan 27, 2013 at 9:28 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:

> Hi,
>
> I saw on another message that Hadoop only needs 1 GB...
>
> Today, I have my nodes configured with 45% of memory for HBase and 45%
> for Hadoop. The remaining 10% is for the OS.
>
> Should I change that to 1 GB for Hadoop, 10% for the OS, and the rest
> for HBase? Even when running MR jobs?
>
> Thanks,
>
> JM
>



-- 
Kevin O'Dell
Customer Operations Engineer, Cloudera
