I run a 10-node cluster with 2 cores at 2.4GHz, 4GB RAM, and dual 250GB drives per node. I run on used 32-bit servers, so I can only give HBase a 2GB heap, but I still have memory left for the tasktracker and datanode. More files in Hadoop = more memory used on the namenode. The HBase master is lightly loaded, so I run my master on the same node as the namenode.
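
For reference, the 2GB cap and the namenode heap are just the HEAPSIZE
settings in the stock env scripts. A rough sketch (values are in MB;
2000 is my 32-bit ceiling, so adjust for your own hardware):

  # conf/hbase-env.sh - cap the HBase JVM heap at ~2GB
  export HBASE_HEAPSIZE=2000

  # conf/hadoop-env.sh - namenode memory grows with the file count,
  # so size this for the number of files/blocks you expect to hold
  export HADOOP_HEAPSIZE=1000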

My personal opinion is that a large-memory 64-bit machine can't be fully loaded with HBase at this time, but it will give you better performance. Maybe if you have lots of MR jobs or need the better response time, then it would be worth it. I think there are still some open issues, like too many open file handles, that can keep a larger server from being used to its full capacity. Think in terms of Google: they stick with small (cheap to replace) hard drives, medium memory, and cost/performance CPUs, but have lots of them.
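
On the file handle issue, the usual workaround is to raise the per-user
open-file limit above the Linux default of 1024. A sketch, assuming your
daemons run as a user named hadoop (change the name and limit to suit):

  # check the current limit for the logged-in user
  ulimit -n

  # /etc/security/limits.conf - raise the cap for the hadoop user
  hadoop  -  nofile  32768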

Billy

"Amandeep Khurana" <ama...@gmail.com> wrote in message news:35a22e220903272207s30f26310y3ecbec723b83e...@mail.gmail.com...
What is the typical hardware config for a node that people are using for
Hadoop and HBase? I am setting up a new 10-node cluster which will have
HBase running as well, feeding my front end directly. Until now, I had a
3-node cluster with 2 GB of RAM on the slaves and 4 GB of RAM on the
master. This didn't work very well due to the RAM being a little low.

I got some config details from the PoweredBy page on the Hadoop wiki, but
nothing like that for HBase.


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


