Hello Hadoop Gurus,
We are running a 4-node cluster. We just upgraded the RAM to 48 GB per node. We
have allocated around 33-34 GB per node for Hadoop processes, leaving the
remaining 14-15 GB for the OS and as a buffer. There are no other processes
running on these nodes.
Most of the lighter jobs run
You have to do the math...
If you have 2 GB per mapper, and run 10 mappers per node... that means 20 GB of
memory.
Then you have the TaskTracker and DataNode running, which also take memory...
What did you set as the number of mappers/reducers per node?
What do you see in ganglia or when you run top?
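The back-of-the-envelope math above can be sketched as follows (the per-mapper heap and mapper count are the hypothetical figures from this thread, not your actual configuration):

```python
# Rough per-node memory budget for a Hadoop 1.x worker node.
# Figures are the illustrative numbers from this thread.
heap_per_mapper_gb = 2
mappers_per_node = 10
task_memory_gb = heap_per_mapper_gb * mappers_per_node  # 20 GB for map tasks

# The TaskTracker (TT) and DataNode (DN) daemons also need heap;
# 1 GB each is a common starting point.
tasktracker_gb = 1
datanode_gb = 1

total_gb = task_memory_gb + tasktracker_gb + datanode_gb
print(total_gb)  # 22 GB, before OS, page cache, and per-task overhead
```

Note this counts only JVM heap; actual resident memory per task is higher once JVM overhead and sort buffers are included.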
By our calculations hadoop should not exceed 70% of memory.
Allocated per node - 48 map slots (24 GB), 12 reduce slots (6 GB), and 1 GB
each for the DataNode, TaskTracker, and JobTracker, totalling 33-34 GB.
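For reference, that allocation works out as follows (a sketch; the per-slot heap sizes are inferred from the totals quoted above, not stated explicitly in the thread):

```python
# Per-node slot allocation quoted in this thread.
map_slots, map_total_gb = 48, 24
reduce_slots, reduce_total_gb = 12, 6
daemons_gb = 3  # 1 GB each for DataNode, TaskTracker, JobTracker

print(map_total_gb / map_slots)        # 0.5 GB per map slot (inferred)
print(reduce_total_gb / reduce_slots)  # 0.5 GB per reduce slot (inferred)

total_gb = map_total_gb + reduce_total_gb + daemons_gb
print(total_gb)                        # 33 GB of 48 GB, ~69% (under the 70% target)
```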
The queues are capped at 90% of their allocated capacity, so generally 10% of
How is it that 36 processes are not expected if you have configured 48 + 12
= 60 slots available on the machine?
On Wed, May 11, 2011 at 11:11 AM, Adi adi.pan...@gmail.com wrote:
By our calculations hadoop should not exceed 70% of memory.
Actually, per node it is 56 + 12 = 68 slots (not mappers/reducers).
With the job's configuration it was using 6 slots per mapper (resulting in 8-9
mappers) and 6 slots per reducer (1 reducer).
There was a mistake in my earlier mails: the map slots are 56, not 48, but
the total memory allocation for Hadoop still comes
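The corrected slot math above can be checked like this (my reading of the numbers in this thread; the slots-per-task values come from the job configuration described):

```python
# Corrected per-node slot counts from this message.
map_slots, reduce_slots = 56, 12
slots_per_map_task = 6
slots_per_reduce_task = 6

# Integer division gives the concurrent tasks the scheduler can fit.
concurrent_mappers = map_slots // slots_per_map_task        # 9, matching the 8-9 observed
concurrent_reducers = reduce_slots // slots_per_reduce_task  # 2 possible; the job ran 1
print(concurrent_mappers, concurrent_reducers)
```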
On May 11, 2011, at 11:11 AM, Adi wrote:
By our calculations hadoop should not exceed 70% of memory.
It sounds like you are only
Thanks for your comments, Allen. I have added mine inline.
On May 11, 2011, at 11:11 AM, Adi wrote:
By our calculations hadoop should not exceed 70% of memory.