-----Original Message-----
From: Michael Segel [mailto:michael_se...@hotmail.com]
Sent: Tuesday, June 28, 2011 1:31 PM
To: common-user@hadoop.apache.org
Subject: RE: Performance Tunning
Matthew,
I understood that Juan was talking about a 2-socket quad-core box. We run
boxes with the E5500 (Xeon quad core) ch[...]rst, and then network or memory
becomes the next constraint. Really, the Xeon chipsets are very good.
HTH
-Mike
> From: matthew.go...@monsanto.com
> To: common-user@hadoop.apache.org
> Subject: RE: Performance Tunning
> Date: Tue, 28 Jun 2011 14:46:40 +
>
To: common-user@hadoop.apache.org
Subject: Re: Performance Tunning
Matt,
You have 2 threads per core, so your Linux box thinks an 8-core box has 16
cores. In my calculations, I tend to take a whole core each for the
TaskTracker, DataNode, and RegionServer, and then a thread per slot, so you
end up with 10 slots per node. Of course, memory is also a factor.
Note this is only a starting point; you can always tune from there.
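The rule of thumb above can be sketched as a toy calculation (the numbers are the ones assumed in this thread, not anything queried from Hadoop itself):

```python
# Slot estimate from the rule of thumb above: reserve one whole core
# (2 hardware threads) for each of TaskTracker, DataNode, and RegionServer,
# then give each remaining hardware thread one task slot.
cores = 8
threads_per_core = 2
logical_cpus = cores * threads_per_core   # Linux reports 16 "cores"

daemons = 3                               # TT, DN, RS
reserved = daemons * threads_per_core     # one whole core (2 threads) each

slots = logical_cpus - reserved
print(slots)                              # 10 slots per node
```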
-----Original Message-----
From: Juan P. [mailto:gordoslo...@gmail.com]
Sent: Monday, June 27, 2011 10:13 PM
To: common-user@hadoop.apache.org
Subject: Re: Performance Tunning
Ok,
So I tried putting the following config in the mapred-site.xml of all of my
nodes
<property>
  <name>mapred.job.tracker</name>
  <value>name-node:54311</value>
</property>
<property>
  <name>mapred.map.tasks</name>
  <value>7</value>
</property>
> Take a look at the default configuration values for the version you are
> running currently:
>> http://hadoop.apache.org/common/docs/r0.20.2/mapred-default.html
>> http://hadoop.apache.org/common/docs/r0.20.2/hdfs-default.html
>> http://hadoop.apache.org/common/docs/r0.20.2/core-default.html
>>
>> HTH,
>> Matt
>>
-----Original Message-----
From: Juan P. [mailto:gordoslo...@gmail.com]
Sent: Monday, June 27, 2011 2:50 PM
To: common-user@hadoop.apache.org
Subject: Performance Tunning
I'm trying to run a MapReduce task against a cluster of 4 DataNodes with 4
cores each.
My input data is 4GB in size and it's split into 100MB files. Current
configuration is default so block size is 64MB.
If I understand it correctly, Hadoop should be running 64 mappers to process
the data.
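For what it's worth, a quick back-of-the-envelope on that mapper count (a sketch assuming plain FileInputFormat behavior, where splits never span files and each split covers at most one 64 MB block): dividing total size by block size gives 64, but since each 100 MB file is split on its own, 40 files yield 2 splits apiece, i.e. roughly 80 map tasks.

```python
# Back-of-the-envelope split count for the job described above.
# Assumes FileInputFormat: each file is split independently, one map
# task per split, and a split is at most one block (64 MB).
import math

total_mb = 4096   # 4 GB of input
file_mb = 100     # input arrives as 100 MB files
block_mb = 64     # default block size

num_files = total_mb // file_mb                  # 40 files
splits_per_file = math.ceil(file_mb / block_mb)  # 100 MB -> 2 splits
map_tasks = num_files * splits_per_file

print(num_files, splits_per_file, map_tasks)     # 40 2 80
```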
I'm ru