[ https://issues.apache.org/jira/browse/HADOOP-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12639552#action_12639552 ]

devaraj edited comment on HADOOP-4396 at 10/14/08 12:39 PM:
----------------------------------------------------------------

Raghu, I have pasted below a portion of the beginning of the task logs for one
of the map tasks where the problem showed up. Notice the time when the map
started running (15:05:15) versus the time of the first spill (15:13:17). That
entire duration (~8 minutes) was spent consuming ~128MB of data from the DFS,
and this was a task scheduled on a data-local node. Quite unexpected, since in
the normal case this takes only a few seconds. The strange thing is that it is
not a node issue: in this sort400 run, a couple of maps on one particular node
exhibited the behavior while many others on the same node ran normally, and in
a subsequent sort400 run with the same dataset, *none* of the maps scheduled on
that node exhibited the problem!
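
For scale, a quick back-of-the-envelope calculation (my own arithmetic from the
two timestamps; the ~128MB figure assumes the task read roughly one full block
before the first spill):

    // Illustrative sketch, not Hadoop code: effective input rate implied
    // by the log below. 15:05:15 -> 15:13:17 is 482 seconds for ~128MB.
    class SpillRateEstimate {
      public static void main(String[] args) {
        long bytesRead = 128L << 20;                       // 134217728 bytes
        long elapsedSecs = (13 * 60 + 17) - (5 * 60 + 15); // 482 seconds
        System.out.println(bytesRead / elapsedSecs / 1024 + " KB/s"); // prints "271 KB/s"
      }
    }

That is under 0.3MB/s on a data-local read that would normally sustain tens of
MB/s.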


2008-10-14 15:05:15,064 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2008-10-14 15:05:15,156 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1741
2008-10-14 15:05:15,173 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 128
2008-10-14 15:05:15,378 INFO org.apache.hadoop.mapred.MapTask: data buffer = 102005480/127506848
2008-10-14 15:05:15,379 INFO org.apache.hadoop.mapred.MapTask: record buffer = 335544/419430
2008-10-14 15:05:15,394 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2008-10-14 15:05:15,396 INFO org.apache.hadoop.io.compress.LzoCodec: Successfully loaded & initialized native-lzo library
2008-10-14 15:05:15,591 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
2008-10-14 15:05:15,592 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2008-10-14 15:05:15,593 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2008-10-14 15:05:15,593 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2008-10-14 15:05:15,594 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2008-10-14 15:13:17,408 INFO org.apache.hadoop.mapred.MapTask: Spilling map output: buffer full= true
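
As a side note on the buffer numbers in the log: they follow from io.sort.mb
and the record/spill percentages. Here is a small standalone sketch of the
sizing arithmetic as I read MapTask's MapOutputBuffer in 0.19 (the 0.05 and
0.80 defaults for io.sort.record.percent and io.sort.spill.percent are
assumptions here, and the variable names are mine, not the real code's):

    // Sketch reproducing the logged "data buffer" and "record buffer"
    // figures from io.sort.mb = 128, assuming default percentages.
    class SortBufferSizing {
      public static void main(String[] args) {
        int sortmb = 128;                  // io.sort.mb (from the log above)
        float recper = 0.05f;              // io.sort.record.percent (assumed default)
        float spillper = 0.80f;            // io.sort.spill.percent (assumed default)
        int maxMem = sortmb << 20;         // 134217728 bytes
        int recCap = (int)(maxMem * recper);
        recCap -= recCap % 16;             // 16 accounting bytes per record
        int dataCap = maxMem - recCap;     // 127506848
        int recEntries = recCap / 16;      // 419430
        System.out.println("data buffer = " + (int)(dataCap * spillper) + "/" + dataCap);
        System.out.println("record buffer = " + (int)(recEntries * spillper) + "/" + recEntries);
      }
    }

So the first spill should fire once ~97MB of map output has accumulated, which
is consistent with the "buffer full= true" line above; the 8-minute gap is all
input consumption, not an undersized buffer.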

> sort on 400 nodes is now slower than in 18
> ------------------------------------------
>
>                 Key: HADOOP-4396
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4396
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.19.0
>            Reporter: Jothi Padmanabhan
>            Assignee: Jothi Padmanabhan
>            Priority: Blocker
>             Fix For: 0.19.0
>
>
> Sort on 400 nodes on Hadoop release 18 takes about 29 minutes, but with the
> 0.19 branch it takes about 32 minutes. This behavior is consistent.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
