[ https://issues.apache.org/jira/browse/HDFS-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13482187#comment-13482187 ]

Gopal V commented on HDFS-4070:
-------------------------------

Since it was suggested to me that the 64kb number was related to the max TCP 
packet size, I ran a 3-node cluster (+1 name node) on EC2 (with hadoop 1.0.3) 
to pull in all the replication & network factors.

The results are pretty similar. I planned 10 runs writing 64x1Gb of data, with 
each map writing a single file named by its input record (like TestDFSIO). 
Every map was data-local, as there were 3 data nodes and a replication level of 3.

Six runs had at least one failed job, so they were excluded from the results.

||timestamp||packet size 65536||packet size 1056252||
|1350932059|945599|863490|
|1350936449|998821|930685|
|1350956097|1290333|1004663|
|1350958583|1118328|1096637|

So this is not merely about the disk IOPS, but about the overall system doing 
fewer syscalls.
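
For reference, here is a minimal sketch of the kind of client used for the 
comparison, assuming the client-side dfs.write.packet.size property is the 
setting that controls the packet size; the class name and output path are 
illustrative only, and this is not the attached patch:

{code}
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PacketSizeProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Override the global write packet size for this client (default 65536).
    conf.setInt("dfs.write.packet.size", 1056252);
    FileSystem fs = FileSystem.get(conf);

    // Same write pattern as the benchmark map: 1kb writes into a stream
    // created with a 1mb bufferSize argument.
    byte[] buffer = new byte[1024];
    OutputStream out = fs.create(new Path("/tmp/benchmark/", "packet-size-test"),
        true, 1024 * 1024);
    for (int i = 0; i < 1024 * 1024; i++) {
      out.write(buffer, 0, 1024);
    }
    out.close();
  }
}
{code}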
                
> DFSClient ignores bufferSize argument & always performs small writes
> --------------------------------------------------------------------
>
>                 Key: HDFS-4070
>                 URL: https://issues.apache.org/jira/browse/HDFS-4070
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3, 2.0.3-alpha
>         Environment: RHEL 5.5 x86_64 (ec2)
>            Reporter: Gopal V
>            Priority: Minor
>         Attachments: 
> gistfe319436b880026cbad4-aad495d50e0d6b538831327752b984e0fdcc74db.tar.gz, 
> HDFS-4090-dfs+packetsize.patch
>
>
> The following code illustrates the issue at hand:
> {code}
> protected void map(LongWritable offset, Text value, Context context)
>     throws IOException, InterruptedException {
>   // Create the output file named by the input record, asking for a 1mb buffer.
>   OutputStream out = fs.create(new Path("/tmp/benchmark/", value.toString()),
>       true, 1024 * 1024);
>   int i;
>   // Write 1gb as 1kb chunks; the bufferSize argument should coalesce these.
>   for (i = 0; i < 1024 * 1024; i++) {
>     out.write(buffer, 0, 1024);
>   }
>   out.close();
>   context.write(value, new IntWritable(i));
> }
> {code}
> This code is run as a single map-only task with an input file on disk and 
> map-output to disk.
> {{# su - hdfs -c 'hadoop jar /tmp/dfs-test-1.0-SNAPSHOT-job.jar file:///tmp/list file:///grid/0/hadoop/hdfs/tmp/benchmark'}}
> In the data node's disk access, the following consistent pattern was 
> observed irrespective of the bufferSize provided.
> {code}
> 21119 read(58,  <unfinished ...>
> 21119 <... read resumed> "\0\1\0\0\0\0\0\0\0034\212\0\0\0\0\0\0\0+\220\0\0\0\376\0\262\252ux\262\252u"..., 65557) = 65557
> 21119 lseek(107, 0, SEEK_CUR <unfinished ...>
> 21119 <... lseek resumed> )             = 53774848
> 21119 write(107, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65024 <unfinished ...>
> 21119 <... write resumed> )             = 65024
> 21119 write(108, "\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux"..., 508 <unfinished ...>
> 21119 <... write resumed> )             = 508
> {code}
> Here fd 58 is the incoming socket, 107 is the blk file and 108 is the .meta 
> file.
> The DFS client ignores the bufferSize argument and sticks with the default 
> 64kb packet size, suffering from suboptimal syscall & disk performance, as 
> is obvious from the sizes of the traced read/write operations (broken down 
> below).
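> As a hedged sketch of where those sizes come from, assuming 512-byte checksum 
> chunks with 4-byte CRC32 values and a 25-byte packet header (assumptions, not 
> values confirmed by the trace itself):
> {code}
> public class PacketMath {
>   public static void main(String[] args) {
>     int bytesPerChecksum = 512;  // io.bytes.per.checksum default (assumed)
>     int checksumSize = 4;        // CRC32 bytes per chunk (assumed)
>     int packetHeaderLen = 25;    // assumed packet header length on the wire
>
>     int dataBytes = 65024;                      // write(107, ...) to the blk file
>     int chunks = dataBytes / bytesPerChecksum;  // 127 chunks in one packet
>     int checksumBytes = chunks * checksumSize;  // 508  -> write(108, ...) to .meta
>     int wireBytes = packetHeaderLen + dataBytes + checksumBytes; // 65557 -> read(58, ...)
>
>     System.out.println(chunks + " chunks, " + checksumBytes + " checksum bytes, "
>         + wireBytes + " bytes per packet on the wire");
>   }
> }
> {code}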
> Changing the packet size to a more optimal 1056405 bytes results in a decent 
> spike in performance, by cutting down on disk & network iops.
> h3. Average time (milliseconds) for a 10 GB write as 10 files in a single map task
> ||timestamp||packet size 65536||packet size 1056252||
> |1350469614|88530|78662|
> |1350469827|88610|81680|
> |1350470042|92632|78277|
> |1350470261|89726|79225|
> |1350470476|92272|78265|
> |1350470696|89646|81352|
> |1350470913|92311|77281|
> |1350471132|89632|77601|
> |1350471345|89302|81530|
> |1350471564|91844|80413|
> That is, on average, an increase from ~115 MB/s to ~130 MB/s (roughly 90.5 s 
> vs 79.4 s averaged over the runs above), just from modifying the global 
> packet size setting.
> This suggests that there is value in adapting the user-provided buffer sizes 
> to the hadoop packet sizing, per stream; a rough sketch of that idea follows.
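> A minimal sketch of such an adaptation, assuming a packet must be a header 
> plus a whole number of checksum chunks; the helper name, chunk geometry and 
> header length are assumptions for illustration, not the attached patch:
> {code}
> public class PacketSizing {
>   static final int BYTES_PER_CHECKSUM = 512; // io.bytes.per.checksum (assumed)
>   static final int CHECKSUM_SIZE = 4;        // CRC32 bytes per chunk (assumed)
>   static final int PKT_HEADER_LEN = 25;      // assumed packet header length
>
>   // Round the requested buffer size down to a whole number of chunks,
>   // but never below one chunk per packet.
>   static int packetSizeFor(int bufferSize) {
>     int chunk = BYTES_PER_CHECKSUM + CHECKSUM_SIZE;
>     int chunksPerPacket = Math.max((bufferSize - PKT_HEADER_LEN) / chunk, 1);
>     return PKT_HEADER_LEN + chunksPerPacket * chunk;
>   }
>
>   public static void main(String[] args) {
>     // A 1mb bufferSize would map to roughly a 1mb packet instead of 64kb.
>     System.out.println(packetSizeFor(1024 * 1024)); // 1048537 with these assumptions
>   }
> }
> {code}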
