[ https://issues.apache.org/jira/browse/HDFS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13630905#comment-13630905 ]
Aaron T. Myers commented on HDFS-347:
-------------------------------------

Nicholas,

There is quite a bit of precedent for merging a branch to trunk before all of
the sub-tasks are completed, provided the branch as-is is functional, won't
break trunk, and the remaining sub-tasks can reasonably be continued on trunk,
e.g. HDFS-1623, HDFS-3077, and HDFS-1073. Given that Suresh had suggested, in
response to your -1 on the merge vote, that we continue addressing this
feedback on trunk, and given that you then removed your -1 and the vote
passed, it seems to me that this branch was ready for a merge to trunk. Yes,
there are a few more small cleanup sub-tasks to work on, but those can be done
on trunk. Let's continue the work there.

> DFS read performance suboptimal when client co-located on nodes with data
> --------------------------------------------------------------------------
>
>                 Key: HDFS-347
>                 URL: https://issues.apache.org/jira/browse/HDFS-347
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, hdfs-client, performance
>            Reporter: George Porter
>            Assignee: Colin Patrick McCabe
>             Fix For: 3.0.0
>
>         Attachments: 2013.01.28.design.pdf, 2013.01.31.consolidated2.patch,
> 2013.01.31.consolidated.patch, 2013.02.15.consolidated4.patch,
> 2013-04-01-jenkins.patch, all.tsv, a.patch, BlockReaderLocal1.txt,
> full.patch, HADOOP-4801.1.patch, HADOOP-4801.2.patch, HADOOP-4801.3.patch,
> HDFS-347-016_cleaned.patch, HDFS-347.016.patch, HDFS-347.017.clean.patch,
> HDFS-347.017.patch, HDFS-347.018.clean.patch, HDFS-347.018.patch2,
> HDFS-347.019.patch, HDFS-347.020.patch, HDFS-347.021.patch,
> HDFS-347.022.patch, HDFS-347.024.patch, HDFS-347.025.patch,
> HDFS-347.026.patch, HDFS-347.027.patch, HDFS-347.029.patch,
> HDFS-347.030.patch, HDFS-347.033.patch, HDFS-347.035.patch,
> HDFS-347-branch-20-append.txt, hdfs-347-merge.txt, hdfs-347-merge.txt,
> hdfs-347-merge.txt, hdfs-347.png, hdfs-347.txt, local-reads-doc
>
>
> One of the major strategies Hadoop uses to achieve scalable data processing
> is to move the code to the data. However, putting the DFS client on the same
> physical node as the data blocks it acts on doesn't improve read performance
> as much as expected.
>
> After looking at Hadoop and O/S traces (via HADOOP-4049), I think the
> problem is due to the HDFS streaming protocol causing many more read I/O
> operations (iops) than necessary. Consider the case of a DFSClient fetching
> a 64 MB disk block from a DataNode process running in a separate JVM on the
> same machine. The DataNode will satisfy the single disk block request by
> sending data back to the HDFS client in 64 KB chunks. In BlockSender.java,
> this is done in the sendChunk() method, which relies on Java's transferTo()
> method. Depending on the host O/S and JVM implementation, transferTo() is
> implemented as either a sendfilev() syscall or a pair of mmap() and write()
> calls. In either case, each chunk is read from the disk with a separate I/O
> operation, so the single request for a 64 MB block ends up hitting the disk
> as over a thousand smaller requests of 64 KB each.
>
> Since the DFSClient runs in a different JVM and process from the DataNode,
> shuttling data from the disk to the DFSClient also results in context
> switches each time network packets get sent (in this case, each 64 KB chunk
> turns into a large number of 1500-byte packet send operations). Thus we see
> a large number of context switches for each block send operation.
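To make the chunking concrete: the send path described above amounts to
roughly the loop below. This is a minimal sketch of the pattern, not the
actual BlockSender.sendChunk() code; the class name and CHUNK_SIZE constant
are stand-ins.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class ChunkedSendSketch {
  // The streaming protocol's chunk size; each chunk becomes one
  // transferTo() call, i.e. one sendfilev() or mmap()/write() pair.
  private static final int CHUNK_SIZE = 64 * 1024;

  /** Streams a block file to the client socket in 64 KB chunks. */
  static void sendBlock(FileChannel blockFile, WritableByteChannel out)
      throws IOException {
    long position = 0;
    long remaining = blockFile.size();
    while (remaining > 0) {
      long count = Math.min(CHUNK_SIZE, remaining);
      long sent = blockFile.transferTo(position, count, out);
      position += sent;
      remaining -= sent;
    }
  }
}
{code}

A 64 MB block pushed through this loop issues roughly 64 MB / 64 KB = 1024
separate disk requests, which matches the "over a thousand smaller requests"
observed above.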
> I'd like to get some feedback on the best way to address this, but I think
> the answer is to provide a mechanism for a DFSClient to directly open data
> blocks that happen to be stored on the same machine. It could do this by
> examining the set of LocatedBlocks returned by the NameNode and marking
> those that should be resident on the local host. Since the DataNode and
> DFSClient (probably) share the same Hadoop configuration, the DFSClient
> should be able to find the files holding the block data, open them directly,
> and read the data itself. This would avoid the context switches imposed by
> the network layer, and would allow for much larger read buffers than 64 KB,
> which should reduce the number of iops imposed by each block read operation.
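As a rough illustration of the proposal, the client-side read path could look
something like the sketch below. This assumes the client has already matched
a LocatedBlock against the local host and derived the block file's path from
the shared configuration; LocalBlockReaderSketch, READ_BUFFER_SIZE, and
openLocalBlock() are hypothetical names, not part of any patch here.

{code:java}
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class LocalBlockReaderSketch {
  // With direct file access the client chooses its own buffer size;
  // 4 MB instead of the protocol's 64 KB chunks cuts the number of
  // disk iops per block by ~64x. (The exact size is illustrative.)
  private static final int READ_BUFFER_SIZE = 4 * 1024 * 1024;

  /**
   * Opens a block file directly on the local filesystem, bypassing
   * the DataNode streaming path. blockFilePath stands in for the path
   * the DFSClient would derive from its configuration plus the block
   * ID in a LocatedBlock.
   */
  static InputStream openLocalBlock(String blockFilePath) throws IOException {
    return new BufferedInputStream(
        new FileInputStream(blockFilePath), READ_BUFFER_SIZE);
  }
}
{code}

Reads served this way never cross the loopback socket, so both the per-packet
context switches and the 64 KB iop pattern disappear for the local-replica
case.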