[ 
https://issues.apache.org/jira/browse/HDFS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-347:
--------------------------------------

    Attachment: HDFS-347.020.patch

This is an updated patch based on a discussion we had in HADOOP-6311.  
Basically, the current design passes file descriptors over a UNIX domain 
socket, represented by a new class named {{DomainSocket}}.  This is 
accomplished by adding a new message, {{RequestShortCircuitFd}}, to the 
{{DataTransferProtocol}}.
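
As a rough sketch (helper and message names here are illustrative, not 
necessarily the exact API in the patch), the client side looks something 
like this:

{code:java}
import java.io.FileInputStream;
import java.io.IOException;

// Illustrative sketch only; method and message names may differ from
// the patch.  DomainSocket is the new class from HADOOP-6311.  The key
// point is that the file descriptors arrive as ancillary (SCM_RIGHTS)
// data on the domain socket, not as bytes in the stream itself.
class ShortCircuitFdSketch {
  FileInputStream[] requestBlockFds(String socketPath) throws IOException {
    DomainSocket sock = DomainSocket.connect(socketPath);
    try {
      // Hypothetical helper: write the new RequestShortCircuitFd
      // message identifying the block we want (elided).
      sendRequestShortCircuitFd(sock);

      FileInputStream[] fds = new FileInputStream[2];
      byte[] buf = new byte[1];
      sock.recvFileInputStreams(fds, buf, 0, buf.length);
      return fds;  // fds[0] = block data, fds[1] = block metadata
    } finally {
      sock.close();
    }
  }

  private void sendRequestShortCircuitFd(DomainSocket sock)
      throws IOException {
    // Serialize and send the protocol message (elided).
  }
}
{code}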

The {{DataXceiverServer}} can manage these UNIX domain sockets just as easily 
as it manages the existing IPv4 sockets, because both implement the same 
interfaces.
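
Concretely, the accept loop is written against an abstraction rather than a 
concrete socket type; the interface names below are illustrative stand-ins 
for whatever the patch actually uses:

{code:java}
import java.io.Closeable;
import java.io.IOException;

// Illustrative sketch: a listening TCP socket and a listening UNIX
// domain socket satisfy the same minimal contract, so the server needs
// only one accept-and-dispatch loop.
interface Peer extends Closeable { /* input/output streams, peer name */ }

interface PeerServer extends Closeable {
  Peer accept() throws IOException;  // blocks until a client connects
}

class AcceptLoopSketch implements Runnable {
  private final PeerServer server;  // TCP or UNIX domain, interchangeably

  AcceptLoopSketch(PeerServer server) {
    this.server = server;
  }

  public void run() {
    try {
      while (true) {
        // The same code path serves the connection regardless of
        // which transport it arrived on.
        Peer peer = server.accept();
        dispatch(peer);
      }
    } catch (IOException e) {
      // Listener closed during shutdown; fall out of the loop.
    }
  }

  private void dispatch(Peer peer) {
    // Hand the connection to a DataXceiver worker thread (elided).
  }
}
{code}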

One thing I refactored in this patch is {{BlockReaderFactory}}.  It formerly 
contained only static methods; this patch changes it to be a "real" class with 
instance methods and instance data.  I felt that the {{BlockReaderFactory}} 
methods were getting too unwieldy because we were passing a tremendous number 
of parameters, many of which could be considered properties of the factory.  
Using instance data also allows the factory to keep a blacklist of which 
{{DataNodes}} do not support file descriptor passing.  It uses this 
information to avoid making unnecessary requests.
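
In outline (field and method names illustrative), the refactored factory 
looks like:

{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the instance-based factory.  Values that used
// to travel as long parameter lists become fields, and the factory can
// keep state across calls, such as the blacklist mentioned above.
class BlockReaderFactorySketch {
  // DataNodes that have already refused file descriptor passing; skip
  // the short-circuit attempt for them on later reads.
  private final Set<String> fdPassingBlacklist =
      Collections.synchronizedSet(new HashSet<String>());

  BlockReader newBlockReader(String datanodeId, long blockId)
      throws IOException {
    if (!fdPassingBlacklist.contains(datanodeId)) {
      try {
        return newShortCircuitReader(datanodeId, blockId);
      } catch (UnsupportedOperationException e) {
        // Remember the refusal so we don't retry on every read.
        fdPassingBlacklist.add(datanodeId);
      }
    }
    return newRemoteReader(datanodeId, blockId);  // ordinary TCP path
  }

  private BlockReader newShortCircuitReader(String dn, long blockId)
      throws IOException {
    return null;  // elided
  }

  private BlockReader newRemoteReader(String dn, long blockId)
      throws IOException {
    return null;  // elided
  }
}

interface BlockReader { /* read methods, elided */ }
{code}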

This patch also introduces the concept of a format version number for blocks.  
The idea is that if we later change the on-disk block format, we want to be 
able to tell clients that they can't short-circuit access to those blocks 
unless they understand the corresponding version number.  (One change we've 
talked a lot about in the past is merging the block data and metadata files.)  
This makes it possible to have a cluster where some block files are in one 
format and some are in another, which is a necessity for a real-world 
transition.  Clients are passed the version number, so they can act 
intelligently, or simply refuse to read newer formats they don't know how to 
handle.
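
For example, the client-side check could be as simple as this (the constant 
name is hypothetical):

{code:java}
// Hypothetical client-side version gate.  The DataNode reports the
// on-disk format version for a block; the client short-circuits only
// when it knows how to parse that layout, and otherwise falls back to
// reading through the DataNode.
class BlockFormatCheck {
  static final int MAX_SUPPORTED_BLOCK_FORMAT_VERSION = 1;

  static boolean canShortCircuit(int reportedVersion) {
    return reportedVersion <= MAX_SUPPORTED_BLOCK_FORMAT_VERSION;
  }
}
{code}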

Because this patch depends on the {{DomainSocket}} code, it currently 
incorporates that code.  HADOOP-6311 is the best place to comment on 
{{DomainSocket}}, since that is the JIRA dedicated to it.
                
> DFS read performance suboptimal when client co-located on nodes with data
> -------------------------------------------------------------------------
>
>                 Key: HDFS-347
>                 URL: https://issues.apache.org/jira/browse/HDFS-347
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node, hdfs client, performance
>            Reporter: George Porter
>            Assignee: Colin Patrick McCabe
>         Attachments: all.tsv, BlockReaderLocal1.txt, HADOOP-4801.1.patch, 
> HADOOP-4801.2.patch, HADOOP-4801.3.patch, HDFS-347-016_cleaned.patch, 
> HDFS-347.016.patch, HDFS-347.017.clean.patch, HDFS-347.017.patch, 
> HDFS-347.018.clean.patch, HDFS-347.018.patch2, HDFS-347.019.patch, 
> HDFS-347.020.patch, HDFS-347-branch-20-append.txt, hdfs-347.png, 
> hdfs-347.txt, local-reads-doc
>
>
> One of the major strategies Hadoop uses to get scalable data processing is to 
> move the code to the data.  However, putting the DFS client on the same 
> physical node as the data blocks it acts on doesn't improve read performance 
> as much as expected.
> After looking at Hadoop and O/S traces (via HADOOP-4049), I think the problem 
> is due to the HDFS streaming protocol causing many more read I/O operations 
> (iops) than necessary.  Consider the case of a DFSClient fetching a 64 MB 
> disk block from the DataNode process (running in a separate JVM) on the same 
> machine.  The DataNode satisfies the single disk block request by sending 
> data back to the HDFS client in 64-KB chunks.  In BlockSender.java, this is 
> done in the sendChunk() method, relying on Java's transferTo() method.  
> Depending on the host O/S and JVM implementation, transferTo() is 
> implemented as either a sendfilev() syscall or a pair of mmap() and write().  
> In either case, each chunk is read from the disk with a separate I/O 
> operation.  The result is that the single request for a 64-MB block ends up 
> hitting the disk as over a thousand smaller requests of 64 KB each.
> Since the DFSClient runs in a different JVM and process than the DataNode, 
> shuttling data from the disk to the DFSClient also results in context 
> switches each time network packets get sent (in this case, each 64-KB chunk 
> turns into a large number of 1500-byte packet send operations).  Thus we see 
> a large number of context switches for each block send operation.
> I'd like to get some feedback on the best way to address this, but I think 
> the answer is to provide a mechanism for a DFSClient to directly open data 
> blocks that happen to be on the same machine.  It could do this by examining 
> the set of LocatedBlocks returned by the NameNode, marking those that should 
> be resident on the local host.  Since the DataNode and DFSClient (probably) 
> share the same hadoop configuration, the DFSClient should be able to find 
> the files holding the block data, and it could directly open them and read 
> the data itself.  This would avoid the context switches imposed by the 
> network layer, and would allow for much larger read buffers than 64 KB, 
> which should reduce the number of iops imposed by each read block operation.
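
To make the arithmetic in the description concrete: a 64 MB block streamed in 
64 KB chunks is 1024 separate transfers, while a client that opens the block 
file itself can use whatever buffer size it likes.  A minimal sketch of the 
proposed direct local read (the buffer size and structure are illustrative):

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Hypothetical sketch of a short-circuit local read: the client opens
// the block file directly instead of streaming it from the DataNode.
class LocalBlockReadSketch {
  static void readBlock(File blockFile) throws IOException {
    FileInputStream in = new FileInputStream(blockFile);
    try {
      // One large read here replaces dozens of 64-KB chunk sends (and
      // their context switches) in the streaming protocol.
      byte[] buf = new byte[4 * 1024 * 1024];
      int n;
      while ((n = in.read(buf)) > 0) {
        // Consume buf[0..n) in-process; no sockets involved.
      }
    } finally {
      in.close();
    }
  }
}
{code}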

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
