[ https://issues.apache.org/jira/browse/HADOOP-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638606#action_12638606 ]

Raghu Angadi commented on HADOOP-4386:
--------------------------------------

I should add that using transferTo or not does not actually change the blocking 
dynamics. Inside the RPC, user code still needs to do a blocking read from or 
write to disk. 
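To make the point concrete, here is a minimal Java sketch (illustration only, not 
code from Hadoop or any proposed patch; the class and method names are made up). 
Both the zero-copy path via FileChannel.transferTo() and the conventional 
read-into-a-buffer path block the calling thread while data comes off the local 
disk:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

// Hypothetical illustration: whichever way a block is sent, the sending
// thread blocks on disk I/O for the duration of the transfer.
public class BlockSendSketch {

  // Zero-copy path: transferTo() lets the kernel move file bytes to the
  // socket, but the call itself still blocks until the region is sent.
  static void sendWithTransferTo(FileChannel file, SocketChannel sock,
                                 long offset, long length) throws IOException {
    long sent = 0;
    while (sent < length) {
      sent += file.transferTo(offset + sent, length - sent, sock);
    }
  }

  // Conventional path: the read() into a user-space buffer blocks on
  // disk I/O before the write() to the socket can even start.
  static void sendWithBufferCopy(FileChannel file, SocketChannel sock,
                                 long offset, long length) throws IOException {
    ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
    long pos = offset;
    long end = offset + length;
    while (pos < end) {
      buf.clear();
      buf.limit((int) Math.min(buf.capacity(), end - pos));
      int n = file.read(buf, pos);           // blocking disk read
      if (n < 0) break;
      pos += n;
      buf.flip();
      while (buf.hasRemaining()) {
        sock.write(buf);                     // blocking socket write
      }
    }
  }

  public static void main(String[] args) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(args[0], "r");
         SocketChannel sock = SocketChannel.open(
             new InetSocketAddress(args[1], Integer.parseInt(args[2])))) {
      sendWithTransferTo(raf.getChannel(), sock, 0, raf.length());
    }
  }
}

With a blocking SocketChannel (the default), either method ties up the handler 
thread for the whole transfer, which is exactly the concern when moving large 
data transfers into RPC handlers.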


> RPC support for large data transfers.
> -------------------------------------
>
>                 Key: HADOOP-4386
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4386
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs, ipc
>            Reporter: Raghu Angadi
>
> Currently HDFS has a socket-level protocol for serving HDFS data to clients. 
> Clients do not use RPCs to read or write data. Fundamentally there is no 
> reason why this data transfer cannot use RPCs.
> This jira is a placeholder for porting Datanode transfers to RPC. This 
> topic has been discussed in varying detail many times, most recently in 
> the context of HADOOP-3856. There are quite a few issues to be resolved, both 
> at the API level and at the implementation level. 
> We should probably copy some of the comments from HADOOP-3856 to here.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
