[ https://issues.apache.org/jira/browse/HADOOP-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638435#action_12638435 ]
Raghu Angadi commented on HADOOP-4386:
--------------------------------------
I think so too.
Some of the issues that I can think of regarding implementing 'non-copied'
parameters (a sketch illustrating these concerns follows this list):
- How is the user's copier code run? Will it hold up the RPC handler during
the call?
- What happens when the data cannot be written or read without blocking? Will
the connection be locked up by this call, or will a new connection be created
for other RPCs?
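
A minimal Java sketch of the 'non-copied' parameter idea; the interface and
class names here are invented for illustration and are not an existing Hadoop
API. The blocking concerns above show up directly in writeTo(): if it stalls
on a slow source or a connection that is not writable, whatever thread drives
the serialization (on the server side, the RPC handler) stalls with it.

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical interface for a 'non-copied' RPC parameter: instead of being
// serialized into an in-memory buffer up front, the parameter streams its
// bytes onto the connection when the RPC layer asks for them.
interface StreamableParam {
  // Called by the RPC layer on the sending side. If this blocks (slow disk,
  // slow peer), it holds up the calling thread -- on the server side that is
  // the RPC handler thread, which is the first concern listed above.
  void writeTo(DataOutput out) throws IOException;

  // Called on the receiving side to consume the bytes off the connection.
  void readFrom(DataInput in, int length) throws IOException;
}

// Example: stream a block of data from an InputStream without copying the
// whole payload into memory; only a small transfer buffer is used.
class StreamedBlock implements StreamableParam {
  private final InputStream data;
  private final int length;

  StreamedBlock(InputStream data, int length) {
    this.data = data;
    this.length = length;
  }

  @Override
  public void writeTo(DataOutput out) throws IOException {
    byte[] buf = new byte[64 * 1024];
    int remaining = length;
    while (remaining > 0) {
      int n = data.read(buf, 0, Math.min(buf.length, remaining));
      if (n < 0) {
        throw new IOException("unexpected end of stream");
      }
      out.write(buf, 0, n);   // may block while the connection is busy
      remaining -= n;
    }
  }

  @Override
  public void readFrom(DataInput in, int length) throws IOException {
    // A real implementation would hand the bytes to the caller incrementally;
    // omitted here to keep the sketch short.
    in.skipBytes(length);
  }
}
{code}

While such a chunk is being written on a connection, other RPCs either wait
for that connection or have to go over a new one, which is the second concern
above.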
> RPC support for large data transfers.
> -------------------------------------
>
> Key: HADOOP-4386
> URL: https://issues.apache.org/jira/browse/HADOOP-4386
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs, ipc
> Reporter: Raghu Angadi
>
> Currently HDFS has a socket-level protocol for serving HDFS data to clients.
> Clients do not use RPCs to read or write data. Fundamentally, there is no
> reason why this data transfer cannot use RPCs.
> This jira is a placeholder for porting Datanode transfers to RPC. This topic
> has been discussed in varying detail many times, most recently in the
> context of HADOOP-3856. There are quite a few issues to be resolved, both at
> the API level and at the implementation level.
> We should probably copy some of the comments from HADOOP-3856 to here.
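
For context, a minimal sketch of what porting the transfers to RPC could look
like, assuming a hypothetical interface and signature (this is not the
existing Datanode transfer protocol, which runs over raw sockets): a block
read expressed as a plain RPC returns its data as an ordinary in-memory
value, which is exactly where the 'copied vs. non-copied' parameter question
in the comment above comes from.

{code:java}
import java.io.IOException;

// Hypothetical sketch only -- not an existing Hadoop interface. If Datanode
// reads went over RPC as plain method calls, the block data would come back
// as a return value, fully materialized in memory on both sides.
interface RpcBlockTransferProtocol {
  // Read 'length' bytes of the given block, starting at 'offset'.
  byte[] readBlock(long blockId, long offset, int length) throws IOException;
}
{code}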