[ https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744469#comment-14744469 ]

Colin Patrick McCabe commented on HDFS-9011:
--------------------------------------------

You can just raise the maximum RPC size via {{ipc.maximum.data.length}}, as 
added in HADOOP-9676, right?  It is true that processing such a large report 
will take a long time on the NameNode, but this patch does not address that 
problem either.

I am very skeptical about adding more complexity to the full block report path 
unless it can really address the main problem: the length of time for which the 
NameNode holds the lock when processing a large storage report.

> Support splitting BlockReport of a storage into multiple RPC
> ------------------------------------------------------------
>
>                 Key: HDFS-9011
>                 URL: https://issues.apache.org/jira/browse/HDFS-9011
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>         Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1M by default), it 
> sends multiple RPCs to the NameNode for the block report, with each RPC 
> containing the report for a single storage. However, in practice we have seen 
> that even a single storage can contain a large number of blocks, so that its 
> report exceeds the maximum RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage.
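
For context, the per-storage splitting described above is controlled by a
DataNode-side block count threshold; a minimal sketch, assuming hdfs-site.xml on
the DataNodes (1000000 is the shipped default, repeated here only for
illustration):

{code:xml}
<!-- hdfs-site.xml on the DataNodes: when the total number of blocks on the   -->
<!-- DataNode exceeds this threshold, the block report is split into one RPC  -->
<!-- per storage instead of a single combined message.                        -->
<property>
  <name>dfs.blockreport.split.threshold</name>
  <value>1000000</value>
</property>
{code}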



