[ https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248713#comment-15248713 ]
Mingliang Liu commented on HDFS-10312:
--------------------------------------

+1 (non-binding). One nit: in the unit test {{testBlockReportExceedsLengthLimit()}}, we could add {{fail("Should have failed because of the too long RPC data length");}} as the last statement of the {{try}} block.

> Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10312
>                 URL: https://issues.apache.org/jira/browse/HDFS-10312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-10312.001.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by
> default. For exceptional circumstances, this can be tuned up using
> {{ipc.maximum.data.length}}. However, for block reports, there is still an
> internal maximum length restriction of 64 MB enforced by protobuf. (Sample
> stack trace to follow in comments.) This issue proposes to apply the same
> override to our block list decoding, so that large block reports can proceed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
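The nit in the review comment concerns the standard expect-exception test pattern: without a {{fail(...)}} call at the end of the {{try}} block, the test would pass silently if the expected exception were never thrown. A minimal, self-contained sketch of the pattern (the method and limit values here are illustrative, not the actual HDFS test code):

```java
// Hypothetical sketch of the expect-exception pattern discussed above.
// The real testBlockReportExceedsLengthLimit() lives in the HDFS test
// suite; sendBlockReport() and the limits below are illustrative only.
public class ExpectFailureSketch {

    // Simulates an RPC layer that rejects payloads over the length limit.
    static void sendBlockReport(long dataLength, long maxLength) {
        if (dataLength > maxLength) {
            throw new IllegalStateException("RPC data length exceeds limit");
        }
    }

    public static void main(String[] args) {
        boolean caught = false;
        try {
            // 128 MB payload against a 64 MB limit: expected to throw.
            sendBlockReport(128L * 1024 * 1024, 64L * 1024 * 1024);
            // This is the line the review comment asks for: if control
            // reaches here, no exception was thrown and the test must fail.
            throw new AssertionError(
                "Should have failed because of the too long RPC data length");
        } catch (IllegalStateException expected) {
            caught = true; // the expected failure path
        }
        if (!caught) {
            throw new AssertionError("expected exception was not thrown");
        }
        System.out.println("OK");
    }
}
```

In JUnit 4 the {{fail(...)}} helper from {{org.junit.Assert}} plays the role of the explicit {{throw new AssertionError(...)}} above.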
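For reference, the {{ipc.maximum.data.length}} override mentioned in the description is set in the Hadoop configuration. A minimal sketch of raising the RPC limit in {{core-site.xml}} (the 128 MB value is an example, not a recommendation):

```xml
<property>
  <!-- Maximum size in bytes of an incoming RPC message; default is 64 MB. -->
  <name>ipc.maximum.data.length</name>
  <!-- Example: 128 MB = 134217728 bytes. -->
  <value>134217728</value>
</property>
```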