[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15381802#comment-15381802
 ] 

Yuanbo Liu commented on HDFS-10645:
-----------------------------------

If the cluster grows big enough, it will hit this error:
{noformat}
org.apache.hadoop.ipc.RemoteException: java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the 
size limit.
{noformat}
Apparently the block report size exceeds the protobuf limit, and the blocks in the 
data directory will be marked as unavailable in the namespace. This is a bad sign 
for the cluster even with 3x replication. It would be better if administrators 
could see the "Max block report size" in time, so I propose adding this metric to 
the datanode web UI.
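As a rough illustration of why this matters: protobuf's CodedInputStream rejects messages above a 64 MB default size limit, so the number of blocks a single report can carry is bounded by the encoded size per block entry. The per-block figure below is an assumption for illustration, not a measured value.

```java
// Back-of-envelope estimate: how many blocks fit in one block report
// before protobuf's default CodedInputStream size limit is exceeded.
public class BlockReportSizeEstimate {
    public static void main(String[] args) {
        // Protobuf's default message size limit (64 MB).
        final long pbLimitBytes = 64L * 1024 * 1024;
        // Assumed average encoded size of one block entry (block id,
        // length, generation stamp as varints) -- an assumption only.
        final long bytesPerBlock = 24;
        long maxBlocks = pbLimitBytes / bytesPerBlock;
        System.out.println("Approx. blocks per report before hitting the limit: "
                + maxBlocks);
    }
}
```

A "Max block report size" metric would let administrators see how close a datanode is to this ceiling before reports start failing.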

> Make block report size as a metric and add this metric to datanode web ui
> -------------------------------------------------------------------------
>
>                 Key: HDFS-10645
>                 URL: https://issues.apache.org/jira/browse/HDFS-10645
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, ui
>            Reporter: Yuanbo Liu
>            Assignee: Yuanbo Liu
>
> Add a new metric called "Max block report size". It's important for 
> administrators to know the bottleneck of the block report, and the metric is 
> also useful for tuning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
