[ https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223437#comment-14223437 ]

Daryn Sharp commented on HDFS-7435:
-----------------------------------

We've also seen large block reports aggravate GC, not due to decoded size, but 
due to the spew of garbage from the re-allocs and (un)boxing.  This patch is 
basically taking us back to pre-PB performance & behavior.

A block report works out to ~44,000 blocks/MB, which at our scale is a few megs per 
report.  HDFS-4879 chunked a list of blocks to delete because 25M entries were 
consuming ~400MB.  I don't foresee blocks/node approaching that level anytime 
soon.  In that light, it's probably premature to introduce chunking?
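For reference, the per-megabyte figure falls out of the wire layout directly. A back-of-the-envelope sketch (illustrative only; it treats each long as a flat 8 bytes, whereas PB varints will vary, and the replica counts in the comments are made up, not measurements):

{code:java}
public class BlockReportSizing {
  public static void main(String[] args) {
    int longsPerReplica = 3;                             // per the issue description
    int bytesPerReplica = longsPerReplica * Long.BYTES;  // 24 bytes, ignoring varint encoding
    int blocksPerMiB = (1 << 20) / bytesPerReplica;      // ~43,690, i.e. the ~44,000 blocks/MB above
    System.out.println(blocksPerMiB + " blocks per MiB of report payload");
    // A node with ~150k replicas would report roughly 150_000 / 43_690 ≈ 3-4 MB,
    // i.e. "a few megs per report" -- nowhere near the 25M-entry scale of HDFS-4879.
  }
}
{code}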


> PB encoding of block reports is very inefficient
> ------------------------------------------------
>
>                 Key: HDFS-7435
>                 URL: https://issues.apache.org/jira/browse/HDFS-7435
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, namenode
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>         Attachments: HDFS-7435.patch
>
>
> Block reports are encoded as a PB repeated long.  Repeated fields use an 
> {{ArrayList}} with a default capacity of 10.  A block report containing tens or 
> hundreds of thousands of longs (3 for each replica) is extremely expensive 
> since the {{ArrayList}} must realloc many times.  Also, decoding repeated 
> fields will box the primitive longs, which must then be unboxed.
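A minimal sketch of the decode pattern the description is getting at (plain Java; the class and method names are hypothetical and this is not code from HDFS-7435.patch or the protobuf runtime):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class RepeatedLongDecodeSketch {

  // Accumulating a repeated int64 into a default-capacity list: every add()
  // boxes the primitive, and the backing array is copied each time the list
  // outgrows its capacity (10, 15, 22, ... -- ArrayList grows by ~1.5x), so a
  // report with hundreds of thousands of longs churns out short-lived garbage.
  static List<Long> decodeBoxed(long[] wire) {
    List<Long> out = new ArrayList<>();   // default capacity 10
    for (long v : wire) {
      out.add(v);                         // autobox + periodic realloc
    }
    return out;
  }

  // The pre-PB style: one allocation, no boxing, no copies on growth.
  static long[] decodePrimitive(long[] wire) {
    long[] out = new long[wire.length];
    System.arraycopy(wire, 0, out, 0, wire.length);
    return out;
  }

  public static void main(String[] args) {
    long[] wire = new long[200_000 * 3];  // e.g. ~200k replicas, 3 longs each
    System.out.println(decodeBoxed(wire).size() + " boxed vs "
        + decodePrimitive(wire).length + " primitive longs");
  }
}
{code}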


