[ https://issues.apache.org/jira/browse/HDFS-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14090398#comment-14090398 ]
Tsz Wo Nicholas Sze commented on HDFS-6758:
-------------------------------------------

I think we do not need to change OpWriteBlockProto since the header (BaseHeaderProto) already has the numBytes.

> block writer should pass the expected block size to DataXceiverServer
> ---------------------------------------------------------------------
>
>                 Key: HDFS-6758
>                 URL: https://issues.apache.org/jira/browse/HDFS-6758
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, hdfs-client
>    Affects Versions: 2.4.1
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-6758.01.patch
>
>
> DataXceiver initializes the block size to the default block size for the
> cluster. This size is later used by the FsDatasetImpl when applying the
> VolumeChoosingPolicy.
> {code}
> block.setNumBytes(dataXceiverServer.estimateBlockSize);
> {code}
> where
> {code}
> /**
>  * We need an estimate for block size to check if the disk partition has
>  * enough space. For now we set it to be the default block size set
>  * in the server side configuration, which is not ideal because the
>  * default block size should be a client-side configuration.
>  * A better solution is to include in the header the estimated block size,
>  * i.e. either the actual block size or the default block size.
>  */
> final long estimateBlockSize;
> {code}
> In most cases the writer can just pass the maximum expected block size to the
> DN instead of having to use the cluster default.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
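To illustrate why the estimate matters, here is a hedged, self-contained sketch (not actual Hadoop code; the `Volume` class and `chooseVolume` method are hypothetical stand-ins for FsVolumeSpi and a VolumeChoosingPolicy's free-space check). It shows how sizing the check from the writer-supplied value, e.g. the header's numBytes, rather than the server-side default can change which volume is eligible:

```java
import java.util.Arrays;
import java.util.List;

public class BlockSizeEstimateSketch {
    /** Hypothetical stand-in for a DataNode volume with a fixed free space. */
    static final class Volume {
        final String name;
        final long availableBytes;
        Volume(String name, long availableBytes) {
            this.name = name;
            this.availableBytes = availableBytes;
        }
    }

    /**
     * Simplified volume choice: pick the first volume with enough room,
     * mirroring how a choosing policy rejects volumes that cannot hold
     * a block of the estimated size.
     */
    static Volume chooseVolume(List<Volume> volumes, long estimatedBlockSize) {
        for (Volume v : volumes) {
            if (v.availableBytes >= estimatedBlockSize) {
                return v;
            }
        }
        throw new IllegalStateException(
            "no volume has " + estimatedBlockSize + " bytes free");
    }

    public static void main(String[] args) {
        long serverDefaultBlockSize = 128L * 1024 * 1024;  // cluster default
        long clientExpectedBlockSize = 16L * 1024 * 1024;  // writer's actual size

        List<Volume> volumes = Arrays.asList(
            new Volume("disk0", 64L * 1024 * 1024),   // fits 16 MB, not 128 MB
            new Volume("disk1", 512L * 1024 * 1024));

        // Using the cluster default over-estimates and skips disk0.
        Volume byDefault = chooseVolume(volumes, serverDefaultBlockSize);
        // Using the size the writer passed lets disk0 be used.
        Volume byHeader = chooseVolume(volumes, clientExpectedBlockSize);

        System.out.println(byDefault.name + " " + byHeader.name);  // prints "disk1 disk0"
    }
}
```

Under this (simplified) model, the over-estimate never picks a wrong volume, but it can needlessly reject nearly-full volumes that still have room for the block actually being written.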