[ https://issues.apache.org/jira/browse/HDFS-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105851#comment-14105851 ]
Hudson commented on HDFS-6758:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1870 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1870/])
HDFS-6758. Block writer should pass the expected block size to DataXceiverServer (Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619275)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java

> Block writer should pass the expected block size to DataXceiverServer
> ---------------------------------------------------------------------
>
>                 Key: HDFS-6758
>                 URL: https://issues.apache.org/jira/browse/HDFS-6758
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, hdfs-client
>    Affects Versions: 2.4.1
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>             Fix For: 3.0.0, 2.6.0
>
>         Attachments: HDFS-6758.01.patch, HDFS-6758.02.patch
>
>
> DataXceiver initializes the block size to the default block size for the cluster. This size is later used by FsDatasetImpl when applying the VolumeChoosingPolicy:
> {code}
> block.setNumBytes(dataXceiverServer.estimateBlockSize);
> {code}
> where
> {code}
>   /**
>    * We need an estimate for block size to check if the disk partition has
>    * enough space. For now we set it to be the default block size set
>    * in the server side configuration, which is not ideal because the
>    * default block size should be a client-side configuration.
>    * A better solution is to include in the header the estimated block size,
>    * i.e. either the actual block size or the default block size.
>    */
>   final long estimateBlockSize;
> {code}
> In most cases the writer can just pass the maximum expected block size to the DN instead of having to use the cluster default.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
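The description above suggests carrying the writer's expected block size in the write-request header so the DataNode can use it for space estimation rather than assuming the server-side default. A minimal sketch of that fallback logic is below; it is not the actual HDFS code, and the names `BlockSizeHintSketch` and `chooseEstimate` are illustrative only. A hint of 0 stands in for "writer sent no hint".

```java
// Sketch only: illustrates the block-size-hint fallback described in the
// issue, not the real DataXceiver/DataXceiverServer implementation.
public class BlockSizeHintSketch {
    // Server-side default block size (128 MB), analogous to the value
    // DataXceiverServer.estimateBlockSize was initialized with.
    static final long DEFAULT_BLOCK_SIZE = 128L * 1024 * 1024;

    // The writer's maximum expected block size travels in the request header;
    // a non-positive value means "no hint", so fall back to the server default.
    static long chooseEstimate(long blockSizeHintFromHeader) {
        return blockSizeHintFromHeader > 0 ? blockSizeHintFromHeader
                                           : DEFAULT_BLOCK_SIZE;
    }

    public static void main(String[] args) {
        // Client created the file with a 64 MB block size: the DN uses the hint.
        System.out.println(chooseEstimate(64L * 1024 * 1024));
        // No hint supplied: the DN falls back to its configured default.
        System.out.println(chooseEstimate(0));
    }
}
```

With the hint in place, the VolumeChoosingPolicy checks free space against the size the writer actually intends, which matters when a file's block size differs from the cluster default.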