[ https://issues.apache.org/jira/browse/HDFS-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17769754#comment-17769754 ]

ASF GitHub Bot commented on HDFS-17205:
---------------------------------------

ayushtkn commented on code in PR #6112:
URL: https://github.com/apache/hadoop/pull/6112#discussion_r1339106383


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java:
##########
@@ -42,6 +42,7 @@
 
 @InterfaceAudience.Private
 public interface HdfsServerConstants {
+  // Will set by {@code DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_KEY}.

Review Comment:
   nit
   ``Will be set by``



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java:
##########
@@ -2671,6 +2676,18 @@ private String reconfigureDecommissionBackoffMonitorParameters(
     }
   }
 
+  private String reconfigureMinBlocksForWrite(String property, String newValue)
+      throws ReconfigurationException {
+    try {
+      int newSetting = adjustNewVal(
+          DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_DEFAULT, newValue);
+      this.namesystem.getBlockManager().setMinBlocksForWrite(newSetting);

Review Comment:
   ``this.namesystem`` isn't required; it can be just ``namesystem.getBlockManager()``?
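
A standalone sketch of the reconfigure pattern in this hunk (the class and helper names here are assumptions mirroring the diff, not the actual Hadoop classes): parse the new value, fall back to the default when it is null, validate, then apply it to the block manager.

```java
import java.util.function.IntConsumer;

// Sketch only: in Hadoop, this logic lives in NameNode and takes a
// ReconfigurationException path on bad input; here we keep it minimal.
public class ReconfigureSketch {
    static final int MIN_BLOCKS_FOR_WRITE_DEFAULT = 1; // assumed default

    // Mirrors the adjustNewVal(default, newValue) helper referenced in the
    // diff: a null newValue means "reset to the default".
    static int adjustNewVal(int defaultVal, String newValue) {
        return newValue == null ? defaultVal : Integer.parseInt(newValue);
    }

    // apply stands in for blockManager.setMinBlocksForWrite(...); the
    // effective value is returned as a string, as reconfigure methods do.
    static String reconfigureMinBlocksForWrite(String newValue, IntConsumer apply) {
        int newSetting = adjustNewVal(MIN_BLOCKS_FOR_WRITE_DEFAULT, newValue);
        apply.accept(newSetting);
        return String.valueOf(newSetting);
    }
}
```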



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -5683,4 +5683,16 @@ public void setExcludeSlowNodesEnabled(boolean enable) {
   public boolean getExcludeSlowNodesEnabled(BlockType blockType) {
     return placementPolicies.getPolicy(blockType).getExcludeSlowNodesEnabled();
   }
+
+  public void setMinBlocksForWrite(int minBlocksForWrite) {
+    ensurePositiveInt(minBlocksForWrite,
+        DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_KEY);

Review Comment:
   nit
   ``DFSConfigKeys.`` isn't required; there is already a static import at the top:
   ```
   L20: import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
   ```
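
A minimal sketch of the validate-then-set pattern in the ``setMinBlocksForWrite`` hunk above; ``ensurePositiveInt`` here is a hypothetical stand-in for Hadoop's helper, and the key string is illustrative, not the real configuration key value.

```java
public class PositiveIntSetting {
    private volatile int minBlocksForWrite = 1; // assumed default multiplier

    // Stand-in for Hadoop's validation helper: reject non-positive values
    // with the offending config key in the message.
    static void ensurePositiveInt(int value, String key) {
        if (value <= 0) {
            throw new IllegalArgumentException(
                key + " must be a positive integer, got " + value);
        }
    }

    public void setMinBlocksForWrite(int n) {
        ensurePositiveInt(n, "DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_KEY");
        minBlocksForWrite = n;
    }

    public int getMinBlocksForWrite() {
        return minBlocksForWrite;
    }
}
```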





> HdfsServerConstants.MIN_BLOCKS_FOR_WRITE should be configurable
> ---------------------------------------------------------------
>
>                 Key: HDFS-17205
>                 URL: https://issues.apache.org/jira/browse/HDFS-17205
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, when allocating a new block, the NameNode chooses a datanode and 
> then a good storage of the given storage type on that datanode; the specific 
> calling code is DatanodeDescriptor#chooseStorage4Block. There the space 
> required for the write is calculated as 
> requiredSize = blockSize * HdfsServerConstants.MIN_BLOCKS_FOR_WRITE (default 
> is 1).
> {code:java}
> public DatanodeStorageInfo chooseStorage4Block(StorageType t,
>     long blockSize) {
>   final long requiredSize =
>       blockSize * HdfsServerConstants.MIN_BLOCKS_FOR_WRITE;
>   final long scheduledSize = blockSize * getBlocksScheduled(t);
>   long remaining = 0;
>   DatanodeStorageInfo storage = null;
>   for (DatanodeStorageInfo s : getStorageInfos()) {
>     if (s.getState() == State.NORMAL && s.getStorageType() == t) {
>       if (storage == null) {
>         storage = s;
>       }
>       long r = s.getRemaining();
>       if (r >= requiredSize) {
>         remaining += r;
>       }
>     }
>   }
>   if (requiredSize > remaining - scheduledSize) {
>     BlockPlacementPolicy.LOG.debug(
>         "The node {} does not have enough {} space (required={},"
>         + " scheduled={}, remaining={}).",
>         this, t, requiredSize, scheduledSize, remaining);
>     return null;
>   }
>   return storage;
> }
> {code}
> But multiple NameSpaces may select storage on the same datanode to write 
> blocks at the same time. In the extreme case where only one block's worth of 
> space is left on that storage, the writer can end up without enough free 
> space for its data, and a log similar to the following appears:
> {code:java}
> The volume [file:/disk1/] with the available space (=21129618 B) is less than 
> the block size (=268435456 B).  
> {code}
> To avoid this case, HdfsServerConstants.MIN_BLOCKS_FOR_WRITE should be 
> configurable, so that the parameter can be raised in larger clusters.
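
The headroom check quoted above can be sketched standalone to show how a configurable multiplier changes the outcome (the class and method names below are illustrative; the real check lives in DatanodeDescriptor#chooseStorage4Block):

```java
public class MinBlocksForWriteSketch {
    // Default mirrors HdfsServerConstants.MIN_BLOCKS_FOR_WRITE = 1.
    static long requiredSize(long blockSize, int minBlocksForWrite) {
        return blockSize * minBlocksForWrite;
    }

    // Same predicate as the quoted code: the node qualifies only if
    // requiredSize <= remaining - scheduledSize. With the default of 1,
    // a volume with barely one block of free space still passes; a larger
    // multiplier keeps headroom against concurrent writers from multiple
    // NameSpaces landing on an almost-full volume.
    static boolean hasEnoughSpace(long remaining, long scheduledSize,
                                  long blockSize, int minBlocksForWrite) {
        return requiredSize(blockSize, minBlocksForWrite)
            <= remaining - scheduledSize;
    }
}
```

For example, with a 256 MiB block size and about 286 MB remaining, the check passes at multiplier 1 but skips the node at multiplier 2.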



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
