[ https://issues.apache.org/jira/browse/HDFS-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17769147#comment-17769147 ]

ASF GitHub Bot commented on HDFS-17205:
---------------------------------------

Hexiaoqiao commented on code in PR #6112:
URL: https://github.com/apache/hadoop/pull/6112#discussion_r1337110886


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java:
##########
@@ -654,6 +655,52 @@ public void testReconfigureDecommissionBackoffMonitorParameters()
     }
   }
 
+  @Test
+  public void testReconfigureMinBlocksForWrite() throws Exception {
+    final NameNode nameNode = cluster.getNameNode(0);
+    final BlockManager bm = nameNode.getNamesystem().getBlockManager();
+    String key =
+        DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_KEY;
+    int defaultVal =
+        DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_DEFAULT;
+
+    // Ensure we cannot set any of the parameters negative
+    ReconfigurationException reconfigurationException =
+        LambdaTestUtils.intercept(ReconfigurationException.class,
+            () -> nameNode.reconfigurePropertyImpl(key, "-20"));
+    assertTrue(reconfigurationException.getCause() instanceof
+        IllegalArgumentException);
+    assertEquals(key+" = '-20' is invalid. It should be a "

Review Comment:
   Leave a blank space before and after the plus sign, the same as on L671, L680, and L681.
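   A minimal sketch of the requested spacing, applied only to the line quoted in the diff above (the remainder of the assertion is as in the PR and not repeated here):

       assertEquals(key + " = '-20' is invalid. It should be a "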





> HdfsServerConstants.MIN_BLOCKS_FOR_WRITE should be configurable
> ---------------------------------------------------------------
>
>                 Key: HDFS-17205
>                 URL: https://issues.apache.org/jira/browse/HDFS-17205
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, when allocating a new block, the NameNode chooses a datanode and 
> then a good storage of the given storage type on that datanode. The relevant 
> code is DatanodeDescriptor#chooseStorage4Block, which calculates the space 
> required for the write operation as 
> requiredSize = blockSize * HdfsServerConstants.MIN_BLOCKS_FOR_WRITE (default 
> is 1).
> {code:java}
> public DatanodeStorageInfo chooseStorage4Block(StorageType t,
>     long blockSize) {
>   final long requiredSize =
>       blockSize * HdfsServerConstants.MIN_BLOCKS_FOR_WRITE;
>   final long scheduledSize = blockSize * getBlocksScheduled(t);
>   long remaining = 0;
>   DatanodeStorageInfo storage = null;
>   for (DatanodeStorageInfo s : getStorageInfos()) {
>     if (s.getState() == State.NORMAL && s.getStorageType() == t) {
>       if (storage == null) {
>         storage = s;
>       }
>       long r = s.getRemaining();
>       if (r >= requiredSize) {
>         remaining += r;
>       }
>     }
>   }
>   if (requiredSize > remaining - scheduledSize) {
>     BlockPlacementPolicy.LOG.debug(
>         "The node {} does not have enough {} space (required={},"
>         + " scheduled={}, remaining={}).",
>         this, t, requiredSize, scheduledSize, remaining);
>     return null;
>   }
>   return storage;
> }
> {code}
> However, multiple namespaces may select storage on the same datanode and 
> write blocks to it at the same time. 
> In extreme cases, if only about one block's worth of space is left on the 
> chosen storage, the writer ends up without enough free space to write its 
> data, and a log similar to the following appears:
> {code:java}
> The volume [file:/disk1/] with the available space (=21129618 B) is less than 
> the block size (=268435456 B).  
> {code}
> To avoid this case, HdfsServerConstants.MIN_BLOCKS_FOR_WRITE should be made 
> configurable so that the parameter can be tuned in larger clusters.
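> A minimal sketch of the intended change (assuming the configuration key and 
> default quoted in the test snippet above; conf stands for the NameNode's 
> Configuration, and the actual wiring in the PR may differ):
> {code:java}
> // Sketch only: read the minimum-blocks-for-write factor from configuration
> // instead of the HdfsServerConstants constant; a reconfiguration handler
> // could update the stored value at runtime, as the new test exercises.
> int minBlocksForWrite = conf.getInt(
>     DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_KEY,
>     DFSConfigKeys.DFS_NAMENODE_BLOCKPLACEMENTPOLICY_MIN_BLOCKS_FOR_WRITE_DEFAULT);
> 
> // chooseStorage4Block would then use the configured value:
> final long requiredSize = blockSize * minBlocksForWrite;
> {code}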



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
