[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5564: HDFS-16985. delete local block file when FileNotFoundException occurred may lead to missing block.

2023-04-27 Thread via GitHub


Hexiaoqiao commented on code in PR #5564:
URL: https://github.com/apache/hadoop/pull/5564#discussion_r1178986957


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -1919,4 +1919,63 @@ public void delayDeleteReplica() {
       DataNodeFaultInjector.set(oldInjector);
     }
   }
+
+  /**
+   * Test the block file which is not found when disk with some exception.
+   * We expect:
+   * 1. block file wouldn't be deleted from disk.
+   * 2. block info would be removed from dn memory.
+   * 3. block would be reported to nn as missing block.
+   * 4. block would be recovered when disk back to normal.
+   */
+  @Test
+  public void tesInvalidateMissingBlock() throws Exception {
+    long blockSize = 1024;
+    int heatbeatInterval = 1;
+    HdfsConfiguration c = new HdfsConfiguration();
+    c.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heatbeatInterval);
+    c.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(c).
+        numDataNodes(1).build();
+    try {
+      cluster.waitActive();
+      DFSTestUtil.createFile(cluster.getFileSystem(), new Path("/a"),
+          blockSize, (short)1, 0);
+
+      String bpid = cluster.getNameNode().getNamesystem().getBlockPoolId();
+      DataNode dn = cluster.getDataNodes().get(0);
+      FsDatasetImpl fsdataset = (FsDatasetImpl) dn.getFSDataset();
+      List<ReplicaInfo> replicaInfos = fsdataset.getFinalizedBlocks(bpid);
+      assertEquals(1, replicaInfos.size());
+
+      ReplicaInfo replicaInfo = replicaInfos.get(0);
+      String blockPath = replicaInfo.getBlockURI().getPath();
+      String metaPath = replicaInfo.getMetadataURI().getPath();
+      File blockFile = new File(blockPath);
+      File metaFile = new File(metaPath);
+
+      // Mock local block file not found when disk with some exception.
+      fsdataset.invalidateMissingBlock(bpid, replicaInfo);
+
+      // assert local block file wouldn't be deleted from disk.

Review Comment:
   Please use a capital letter at the beginning of the sentences and a period at the end for lines 1960, 1962 and 1970.
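
   For reference, the assertions behind the four expectations in the javadoc are truncated from the quoted hunk. A minimal sketch of how they might read once the comments on lines 1960, 1962 and 1970 are capitalized and terminated with a period (the variable names come from the hunk above; the use of GenericTestUtils.waitFor and FSNamesystem#getMissingBlocksCount is an assumption for illustration, not the PR's actual code):

      // Assert the local block file would not be deleted from disk.
      assertTrue(blockFile.exists());
      assertTrue(metaFile.exists());

      // Assert the block info would be removed from the DataNode's memory.
      assertEquals(0, fsdataset.getFinalizedBlocks(bpid).size());

      // Assert the block would eventually be reported to the NameNode as a
      // missing block (hypothetical check).
      GenericTestUtils.waitFor(
          () -> cluster.getNameNode().getNamesystem().getMissingBlocksCount() == 1,
          100, 10000);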



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -1919,4 +1919,63 @@ public void delayDeleteReplica() {
       DataNodeFaultInjector.set(oldInjector);
     }
   }
+
+  /**
+   * Test the block file which is not found when disk with some exception.
+   * We expect:
+   * 1. block file wouldn't be deleted from disk.
+   * 2. block info would be removed from dn memory.
+   * 3. block would be reported to nn as missing block.
+   * 4. block would be recovered when disk back to normal.
+   */
+  @Test
+  public void tesInvalidateMissingBlock() throws Exception {
+    long blockSize = 1024;
+    int heatbeatInterval = 1;
+    HdfsConfiguration c = new HdfsConfiguration();
+    c.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heatbeatInterval);

Review Comment:
   Any purpose to set the heartbeat interval and block size here?
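
   One plausible reading, offered as an assumption rather than anything stated in the PR: both settings shape the test fixture rather than the behavior under test. Annotated with the same keys and values as the quoted hunk:

      // A one-second heartbeat lets the DataNode report state to the NameNode
      // quickly, so the missing-block condition surfaces without long waits.
      HdfsConfiguration conf = new HdfsConfiguration();
      conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
      // A 1024-byte block size keeps the 1024-byte file "/a" to exactly one
      // block, leaving a single finalized replica to invalidate and track.
      conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 1024);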






[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5564: HDFS-16985. delete local block file when FileNotFoundException occurred may lead to missing block.

2023-04-25 Thread via GitHub


Hexiaoqiao commented on code in PR #5564:
URL: https://github.com/apache/hadoop/pull/5564#discussion_r1177275219


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -116,10 +103,7 @@
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertSame;
 import static org.junit.Assert.fail;
-import static org.mockito.ArgumentMatchers.anyList;
-import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.ArgumentMatchers.anyString;
-import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.ArgumentMatchers.*;

Review Comment:
   Same as the last comment.
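
   Applying that here would mean restoring the explicit static imports this hunk removes, plus whichever additional matchers the updated test actually uses (not recoverable from this hunk), for example:

      import static org.mockito.ArgumentMatchers.any;
      import static org.mockito.ArgumentMatchers.anyList;
      import static org.mockito.ArgumentMatchers.anyString;
      import static org.mockito.ArgumentMatchers.eq;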



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -31,10 +31,7 @@
 
 import org.apache.hadoop.fs.DF;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
-import org.apache.hadoop.hdfs.server.datanode.DataNodeFaultInjector;
-import org.apache.hadoop.hdfs.server.datanode.DataSetLockManager;
-import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner;
-import org.apache.hadoop.hdfs.server.datanode.LocalReplica;
+import org.apache.hadoop.hdfs.server.datanode.*;

Review Comment:
   Please don't import using wildcard.
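
   Concretely, that means keeping single-class imports like the ones this hunk removes (any additional org.apache.hadoop.hdfs.server.datanode classes the new test needs would be listed the same way):

      import org.apache.hadoop.hdfs.server.datanode.DataNodeFaultInjector;
      import org.apache.hadoop.hdfs.server.datanode.DataSetLockManager;
      import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner;
      import org.apache.hadoop.hdfs.server.datanode.LocalReplica;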



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -1919,4 +1903,54 @@ public void delayDeleteReplica() {
       DataNodeFaultInjector.set(oldInjector);
     }
   }
+  @Test
+  public void tesInvalidateMissingBlock() throws Exception {
+    long blockSize = 1024;
+    int heatbeatInterval = 1;
+    HdfsConfiguration c = new HdfsConfiguration();
+    c.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, heatbeatInterval);
+    c.setLong(DFS_BLOCK_SIZE_KEY, blockSize);
+    MiniDFSCluster cluster = new MiniDFSCluster.Builder(c).
+        numDataNodes(1).build();
+    try {
+      cluster.waitActive();
+      DFSTestUtil.createFile(cluster.getFileSystem(), new Path("/a"),
+          blockSize, (short)1, 0);
+
+      String bpid = cluster.getNameNode().getNamesystem().getBlockPoolId();
+      DataNode dn = cluster.getDataNodes().get(0);
+      FsDatasetImpl fsdataset = (FsDatasetImpl) dn.getFSDataset();
+      List<ReplicaInfo> replicaInfos = fsdataset.getFinalizedBlocks(bpid);
+      assertEquals(1, replicaInfos.size());
+
+      ReplicaInfo replicaInfo = replicaInfos.get(0);
+      String blockPath = replicaInfo.getBlockURI().getPath();
+      String metaPath = replicaInfo.getMetadataURI().getPath();
+      File blockFile = new File(blockPath);
+      File metaFile = new File(metaPath);
+
+      // mock local block file not found when disk with some exception

Review Comment:
   Please use a capital letter at the beginning of the sentence and a period at the end.



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java:
##
@@ -1919,4 +1903,54 @@ public void delayDeleteReplica() {
       DataNodeFaultInjector.set(oldInjector);
     }
   }
+  @Test

Review Comment:
   1. Leave one empty line between methods.
   2. Please add some javadoc for this test about what it will cover.


