[GitHub] [hadoop] ayushtkn commented on a change in pull request #2746: HDFS-15875. Check whether file is being truncated before truncate

2021-03-08 Thread GitBox


ayushtkn commented on a change in pull request #2746:
URL: https://github.com/apache/hadoop/pull/2746#discussion_r589419978



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##
@@ -631,10 +698,10 @@ public void testTruncateFailure() throws IOException {
 {
   try {
 fs.truncate(p, 0);
-fail("Truncate must fail since a trancate is already in pregress.");
+fail("Truncate must fail since a truncate is already in progress.");
   } catch (IOException expected) {
 GenericTestUtils.assertExceptionContains(
-"Failed to TRUNCATE_FILE", expected);
+"/dir/testTruncateFailure is being truncated", expected);
   }

Review comment:
   What is the difference between this case and ours? Why doesn't this 
exception trigger in our case? And can we fold our fix into the check 
that throws this exception?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a change in pull request #2746: HDFS-15875. Check whether file is being truncated before truncate

2021-03-05 Thread GitBox


ayushtkn commented on a change in pull request #2746:
URL: https://github.com/apache/hadoop/pull/2746#discussion_r588195715



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##
@@ -218,6 +219,65 @@ public void testSnapshotTruncateThenDeleteSnapshot() 
throws IOException {
 fs.delete(dir, true);
   }
 
+
+  /**
+   * Test truncate twice together on a file
+   */
+  @Test(timeout=9)
+  public void testTruncateTwiceTogether() throws Exception {
+
+Path dir = new Path("/testTruncateTwiceTogether");
+fs.mkdirs(dir);
+final Path p = new Path(dir, "file");
+final byte[] data = new byte[100 * BLOCK_SIZE];
+ThreadLocalRandom.current().nextBytes(data);
+writeContents(data, data.length, p);
+
+DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();

Review comment:
   Is this used somewhere?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##
@@ -218,6 +219,65 @@ public void testSnapshotTruncateThenDeleteSnapshot() 
throws IOException {
 fs.delete(dir, true);
   }
 
+
+  /**
+   * Test truncate twice together on a file
+   */
+  @Test(timeout=9)
+  public void testTruncateTwiceTogether() throws Exception {
+
+Path dir = new Path("/testTruncateTwiceTogether");
+fs.mkdirs(dir);
+final Path p = new Path(dir, "file");
+final byte[] data = new byte[100 * BLOCK_SIZE];
+ThreadLocalRandom.current().nextBytes(data);
+writeContents(data, data.length, p);
+
+DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();
+DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+  @Override
+  public void delay() {
+try {
+  // Bigger than soft lease period.
+  Thread.sleep(65000);
+} catch (InterruptedException e) {
+  e.printStackTrace();

Review comment:
   No need to print the trace, if possible add Exception to method 
signature and remove this try-catch, else ignore the exception
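
A self-contained sketch of the suggested pattern. Since the overridden `delay()` presumably cannot declare checked exceptions, the `InterruptedException` is caught and deliberately ignored; the `delay` helper below is a hypothetical stand-in for the fault-injector method, not the real Hadoop API:

```java
public class IgnoreInterruptExample {
  // Hypothetical stand-in for an overridden delay() that cannot declare
  // checked exceptions: catch InterruptedException and ignore it, rather
  // than printing a stack trace the test never reads.
  static void delay(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException ignored) {
      // swallow: the test only needs the pause, not the interrupt status
    }
  }

  public static void main(String[] args) {
    long start = System.nanoTime();
    delay(50);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    // Thread.sleep guarantees at least the requested pause
    System.out.println(elapsedMs >= 50);
  }
}
```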

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
##
@@ -218,6 +219,65 @@ public void testSnapshotTruncateThenDeleteSnapshot() 
throws IOException {
 fs.delete(dir, true);
   }
 
+
+  /**
+   * Test truncate twice together on a file
+   */
+  @Test(timeout=9)
+  public void testTruncateTwiceTogether() throws Exception {
+
+Path dir = new Path("/testTruncateTwiceTogether");
+fs.mkdirs(dir);
+final Path p = new Path(dir, "file");
+final byte[] data = new byte[100 * BLOCK_SIZE];
+ThreadLocalRandom.current().nextBytes(data);
+writeContents(data, data.length, p);
+
+DataNodeFaultInjector originInjector = DataNodeFaultInjector.get();
+DataNodeFaultInjector injector = new DataNodeFaultInjector() {
+  @Override
+  public void delay() {
+try {
+  // Bigger than soft lease period.
+  Thread.sleep(65000);
+} catch (InterruptedException e) {
+  e.printStackTrace();
+}
+  }
+};
+// Delay to recovery.
+DataNodeFaultInjector.set(injector);
+
+// Truncate by using different client name.
+Thread t = new Thread(() ->
+{
+  String hdfsCacheDisableKey = "fs.hdfs.impl.disable.cache";
+  boolean originCacheDisable =
+  conf.getBoolean(hdfsCacheDisableKey, false);
+  try {
+conf.setBoolean(hdfsCacheDisableKey, true);
+FileSystem fs1 = FileSystem.get(conf);
+fs1.truncate(p, data.length-1);
+} catch (IOException e) {
+  // ignore
+} finally{
+  conf.setBoolean(hdfsCacheDisableKey, originCacheDisable);
+}
+});
+t.start();
+t.join();
+Thread.sleep(6);
+try {
+  fs.truncate(p, data.length - 2);
+} catch (IOException e) {
+  //GenericTestUtils.assertExceptionContains("is being truncated.", e);
+}

Review comment:
   Can use LambdaTestUtils.
   
   ```
   LambdaTestUtils.intercept(RemoteException.class,
   "/testTruncateTwiceTogether/file is being truncated",
   () -> fs.truncate(p, data.length - 2));
   ```
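
For readers without the Hadoop test tree handy, a minimal stand-in for what `LambdaTestUtils.intercept` does: run a callable and fail unless it throws an exception of the expected type whose message contains the expected text. The `intercept` helper below is a hypothetical sketch, not the real Hadoop API:

```java
import java.util.concurrent.Callable;

public class InterceptSketch {
  // Sketch of an intercept-style assertion: returns the caught exception
  // when it matches the expected class and message fragment, otherwise
  // rethrows or fails the test.
  static <T> Exception intercept(Class<? extends Exception> clazz,
      String expected, Callable<T> call) throws Exception {
    try {
      call.call();
    } catch (Exception e) {
      if (clazz.isInstance(e) && e.getMessage().contains(expected)) {
        return e;
      }
      throw e;
    }
    throw new AssertionError("expected " + clazz.getSimpleName()
        + " was not thrown");
  }

  public static void main(String[] args) throws Exception {
    Exception e = intercept(IllegalStateException.class, "is being truncated",
        () -> { throw new IllegalStateException("/f is being truncated"); });
    System.out.println(e.getMessage());
  }
}
```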




