[hadoop] branch branch-3.3 updated (a7c1fad0c9a -> 703158c9c66)

2023-03-01 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from a7c1fad0c9a HDFS-16923. [SBN read] getlisting RPC to observer will throw NPE if path does not exist (#5400)
 add 703158c9c66 HDFS-16896 clear ignoredNodes list when we clear deadnode list on ref… (#5322) (#5444)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hdfs/DFSInputStream.java | 34 ++
 .../hdfs/TestDFSInputStreamBlockLocations.java | 23 +++
 .../java/org/apache/hadoop/hdfs/TestPread.java |  4 ++-
 3 files changed, 55 insertions(+), 6 deletions(-)
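
For context, HDFS-16896 changes DFSInputStream so that the ignoredNodes list is cleared together with the deadNodes list whenever block locations are refetched; otherwise a datanode skipped during a hedged read could stay excluded forever. A minimal, self-contained sketch of that bookkeeping (the class and method names below are illustrative, not the actual DFSInputStream internals):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: a reader that temporarily excludes datanodes.
// When block locations are refetched, BOTH exclusion lists are cleared,
// so nodes skipped earlier become eligible again.
class BlockReaderState {
    private final Set<String> deadNodes = new HashSet<>();
    private final Set<String> ignoredNodes = new HashSet<>();

    void markDead(String node) { deadNodes.add(node); }
    void ignoreForHedgedRead(String node) { ignoredNodes.add(node); }

    boolean isUsable(String node) {
        return !deadNodes.contains(node) && !ignoredNodes.contains(node);
    }

    // Before the fix, only deadNodes was cleared on refetch, so a node
    // stuck in ignoredNodes could never be retried.
    void clearStaleState() {
        deadNodes.clear();
        ignoredNodes.clear();
    }
}
```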


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HDFS-16923. [SBN read] getlisting RPC to observer will throw NPE if path does not exist (#5400)

2023-03-01 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new a7c1fad0c9a HDFS-16923. [SBN read] getlisting RPC to observer will throw NPE if path does not exist (#5400)
a7c1fad0c9a is described below

commit a7c1fad0c9a675195579c971962ddd32e5d9fc51
Author: ZanderXu 
AuthorDate: Thu Mar 2 08:18:38 2023 +0800

    HDFS-16923. [SBN read] getlisting RPC to observer will throw NPE if path does not exist (#5400)

Signed-off-by: Erik Krogen 

(cherry picked from commit 6bd24448154fcd3ab9099d7783cc7f7f76c61e08)
---
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java |  2 +-
 .../hadoop/hdfs/server/namenode/ha/TestObserverNode.java | 12 
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 437ffab6727..9855b434e9c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4090,7 +4090,7 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
   logAuditEvent(false, operationName, src);
   throw e;
 }
-if (needLocation && isObserver()) {
+if (dl != null && needLocation && isObserver()) {
   for (HdfsFileStatus fs : dl.getPartialListing()) {
 if (fs instanceof HdfsLocatedFileStatus) {
   LocatedBlocks lbs = ((HdfsLocatedFileStatus) fs).getLocatedBlocks();
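
The one-line fix above adds a null check before dereferencing the listing: on the observer, the listing can come back null when the path was deleted concurrently. A self-contained sketch of the same guard (the countLocatedEntries helper and String entries are stand-ins for illustration, not the real FSNamesystem types):

```java
import java.util.List;

class ListingGuard {
    // Mirrors the fixed condition in FSNamesystem#getListing: the directory
    // listing may be null if the path no longer exists, so the null check
    // must lead the short-circuiting && chain before any dereference of dl.
    static int countLocatedEntries(List<String> dl, boolean needLocation,
            boolean isObserver) {
        int located = 0;
        if (dl != null && needLocation && isObserver) {
            for (String entry : dl) {
                located++;  // the real code inspects each entry's located blocks here
            }
        }
        return located;
    }
}
```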
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java
index d7e2d118549..178f2fcde90 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverNode.java
@@ -64,6 +64,7 @@ import org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter;
 import org.apache.hadoop.hdfs.server.namenode.TestFsck;
 import org.apache.hadoop.hdfs.tools.GetGroups;
 import org.apache.hadoop.ipc.ObserverRetryOnActiveException;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.util.concurrent.HadoopExecutors;
 import org.junit.After;
@@ -608,6 +609,17 @@ public class TestObserverNode {
 }
   }
 
+  @Test
+  public void testGetListingForDeletedDir() throws Exception {
+    Path path = new Path("/dir1/dir2/testFile");
+    dfs.create(path).close();
+
+    assertTrue(dfs.delete(new Path("/dir1/dir2"), true));
+
+    LambdaTestUtils.intercept(FileNotFoundException.class,
+        () -> dfs.listLocatedStatus(new Path("/dir1/dir2")));
+  }
+
   @Test
   public void testSimpleReadEmptyDirOrFile() throws IOException {
 // read empty dir

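The new test relies on LambdaTestUtils.intercept, Hadoop's helper for asserting that a lambda throws a particular exception type. A simplified stand-alone version of that idiom (the real helper also supports message matching and returns the caught exception; this Intercept class is only an illustration):

```java
import java.util.concurrent.Callable;

class Intercept {
    // Run eval and assert that it throws an instance of clazz.
    // Any other outcome (no exception, or a different type) fails the test.
    static <E extends Throwable> E intercept(Class<E> clazz, Callable<?> eval)
            throws Exception {
        try {
            eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t);  // caller may inspect the exception
            }
            throw new AssertionError("Wrong exception type: " + t, t);
        }
        throw new AssertionError(
            "Expected " + clazz.getName() + " but no exception was thrown");
    }
}
```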




[hadoop] branch trunk updated (e1ca466bdbf -> 6bd24448154)

2023-03-01 Thread xkrogen
This is an automated email from the ASF dual-hosted git repository.

xkrogen pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from e1ca466bdbf HADOOP-18648. Avoid loading kms log4j properties dynamically by KMSWebServer (#5441)
 add 6bd24448154 HDFS-16923. [SBN read] getlisting RPC to observer will throw NPE if path does not exist (#5400)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java |  2 +-
 .../hadoop/hdfs/server/namenode/ha/TestObserverNode.java | 12 
 2 files changed, 13 insertions(+), 1 deletion(-)





[hadoop] branch trunk updated (162288bc0af -> e1ca466bdbf)

2023-03-01 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 162288bc0af HDFS-16896 clear ignoredNodes list when we clear deadnode list on ref… (#5322)
 add e1ca466bdbf HADOOP-18648. Avoid loading kms log4j properties dynamically by KMSWebServer (#5441)

No new revisions were added by this update.

Summary of changes:
 .../crypto/key/kms/server/KMSConfiguration.java| 39 --
 .../hadoop/crypto/key/kms/server/KMSWebServer.java |  2 +-
 .../src/main/libexec/shellprofile.d/hadoop-kms.sh  |  2 ++
 3 files changed, 16 insertions(+), 27 deletions(-)





[hadoop] branch trunk updated (2ab7eb4caa9 -> 162288bc0af)

2023-03-01 Thread omalley
This is an automated email from the ASF dual-hosted git repository.

omalley pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 2ab7eb4caa9 HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)
 add 162288bc0af HDFS-16896 clear ignoredNodes list when we clear deadnode list on ref… (#5322)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hdfs/DFSInputStream.java | 34 ++
 .../hdfs/TestDFSInputStreamBlockLocations.java | 23 +++
 .../java/org/apache/hadoop/hdfs/TestPread.java |  4 ++-
 3 files changed, 55 insertions(+), 6 deletions(-)





[hadoop] branch branch-3.3 updated: HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)

2023-03-01 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 91ce13ea013 HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)
91ce13ea013 is described below

commit 91ce13ea013ac7a20675ed28341871a2ba019a46
Author: Viraj Jasani 
AuthorDate: Wed Mar 1 10:53:10 2023 -0800

HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)

Contributed by Viraj Jasani
---
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 24 --
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
index 0e04702f10a..711ca5fae53 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
@@ -927,16 +927,14 @@ public class TestFsDatasetImpl {
  @Test(timeout = 30000)
   public void testReportBadBlocks() throws Exception {
 boolean threwException = false;
-MiniDFSCluster cluster = null;
-try {
-  Configuration config = new HdfsConfiguration();
-  cluster = new MiniDFSCluster.Builder(config).numDataNodes(1).build();
+final Configuration config = new HdfsConfiguration();
+try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(config)
+.numDataNodes(1).build()) {
   cluster.waitActive();
 
   Assert.assertEquals(0, cluster.getNamesystem().getCorruptReplicaBlocks());
   DataNode dataNode = cluster.getDataNodes().get(0);
-  ExtendedBlock block =
-  new ExtendedBlock(cluster.getNamesystem().getBlockPoolId(), 0);
+  ExtendedBlock block = new ExtendedBlock(cluster.getNamesystem().getBlockPoolId(), 0);
   try {
 // Test the reportBadBlocks when the volume is null
 dataNode.reportBadBlocks(block);
@@ -953,15 +951,11 @@ public class TestFsDatasetImpl {
 
   block = DFSTestUtil.getFirstBlock(fs, filePath);
   // Test for the overloaded method reportBadBlocks
-  dataNode.reportBadBlocks(block, dataNode.getFSDataset()
-  .getFsVolumeReferences().get(0));
-  Thread.sleep(3000);
-  BlockManagerTestUtil.updateState(cluster.getNamesystem()
-  .getBlockManager());
-  // Verify the bad block has been reported to namenode
-  Assert.assertEquals(1, cluster.getNamesystem().getCorruptReplicaBlocks());
-} finally {
-  cluster.shutdown();
+  dataNode.reportBadBlocks(block, dataNode.getFSDataset().getFsVolumeReferences().get(0));
+  DataNodeTestUtils.triggerHeartbeat(dataNode);
+  BlockManagerTestUtil.updateState(cluster.getNamesystem().getBlockManager());
+  assertEquals("Corrupt replica blocks could not be reflected with the heartbeat", 1,
+      cluster.getNamesystem().getCorruptReplicaBlocks());
 }
   }
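
The core of the test fix above is replacing Thread.sleep(3000) with DataNodeTestUtils.triggerHeartbeat plus an explicit state update, i.e. waiting on an event instead of a fixed delay. A generic sketch of that "trigger, then assert" shape, using a CountDownLatch as the stand-in for the heartbeat signal (all names here are illustrative, not Hadoop APIs):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class HeartbeatStyleTest {
    // Instead of sleeping and hoping the background work finished,
    // expose an explicit signal the test can await with a bound.
    static int runReportAndWait() throws InterruptedException {
        AtomicInteger corruptBlocks = new AtomicInteger();
        CountDownLatch reported = new CountDownLatch(1);

        Thread reporter = new Thread(() -> {
            corruptBlocks.incrementAndGet(); // simulated bad-block report
            reported.countDown();            // analogous to the triggered heartbeat
        });
        reporter.start();

        // Bounded wait on the event, not an unconditional sleep.
        if (!reported.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("report was never delivered");
        }
        return corruptBlocks.get();
    }
}
```

This removes both the flakiness (the sleep might be too short on a loaded machine) and the wasted time (it is always the full 3 seconds on a fast one).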
 





[hadoop] branch trunk updated: HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)

2023-03-01 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2ab7eb4caa9 HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)
2ab7eb4caa9 is described below

commit 2ab7eb4caa9fe012e671434c5bce0e7169440e16
Author: Viraj Jasani 
AuthorDate: Wed Mar 1 10:53:10 2023 -0800

HDFS-16935. Fix TestFsDatasetImpl#testReportBadBlocks (#5432)


Contributed by Viraj Jasani
---
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 24 --
 1 file changed, 9 insertions(+), 15 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
index d6f42f3d020..b744a6fa586 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
@@ -1075,16 +1075,14 @@ public class TestFsDatasetImpl {
  @Test(timeout = 30000)
   public void testReportBadBlocks() throws Exception {
 boolean threwException = false;
-MiniDFSCluster cluster = null;
-try {
-  Configuration config = new HdfsConfiguration();
-  cluster = new MiniDFSCluster.Builder(config).numDataNodes(1).build();
+final Configuration config = new HdfsConfiguration();
+try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(config)
+.numDataNodes(1).build()) {
   cluster.waitActive();
 
   Assert.assertEquals(0, cluster.getNamesystem().getCorruptReplicaBlocks());
   DataNode dataNode = cluster.getDataNodes().get(0);
-  ExtendedBlock block =
-  new ExtendedBlock(cluster.getNamesystem().getBlockPoolId(), 0);
+  ExtendedBlock block = new ExtendedBlock(cluster.getNamesystem().getBlockPoolId(), 0);
   try {
 // Test the reportBadBlocks when the volume is null
 dataNode.reportBadBlocks(block);
@@ -1101,15 +1099,11 @@ public class TestFsDatasetImpl {
 
   block = DFSTestUtil.getFirstBlock(fs, filePath);
   // Test for the overloaded method reportBadBlocks
-  dataNode.reportBadBlocks(block, dataNode.getFSDataset()
-  .getFsVolumeReferences().get(0));
-  Thread.sleep(3000);
-  BlockManagerTestUtil.updateState(cluster.getNamesystem()
-  .getBlockManager());
-  // Verify the bad block has been reported to namenode
-  Assert.assertEquals(1, cluster.getNamesystem().getCorruptReplicaBlocks());
-} finally {
-  cluster.shutdown();
+  dataNode.reportBadBlocks(block, dataNode.getFSDataset().getFsVolumeReferences().get(0));
+  DataNodeTestUtils.triggerHeartbeat(dataNode);
+  BlockManagerTestUtil.updateState(cluster.getNamesystem().getBlockManager());
+  assertEquals("Corrupt replica blocks could not be reflected with the heartbeat", 1,
+      cluster.getNamesystem().getCorruptReplicaBlocks());
 }
   }
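
The other half of the cleanup, in both branches, is moving MiniDFSCluster into try-with-resources so the cluster shuts down even when an assertion fails mid-test. The same pattern with a hypothetical stand-in resource:

```java
class MiniClusterStyle {
    // Stand-in for MiniDFSCluster: just an AutoCloseable test resource.
    static class FakeCluster implements AutoCloseable {
        boolean running = true;

        @Override
        public void close() {
            running = false;
        }
    }

    // With try-with-resources, the cluster is closed whether the test body
    // succeeds or fails, which is why the old explicit
    // `finally { cluster.shutdown(); }` block could be removed.
    static FakeCluster runAndReturnCluster(boolean failInBody) {
        FakeCluster observed = null;
        try (FakeCluster cluster = new FakeCluster()) {
            observed = cluster;
            if (failInBody) {
                throw new AssertionError("simulated test failure");
            }
        } catch (AssertionError ignored) {
            // by the time we get here, close() has already run
        }
        return observed;
    }
}
```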
 





[hadoop] branch trunk updated: MAPREDUCE-7434. Fix ShuffleHandler tests. Contributed by Tamas Domok

2023-03-01 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8f6be3678d1 MAPREDUCE-7434. Fix ShuffleHandler tests. Contributed by Tamas Domok
8f6be3678d1 is described below

commit 8f6be3678d1113e3e7f5477c357fc81f62d460b8
Author: Szilard Nemeth 
AuthorDate: Wed Mar 1 16:10:05 2023 +0100

MAPREDUCE-7434. Fix ShuffleHandler tests. Contributed by Tamas Domok
---
 .../hadoop/mapred/TestShuffleChannelHandler.java   |  2 +-
 .../apache/hadoop/mapred/TestShuffleHandler.java   | 44 +++---
 .../hadoop/mapred/TestShuffleHandlerBase.java  | 29 +++---
 3 files changed, 47 insertions(+), 28 deletions(-)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleChannelHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleChannelHandler.java
index 7fedc7bb2dc..66fa3de94f8 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleChannelHandler.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleChannelHandler.java
@@ -225,7 +225,7 @@ public class TestShuffleChannelHandler extends TestShuffleHandlerBase {
 final ShuffleTest t = createShuffleTest();
 final EmbeddedChannel shuffle = t.createShuffleHandlerChannelFileRegion();
 
-String dataFile = getDataFile(tempDir.toAbsolutePath().toString(), TEST_ATTEMPT_2);
+String dataFile = getDataFile(TEST_USER, tempDir.toAbsolutePath().toString(), TEST_ATTEMPT_2);
 assertTrue("should delete", new File(dataFile).delete());
 
 FullHttpRequest req = t.createRequest(getUri(TEST_JOB_ID, 0,
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
index 37a9210286c..cc46b49b113 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/test/java/org/apache/hadoop/mapred/TestShuffleHandler.java
@@ -29,6 +29,7 @@ import static org.apache.hadoop.test.MetricsAsserts.assertCounter;
 import static org.apache.hadoop.test.MetricsAsserts.assertGauge;
 import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;
@@ -41,6 +42,7 @@ import java.io.DataInputStream;
 import java.io.File;
 import java.io.FileInputStream;
 import java.io.IOException;
+import java.io.InputStream;
 import java.io.InputStreamReader;
 import java.net.HttpURLConnection;
 import java.net.MalformedURLException;
@@ -159,7 +161,7 @@ public class TestShuffleHandler extends TestShuffleHandlerBase {
 shuffleHandler.init(conf);
 shuffleHandler.start();
 final String port = shuffleHandler.getConfig().get(SHUFFLE_PORT_CONFIG_KEY);
-final SecretKey secretKey = shuffleHandler.addTestApp();
+final SecretKey secretKey = shuffleHandler.addTestApp(TEST_USER);
 
 // setup connections
 HttpURLConnection[] conns = new HttpURLConnection[connAttempts];
@@ -237,7 +239,7 @@ public class TestShuffleHandler extends TestShuffleHandlerBase {
 shuffleHandler.init(conf);
 shuffleHandler.start();
 final String port = shuffleHandler.getConfig().get(ShuffleHandler.SHUFFLE_PORT_CONFIG_KEY);
-final SecretKey secretKey = shuffleHandler.addTestApp();
+final SecretKey secretKey = shuffleHandler.addTestApp(TEST_USER);
 
 HttpURLConnection conn1 = createRequest(
 geURL(port, TEST_JOB_ID, 0, Collections.singletonList(TEST_ATTEMPT_1), true),
@@ -278,18 +280,34 @@ public class TestShuffleHandler extends TestShuffleHandlerBase {
 conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 UserGroupInformation.setConfiguration(conf);
 
+final String randomUser = "randomUser";
+final String attempt = "attempt_1_0004_m_04_0";
+generateMapOutput(randomUser, tempDir.toAbsolutePath().toString(), attempt,
+    Arrays.asList(TEST_DATA_C, TEST_DATA_B, TEST_DATA_A));
+
 ShuffleHandlerMock shuffleHandler = new ShuffleHandlerMock();
 shuffleHandler.init(conf);
 try {