[GitHub] [hadoop] smallzhongfeng commented on pull request #5309: YARN-11419. Remove redundant exception capture in NMClientAsyncImpl and improve readability in ContainerShellWebSocket, etc
smallzhongfeng commented on PR #5309: URL: https://github.com/apache/hadoop/pull/5309#issuecomment-1416690101

Thanks for your review. @slfan1989 👍

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on PR #5349: URL: https://github.com/apache/hadoop/pull/5349#issuecomment-1416684656

Thanks for the nice suggestions! I modified only a little bit in the patch posted above.
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096495331

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
## @@ -3832,7 +3837,38 @@ public boolean isDatanodeFullyStarted() {
     }
     return true;
   }
-
+
+  /**
+   * Wait for the datanode to be fully started and also connected to active namenode. This means
+   * wait until the given time duration for all the BP threads to come alive and all the block
+   * pools to be initialized. Wait until any one of the BP service actors is connected to active
+   * namenode.
+   *
+   * @param waitTimeMs Wait time in millis for this method to return the datanode probes. If
+   * datanode stays unhealthy or not connected to any active namenode even after the given wait
+   * time elapses, it returns false.
+   * @return true - if the data node is fully started and connected to active namenode within
+   * the given time interval, false otherwise.
+   */
+  public boolean isDatanodeHealthy(long waitTimeMs) {
+    long startTime = monotonicNow();
+    while (monotonicNow() - startTime <= waitTimeMs) {
+      if (isDatanodeFullyStartedAndConnectedToActiveNN()) {
+        return true;
+      }
+    }
+    return false;
+  }

Review Comment: I think it's fine; the client trying to probe could also have its own time-based looping. Scripting can usually do that, so we are good. Let me make this change. Thanks
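[Editor's note] The bounded wait loop under discussion can be sketched as a standalone helper. This is an illustrative version only — `PollUtil` and `pollUntil` are hypothetical names, not Hadoop APIs (Hadoop's own loop uses `Time.monotonicNow()`) — and it adds a short sleep between probes, which the quoted loop omits, to avoid busy-spinning:

```java
import java.util.function.BooleanSupplier;

// Illustrative, self-contained version of the bounded polling pattern
// discussed in this thread. Names are hypothetical, not Hadoop APIs.
public class PollUtil {

  /** Polls the probe until it returns true or waitTimeMs elapses. */
  public static boolean pollUntil(BooleanSupplier probe, long waitTimeMs, long intervalMs)
      throws InterruptedException {
    final long deadlineNanos = System.nanoTime() + waitTimeMs * 1_000_000L;
    while (System.nanoTime() - deadlineNanos < 0) { // overflow-safe deadline check
      if (probe.getAsBoolean()) {
        return true;
      }
      Thread.sleep(intervalMs); // back off between probes instead of busy-spinning
    }
    return probe.getAsBoolean(); // one last check at the deadline
  }

  public static void main(String[] args) throws InterruptedException {
    // A probe that only succeeds on its third evaluation.
    final int[] calls = {0};
    boolean ok = pollUntil(() -> ++calls[0] >= 3, 2_000, 10);
    System.out.println(ok ? "probe succeeded" : "probe timed out");
  }
}
```

The overflow-safe `System.nanoTime()` comparison matters for long-running daemons; a plain `now < deadline` comparison on nanos can misbehave when the counter wraps.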
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
ayushtkn commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096494121

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
## @@ -3832,7 +3837,38 @@ public boolean isDatanodeFullyStarted() {
     }
     return true;
   }
-
+
+  /**
+   * Wait for the datanode to be fully started and also connected to active namenode. This means
+   * wait until the given time duration for all the BP threads to come alive and all the block
+   * pools to be initialized. Wait until any one of the BP service actors is connected to active
+   * namenode.
+   *
+   * @param waitTimeMs Wait time in millis for this method to return the datanode probes. If
+   * datanode stays unhealthy or not connected to any active namenode even after the given wait
+   * time elapses, it returns false.
+   * @return true - if the data node is fully started and connected to active namenode within
+   * the given time interval, false otherwise.
+   */
+  public boolean isDatanodeHealthy(long waitTimeMs) {
+    long startTime = monotonicNow();
+    while (monotonicNow() - startTime <= waitTimeMs) {
+      if (isDatanodeFullyStartedAndConnectedToActiveNN()) {
+        return true;
+      }
+    }
+    return false;
+  }

Review Comment: This waiting is test logic, we should keep it in ``MiniDfsCluster`` only and can refactor/reuse existing methods as well.
Can you try changing like this:
```
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 414ab579dd0..ad0f8e8b03e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -3830,39 +3830,12 @@ boolean isRestarting() {
    * @return true - if the data node is fully started
    */
   public boolean isDatanodeFullyStarted() {
-    for (BPOfferService bp : blockPoolManager.getAllNamenodeThreads()) {
-      if (!bp.isInitialized() || !bp.isAlive()) {
-        return false;
-      }
-    }
-    return true;
-  }
-
-  /**
-   * Wait for the datanode to be fully started and also connected to active namenode. This means
-   * wait until the given time duration for all the BP threads to come alive and all the block
-   * pools to be initialized. Wait until any one of the BP service actor is connected to active
-   * namenode.
-   *
-   * @param waitTimeMs Wait time in millis for this method to return the datanode probes. If
-   * datanode stays unhealthy or not connected to any active namenode even after the given wait
-   * time elapses, it returns false.
-   * @return true - if the data node is fully started and connected to active namenode within
-   * the given time interval, false otherwise.
-   */
-  public boolean isDatanodeHealthy(long waitTimeMs) {
-    long startTime = monotonicNow();
-    while (monotonicNow() - startTime <= waitTimeMs) {
-      if (isDatanodeFullyStartedAndConnectedToActiveNN()) {
-        return true;
-      }
-    }
-    return false;
+    return isDatanodeFullyStarted(false);
   }

-  private boolean isDatanodeFullyStartedAndConnectedToActiveNN() {
+  public boolean isDatanodeFullyStarted(boolean checkConnectionToActive) {
     for (BPOfferService bp : blockPoolManager.getAllNamenodeThreads()) {
-      if (!bp.isInitialized() || !bp.isAlive() || bp.getActiveNN() == null) {
+      if (!bp.isInitialized() || !bp.isAlive() || (checkConnectionToActive && bp.getActiveNN() == null)) {
         return false;
       }
     }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index dd8bb204382..0576b4a42e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -2529,6 +2529,11 @@ public boolean restartDataNode(DataNodeProperties dnprop) throws IOException {
     return restartDataNode(dnprop, false);
   }

+  public void waitDatanodeConnectedToActive(DataNode dn, int timeout)
+      throws InterruptedException, TimeoutException {
+    GenericTestUtils.waitFor(() -> dn.isDatanodeFullyStarted(true), 100, timeout,
+        "Datanode is not connected to active even after " + timeout + " ms of waiting");
+  }
+
   public void waitDatanodeFullyStarted(DataNode dn, int timeout)
       throws TimeoutException, InterruptedException {
     GenericTestUtils.waitFo
```
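[Editor's note] The shape of the `GenericTestUtils.waitFor`-style helper suggested above — production code exposes only an instantaneous check, the test harness supplies the polling loop and the timeout — can be illustrated with a stripped-down standalone sketch. `WaitForSketch` and its method signature are illustrative only, not the actual Hadoop utility:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Stripped-down sketch of a waitFor-style test helper: it polls the supplied
// check at a fixed interval and throws TimeoutException if the condition
// never becomes true within the wait budget.
public class WaitForSketch {

  public static void waitFor(BooleanSupplier check, long checkEveryMs, long waitForMs,
      String errorMsg) throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + waitForMs;
    while (System.currentTimeMillis() < deadline) {
      if (check.getAsBoolean()) {
        return; // condition met
      }
      Thread.sleep(checkEveryMs);
    }
    throw new TimeoutException(errorMsg); // condition never became true in time
  }

  public static void main(String[] args) throws Exception {
    long start = System.currentTimeMillis();
    // Condition becomes true after roughly 200 ms.
    waitFor(() -> System.currentTimeMillis() - start >= 200, 50, 5_000,
        "condition not met even after 5000 ms of waiting");
    System.out.println("condition met");
  }
}
```

Throwing on timeout (rather than returning a boolean) gives the test a descriptive failure message for free, which is the main argument made for this shape in the thread.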
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096495098

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
## @@ -3832,7 +3837,38 @@ public boolean isDatanodeFullyStarted() {
     }
     return true;
   }
-
+
+  /**
+   * Wait for the datanode to be fully started and also connected to active namenode. This means
+   * wait until the given time duration for all the BP threads to come alive and all the block
+   * pools to be initialized. Wait until any one of the BP service actors is connected to active
+   * namenode.
+   *
+   * @param waitTimeMs Wait time in millis for this method to return the datanode probes. If
+   * datanode stays unhealthy or not connected to any active namenode even after the given wait
+   * time elapses, it returns false.
+   * @return true - if the data node is fully started and connected to active namenode within
+   * the given time interval, false otherwise.
+   */
+  public boolean isDatanodeHealthy(long waitTimeMs) {
+    long startTime = monotonicNow();
+    while (monotonicNow() - startTime <= waitTimeMs) {
+      if (isDatanodeFullyStartedAndConnectedToActiveNN()) {
+        return true;
+      }
+    }
+    return false;
+  }

Review Comment: I understand that the waiting is a test utility, but somehow I felt this was better. Providing a timeout to the method could be treated like real datanode probes, hence I kept it that way.
[GitHub] [hadoop] hfutatzhanghb closed pull request #5350: HDFS-16908. IncrementalBlockReportManager#sendImmediately should use or logic to decide whether send immediately or not.
hfutatzhanghb closed pull request #5350: HDFS-16908. IncrementalBlockReportManager#sendImmediately should use or logic to decide whether send immediately or not. URL: https://github.com/apache/hadoop/pull/5350
[GitHub] [hadoop] hfutatzhanghb opened a new pull request, #5350: HDFS-16908. IncrementalBlockReportManager#sendImmediately should use or logic to decide whether send immediately or not.
hfutatzhanghb opened a new pull request, #5350: URL: https://github.com/apache/hadoop/pull/5350

IncrementalBlockReportManager#sendImmediately should use or logic to decide whether send immediately or not.
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096484078

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:
## @@ -294,4 +297,107 @@ public void testDataNodeMXBeanSlowDisksEnabled() throws Exception {
     if (cluster != null) {
       cluster.shutdown();
     }
   }
 }
+
+  @Test
+  public void testDataNodeMXBeanLastHeartbeats() throws Exception {
+    Configuration conf = new Configuration();
+    try (MiniDFSCluster cluster = new MiniDFSCluster
+        .Builder(conf)
+        .nnTopology(MiniDFSNNTopology.simpleHATopology(2))
+        .numDataNodes(1)
+        .build()) {
+      cluster.waitActive();
+      cluster.transitionToActive(0);
+      cluster.transitionToStandby(1);
+
+      DataNode datanode = cluster.getDataNodes().get(0);
+
+      MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
+      ObjectName mxbeanName = new ObjectName(
+          "Hadoop:service=DataNode,name=DataNodeInfo");
+
+      // Verify and wait until one of the BP service actor identifies active namenode as active
+      // and another as standby.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+        return (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState")))
+            || (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState")));
+      }, 500, 8000, "No namenode is reported active");

Review Comment: This is better, but if the utility method has its own wait, that would be nice. WDYT?
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096487189

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:
## @@ -294,4 +297,107 @@ public void testDataNodeMXBeanSlowDisksEnabled() throws Exception {
     if (cluster != null) {
       cluster.shutdown();
     }
   }
 }
+
+  @Test
+  public void testDataNodeMXBeanLastHeartbeats() throws Exception {
+    Configuration conf = new Configuration();
+    try (MiniDFSCluster cluster = new MiniDFSCluster
+        .Builder(conf)
+        .nnTopology(MiniDFSNNTopology.simpleHATopology(2))
+        .numDataNodes(1)
+        .build()) {
+      cluster.waitActive();
+      cluster.transitionToActive(0);
+      cluster.transitionToStandby(1);
+
+      DataNode datanode = cluster.getDataNodes().get(0);
+
+      MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
+      ObjectName mxbeanName = new ObjectName(
+          "Hadoop:service=DataNode,name=DataNodeInfo");
+
+      // Verify and wait until one of the BP service actor identifies active namenode as active
+      // and another as standby.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+        return (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState")))
+            || (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState")));
+      }, 500, 8000, "No namenode is reported active");
+
+      // basic metrics validation
+      String clusterId = (String) mbs.getAttribute(mxbeanName, "ClusterId");
+      Assert.assertEquals(datanode.getClusterId(), clusterId);
+      String version = (String) mbs.getAttribute(mxbeanName, "Version");
+      Assert.assertEquals(datanode.getVersion(), version);
+      String bpActorInfo = (String) mbs.getAttribute(mxbeanName, "BPServiceActorInfo");
+      Assert.assertEquals(datanode.getBPServiceActorInfo(), bpActorInfo);
+
+      // Verify that last heartbeat sent to both namenodes in last 5 sec.
+      assertLastHeartbeatSentTime(datanode, "LastHeartbeat");
+      // Verify that last heartbeat response from both namenodes have been received within
+      // last 5 sec.
+      assertLastHeartbeatSentTime(datanode, "LastHeartbeatResponseTime");
+
+      NameNode sbNameNode = cluster.getNameNode(1);
+
+      // Stopping standby namenode
+      sbNameNode.stop();
+
+      // Verify that last heartbeat response time from one of the namenodes would stay much higher
+      // after stopping one namenode.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+
+        long lastHeartbeatResponseTime1 =
+            Long.parseLong(bpServiceActorInfo1.get("LastHeartbeatResponseTime"));
+        long lastHeartbeatResponseTime2 =
+            Long.parseLong(bpServiceActorInfo2.get("LastHeartbeatResponseTime"));
+
+        LOG.info("Last heartbeat response from namenode 1: {}", lastHeartbeatResponseTime1);
+        LOG.info("Last heartbeat response from namenode 2: {}", lastHeartbeatResponseTime2);
+
+        return (lastHeartbeatResponseTime1 < 5L && lastHeartbeatResponseTime2 > 5L)
+            || (lastHeartbeatResponseTime1 > 5L && lastHeartbeatResponseTime2 < 5L);
+      }, 200, 15000, "Last heartbeat response should be higher than 5s for at least one namenode");
+
+      // Verify that last heartbeat sent to both namenodes in last 5 sec even though
+      // the last heartbeat received from one of the namenodes is greater than 5 sec ago.
+      assertLastHeartbeatSentTime(datanode, "LastHeartbeat");
+    }
+  }
+
+  private static void assertLastHeartbeatSentTime(DataNode datanode, String lastHeartbeat) {
+    List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+    Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+    Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+
+    long lastHeartbeatSent1 =
+        Long.parseLong(bpServiceActorInfo1.get(lastHeartbeat));
+    long lastHeartbeatSent2 =
+        Long.parseLong(bpServiceActorInfo2.get(lastHeartbeat));
+
+    Assert.assertTrue(lastHeartbeat + " for first bp service actor is higher than 5s",
+        lastHeartbeatSent1 < 5L);
+    Assert.assertTr
[GitHub] [hadoop] virajjasani commented on pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on PR #5349: URL: https://github.com/apache/hadoop/pull/5349#issuecomment-1416664835

> got me an initial feeling as something is broken with the way Datanode tracks last heartbeat.

It's not broken; it's just that the lastHeartbeat is not sufficient to track a broken connection in real time. Also, I wanted to update the existing metric only, but in order to keep compatibility, I thought it would be better to add a new metric with a name that suggests what it is doing.
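[Editor's note] The distinction being drawn here — the heartbeat send timestamp keeps advancing even when the namenode never replies, while the response timestamp only advances on a successful reply — can be sketched with a hypothetical tracker. `HeartbeatTracker` and its field names are illustrative, not the actual BPServiceActor fields:

```java
// Hypothetical sketch of tracking both heartbeat timestamps. Only the
// response timestamp goes stale when the connection breaks, which is the
// signal the new LastHeartbeatResponseTime metric is meant to expose.
public class HeartbeatTracker {
  private volatile long lastHeartbeatSentMs;
  private volatile long lastHeartbeatResponseMs;

  public void markSent(long nowMs) { lastHeartbeatSentMs = nowMs; }         // updated on every attempt
  public void markResponse(long nowMs) { lastHeartbeatResponseMs = nowMs; } // updated only on a reply

  /** Seconds since the last heartbeat attempt. */
  public long secondsSinceLastSent(long nowMs) {
    return (nowMs - lastHeartbeatSentMs) / 1000;
  }

  /** Seconds since the last successful response; grows without bound on a broken link. */
  public long secondsSinceLastResponse(long nowMs) {
    return (nowMs - lastHeartbeatResponseMs) / 1000;
  }

  public static void main(String[] args) {
    HeartbeatTracker t = new HeartbeatTracker();
    t.markSent(0);
    t.markResponse(0);       // healthy: a send and a reply at t=0s
    t.markSent(9_000);       // at t=9s a heartbeat is sent but never answered
    System.out.println(t.secondsSinceLastSent(10_000));     // prints 1: attempts still look fresh
    System.out.println(t.secondsSinceLastResponse(10_000)); // prints 10: responses are stale
  }
}
```

This is why a "LastHeartbeat" that tracks send attempts alone cannot surface a dead connection: the actor keeps sending, so the send timestamp stays fresh.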
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5349: HDFS-16907 Add LastHeartbeatResponseTime for BP service actor
virajjasani commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096484078

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:
## @@ -294,4 +297,107 @@ public void testDataNodeMXBeanSlowDisksEnabled() throws Exception {
     if (cluster != null) {
       cluster.shutdown();
     }
   }
 }
+
+  @Test
+  public void testDataNodeMXBeanLastHeartbeats() throws Exception {
+    Configuration conf = new Configuration();
+    try (MiniDFSCluster cluster = new MiniDFSCluster
+        .Builder(conf)
+        .nnTopology(MiniDFSNNTopology.simpleHATopology(2))
+        .numDataNodes(1)
+        .build()) {
+      cluster.waitActive();
+      cluster.transitionToActive(0);
+      cluster.transitionToStandby(1);
+
+      DataNode datanode = cluster.getDataNodes().get(0);
+
+      MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
+      ObjectName mxbeanName = new ObjectName(
+          "Hadoop:service=DataNode,name=DataNodeInfo");
+
+      // Verify and wait until one of the BP service actor identifies active namenode as active
+      // and another as standby.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+        return (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState")))
+            || (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState")));
+      }, 500, 8000, "No namenode is reported active");

Review Comment: This is better, but we need the utility method to introduce the wait on its own. Rather than returning a boolean, it can use GenericTestUtils and wait (or throw an exception eventually).
[GitHub] [hadoop] hadoop-yetus commented on pull request #5347: HDFS-16906. Fixed leak in CryptoOutputStream::close
hadoop-yetus commented on PR #5347: URL: https://github.com/apache/hadoop/pull/5347#issuecomment-1416662229

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 3s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 57m 7s | | trunk passed |
| +1 :green_heart: | compile | 30m 58s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 25m 51s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 1s | | trunk passed |
| -1 :x: | javadoc | 1m 28s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5347/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 14s | | trunk passed |
| +1 :green_heart: | shadedclient | 29m 27s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 9s | | the patch passed |
| +1 :green_heart: | compile | 25m 53s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 25m 53s | | the patch passed |
| +1 :green_heart: | compile | 22m 38s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 22m 38s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 1s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 42s | | the patch passed |
| -1 :x: | javadoc | 1m 0s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5347/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 0m 41s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 49s | | the patch passed |
| +1 :green_heart: | shadedclient | 28m 27s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 27s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 51s | | The patch does not generate ASF License warnings. |
| | | 257m 18s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5347/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5347 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux b193ba2aca32 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / de1e723926cee37f61d8c0ada870aca6bb1ac791 |
| Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5347/1/testReport/ |
| Max. process+thread count | 1246 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output |
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5349: HDFS-16907 BP service actor LastHeartbeat is not sufficient to track realtime connection breaks
ayushtkn commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096481460

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:

```
@@ -294,4 +297,107 @@ public void testDataNodeMXBeanSlowDisksEnabled() throws Exception {
     if (cluster != null) { cluster.shutdown(); }
   }
 }
+
+  @Test
+  public void testDataNodeMXBeanLastHeartbeats() throws Exception {
+    Configuration conf = new Configuration();
+    try (MiniDFSCluster cluster = new MiniDFSCluster
+        .Builder(conf)
+        .nnTopology(MiniDFSNNTopology.simpleHATopology(2))
+        .numDataNodes(1)
+        .build()) {
+      cluster.waitActive();
+      cluster.transitionToActive(0);
+      cluster.transitionToStandby(1);
+
+      DataNode datanode = cluster.getDataNodes().get(0);
+
+      MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
+      ObjectName mxbeanName = new ObjectName(
+          "Hadoop:service=DataNode,name=DataNodeInfo");
+
+      // Verify and wait until one of the BP service actors identifies the active namenode as
+      // active and the other as standby.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+        return (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState")))
+            || (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState")));
+      }, 500, 8000, "No namenode is reported active");
```

Review Comment: This looks like it is checking whether the datanode has acknowledged the active namenode or not. Can we have this util as part of MiniDFSCluster?

We have something like ``waitDatanodeFullyStarted`` there; maybe add a new method with an extra param, checkActive or something like that, which in the Datanode could also check for the active NN:
```
public boolean isDatanodeFullyStarted() {
  for (BPOfferService bp : blockPoolManager.getAllNamenodeThreads()) {
    if (!bp.isInitialized() || !bp.isAlive() || bp.getActiveNN() == null) {
      return false;
    }
  }
  return true;
}
```
or at worst refactor this into a method somewhere -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
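The reviewer's suggested extension (an extra param that also requires an acknowledged active NN) could look roughly like the sketch below. This is a hedged illustration only: `BPOfferService` here is a minimal stand-in for the real Hadoop class, and the class and parameter names (`DatanodeStartupCheck`, `checkConnectionToActiveNamenode`) are assumptions, not the actual API.

```java
import java.util.List;

// Hedged sketch of the reviewer's suggestion, not the real Hadoop code:
// isDatanodeFullyStarted() with an optional stricter check that every BP
// service actor has acknowledged an active namenode.
public class DatanodeStartupCheck {

    // Minimal stand-in for org.apache.hadoop.hdfs.server.datanode.BPOfferService.
    static class BPOfferService {
        private final boolean initialized;
        private final boolean alive;
        private final Object activeNN; // non-null once an active NN is acknowledged

        BPOfferService(boolean initialized, boolean alive, Object activeNN) {
            this.initialized = initialized;
            this.alive = alive;
            this.activeNN = activeNN;
        }

        boolean isInitialized() { return initialized; }
        boolean isAlive() { return alive; }
        Object getActiveNN() { return activeNN; }
    }

    static boolean isDatanodeFullyStarted(List<BPOfferService> namenodeThreads,
            boolean checkConnectionToActiveNamenode) {
        for (BPOfferService bp : namenodeThreads) {
            // Base check: every block-pool thread is initialized and alive.
            if (!bp.isInitialized() || !bp.isAlive()) {
                return false;
            }
            // Stricter check: the actor must also have seen an active NN.
            if (checkConnectionToActiveNamenode && bp.getActiveNN() == null) {
                return false;
            }
        }
        return true;
    }
}
```

A test could then call the stricter variant inside a poll loop (e.g. `GenericTestUtils.waitFor`) instead of inspecting the MXBean map directly.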
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5349: HDFS-16907 BP service actor LastHeartbeat is not sufficient to track realtime connection breaks
ayushtkn commented on code in PR #5349: URL: https://github.com/apache/hadoop/pull/5349#discussion_r1096479948

## hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/datanode.html:

```
@@ -84,7 +84,8 @@
   Namenode HA State
   Block Pool ID
   Actor State
-  Last Heartbeat
+  Last Heartbeat Sent
+  Last Heartbeat Received
```

Review Comment: "Last Heartbeat Received" gives me the sense that the Namenode sent a heartbeat to the Datanode; that isn't good from a correctness point of view. Can you change it to indicate it is the heartbeat response rather than the heartbeat itself?

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:

```
@@ -294,4 +297,107 @@ public void testDataNodeMXBeanSlowDisksEnabled() throws Exception {
     if (cluster != null) { cluster.shutdown(); }
   }
 }
+      // Verify and wait until one of the BP service actors identifies the active namenode as
+      // active and the other as standby.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+        return (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState")))
+            || (HAServiceProtocol.HAServiceState.ACTIVE.toString()
+            .equals(bpServiceActorInfo2.get("NamenodeHaState"))
+            && HAServiceProtocol.HAServiceState.STANDBY.toString()
+            .equals(bpServiceActorInfo1.get("NamenodeHaState")));
+      }, 500, 8000, "No namenode is reported active");
+
+      // basic metrics validation
+      String clusterId = (String) mbs.getAttribute(mxbeanName, "ClusterId");
+      Assert.assertEquals(datanode.getClusterId(), clusterId);
+      String version = (String) mbs.getAttribute(mxbeanName, "Version");
+      Assert.assertEquals(datanode.getVersion(), version);
+      String bpActorInfo = (String) mbs.getAttribute(mxbeanName, "BPServiceActorInfo");
+      Assert.assertEquals(datanode.getBPServiceActorInfo(), bpActorInfo);
+
+      // Verify that the last heartbeat was sent to both namenodes within the last 5 sec.
+      assertLastHeartbeatSentTime(datanode, "LastHeartbeat");
+      // Verify that the last heartbeat response from both namenodes has been received within
+      // the last 5 sec.
+      assertLastHeartbeatSentTime(datanode, "LastHeartbeatResponseTime");
+
+      NameNode sbNameNode = cluster.getNameNode(1);
+
+      // Stopping standby namenode
+      sbNameNode.stop();
+
+      // Verify that the last heartbeat response time from one of the namenodes stays much
+      // higher after stopping one namenode.
+      GenericTestUtils.waitFor(() -> {
+        List<Map<String, String>> bpServiceActorInfo = datanode.getBPServiceActorInfoMap();
+        Map<String, String> bpServiceActorInfo1 = bpServiceActorInfo.get(0);
+        Map<String, String> bpServiceActorInfo2 = bpServiceActorInfo.get(1);
+
+        long lastHeartbeatResponseTime1 =
+            Long.parseLong(bpServiceActorInfo1.get("LastHeartbeatResponseTime"));
+        long lastHeartbeatResponseTime2 =
+            Long.parseLong(bpServiceActorInfo2.get("LastHeartbeatResponseTime"));
+
+        LOG.info("Last heartbeat response from namenode 1: {}", lastHeartbeatResponseTime1);
+        LOG.info("Last heartbeat response from namenode 2: {}", lastHeartbeatResponseTime2);
+
+        return (lastHeartbeatResponseTime1 < 5L && lastHeartbeatResponseTime2 > 5L)
+            || (lastHeartbeatResponseTime1 > 5L && lastHeartbeatResponseTime2 < 5L);
+      }, 200, 15000, "Last heartbeat response should be higher than 5s for at least one namenode");
```

Review Comment: nit: reduce the line breaks

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java:

```
@@ -294,4 +297,107 @@ public void testDataNodeMXBeanSlowDisksEnabled() throws Exception {
     if (cluster != null) { cluster.shutdown(); }
   }
 }
+
+  @Test
+  public void testDataNod
```
[GitHub] [hadoop] hadoop-yetus commented on pull request #5346: HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.
hadoop-yetus commented on PR #5346: URL: https://github.com/apache/hadoop/pull/5346#issuecomment-1416648396

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 17m 25s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 5s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 33m 28s | | trunk passed |
| +1 :green_heart: | compile | 25m 15s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 21m 43s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 34s | | trunk passed |
| -1 :x: | javadoc | 1m 7s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/4/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 6s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 40s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 46s | | the patch passed |
| +1 :green_heart: | compile | 24m 29s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 24m 29s | | the patch passed |
| +1 :green_heart: | compile | 21m 44s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 21m 44s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 53s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) |
| +1 :green_heart: | mvnsite | 2m 26s | | the patch passed |
| -1 :x: | javadoc | 0m 59s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/4/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 27m 17s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 21s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 34m 9s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. |
| | | | 300m 27s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5346 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux e021a107696b 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b1e400f91cceb13183c1997346af3ccfafd6a23a |
| Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Multi-JDK versions |
[GitHub] [hadoop] slfan1989 commented on pull request #5349: HDFS-16907 BP service actor LastHeartbeat is not sufficient to track realtime connection breaks
slfan1989 commented on PR #5349: URL: https://github.com/apache/hadoop/pull/5349#issuecomment-1416644855 Quick Look, LGTM. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani opened a new pull request, #5349: HDFS-16907 BP service actor LastHeartbeat is not sufficient to track realtime connection breaks
virajjasani opened a new pull request, #5349: URL: https://github.com/apache/hadoop/pull/5349

Each BP service actor thread maintains lastHeartbeatTime with the namenode that it is connected to. However, this is updated even if the connection to that namenode is broken. Suppose the actor thread keeps heartbeating to the namenode and suddenly the socket connection breaks. When this happens, for a certain duration the actor thread keeps updating lastHeartbeatTime before even initiating the heartbeat connection with the namenode. If the connection cannot be established even after the RPC retries are exhausted, an IOException is thrown, which means that no heartbeat response has been received from the namenode. In this loop the actor thread keeps retrying the heartbeat connection, and the last heartbeat stays close to 1/2s even though in reality no response is being received from the namenode.

Sample exception from the BP service actor thread, during which LastHeartbeat stays very low:
```
2023-02-03 22:34:55,725 WARN  [xyz:9000] datanode.DataNode - IOException in offerService
java.io.EOFException: End of File Exception between local host is: "dn-0"; destination host is: "nn-1":9000; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
  at sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown Source)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)
  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:862)
  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
  at org.apache.hadoop.ipc.Client.call(Client.java:1495)
  at org.apache.hadoop.ipc.Client.call(Client.java:1392)
  at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
  at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
  at com.sun.proxy.$Proxy17.sendHeartbeat(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:168)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:544)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:682)
  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:890)
  at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.EOFException
  at java.io.DataInputStream.readInt(DataInputStream.java:392)
  at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1884)
  at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1176)
  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1074)
```

Last heartbeat response time is important to initiate any auto-recovery from the datanode. Hence, we should introduce LastHeartbeatResponseTime, which only gets updated if the BP service actor thread was able to successfully retrieve a response from the namenode.

Screenshot: https://user-images.githubusercontent.com/34790606/216744006-743896d6-d2be-4fe4-9692-cdc4ac5ca7c4.png

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
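The distinction the PR proposes, between when a heartbeat was last *attempted* and when a response was last *received*, can be sketched as below. This is a hedged illustration only: `HeartbeatTracker` and its `Namenode` interface are hypothetical stand-ins, not the actual `BPServiceActor` code (which, per the patch, uses Hadoop's monotonic clock rather than wall-clock time).

```java
import java.io.IOException;

// Hypothetical sketch: only a successful round trip updates the response
// timestamp, so a broken connection shows up as a growing "last response" age
// even while heartbeat attempts keep bumping the attempt timestamp.
public class HeartbeatTracker {
    private volatile long lastHeartbeatTimeMs;          // updated on every attempt
    private volatile long lastHeartbeatResponseTimeMs;  // updated only on success

    // Stand-in for the namenode RPC proxy; the real call is sendHeartbeat().
    interface Namenode {
        void sendHeartbeat() throws IOException;
    }

    public void offerHeartbeat(Namenode nn) {
        lastHeartbeatTimeMs = System.currentTimeMillis(); // attempt timestamp
        try {
            nn.sendHeartbeat();
            // A response was actually received: record it.
            lastHeartbeatResponseTimeMs = System.currentTimeMillis();
        } catch (IOException e) {
            // Connection broken (e.g. EOFException after retries): the response
            // timestamp stays stale, making the outage visible to monitoring.
        }
    }

    public long getLastHeartbeatTimeMs() { return lastHeartbeatTimeMs; }
    public long getLastHeartbeatResponseTimeMs() { return lastHeartbeatResponseTimeMs; }
}
```

With this split, a probe (or the MXBean) can alarm on the age of the response timestamp alone, which keeps growing during an outage while the attempt timestamp stays fresh.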
[GitHub] [hadoop] slfan1989 opened a new pull request, #5348: YARN-3657. Federation maintenance mechanisms (simple CLI and command propagation)
slfan1989 opened a new pull request, #5348: URL: https://github.com/apache/hadoop/pull/5348 JIRA. YARN-3657. Federation maintenance mechanisms (simple CLI and command propagation) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5346: HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.
hadoop-yetus commented on PR #5346: URL: https://github.com/apache/hadoop/pull/5346#issuecomment-1416643914

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 1s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 33m 56s | | trunk passed |
| +1 :green_heart: | compile | 25m 10s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 21m 43s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 38s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 41s | | trunk passed |
| -1 :x: | javadoc | 1m 7s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 44s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 46s | | the patch passed |
| +1 :green_heart: | compile | 24m 33s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 24m 33s | | the patch passed |
| +1 :green_heart: | compile | 21m 43s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 21m 43s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 54s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) |
| +1 :green_heart: | mvnsite | 2m 38s | | the patch passed |
| -1 :x: | javadoc | 0m 59s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 39s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 2s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 33m 3s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. |
| | | | 282m 44s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5346 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 2fda639ab79f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / dcbf0c9cf2a8259b31364f23739f6115c56d622a |
| Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Multi-JDK versions |
[GitHub] [hadoop] hadoop-yetus commented on pull request #5346: HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.
hadoop-yetus commented on PR #5346: URL: https://github.com/apache/hadoop/pull/5346#issuecomment-1416642680

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 12m 2s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 25s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 30m 50s | | trunk passed |
| +1 :green_heart: | compile | 23m 2s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 20m 25s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 45s | | trunk passed |
| -1 :x: | javadoc | 1m 14s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 2m 2s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 11s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 29s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 45s | | the patch passed |
| +1 :green_heart: | compile | 22m 23s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 22m 23s | | the patch passed |
| +1 :green_heart: | compile | 20m 20s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 20m 20s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 36s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) |
| +1 :green_heart: | mvnsite | 2m 40s | | the patch passed |
| -1 :x: | javadoc | 1m 6s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 2m 1s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 26s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 13s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 19s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 22m 41s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. |
| | | | 271m 9s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5346 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 325d6a6bb0be 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b1e400f91cceb13183c1997346af3ccfafd6a23a |
| Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| Multi-JDK versions |
[jira] [Commented] (HADOOP-18616) Java 11 JavaDoc fails due to missing package comments
[ https://issues.apache.org/jira/browse/HADOOP-18616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17684073#comment-17684073 ] ASF GitHub Bot commented on HADOOP-18616: - slfan1989 commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1416637414 > it does fix 29 of the complaints though, which https://github.com/apache/hadoop/pull/5226 didn't. is it just the lack of a doc comment which broke things then, not the @interfaceAudience tag? if so, yes, let's merge -but use the original JIRA ID I also don't think it's a tag issue, we can modify doc comment to solve this issue. @snmvaughan Thank you for your contribution, but can you modify the title of the pr, as suggested by @steveloughran > Java 11 JavaDoc fails due to missing package comments > - > > Key: HADOOP-18616 > URL: https://issues.apache.org/jira/browse/HADOOP-18616 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.4.0, 3.3.5, 3.3.9 > Environment: Yetus Java 11 OpenJDK JavaDoc >Reporter: Steve Vaughan >Assignee: Steve Vaughan >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.9 > > > Submissions to `hadoop-common` fail in Yetus due to Java 11 JavaDoc errors: > ``` > [ERROR] > /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:21: > error: unknown tag: InterfaceAudience.Private > [ERROR] @InterfaceAudience.Private > [ERROR] ^ > [ERROR] > /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:22: > error: unknown tag: InterfaceStability.Unstable > [ERROR] @InterfaceStability.Unstable > [ERROR] ^ > ``` -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on pull request #5344: HADOOP-18616. Java 11 JavaDoc fails due to missing package comments
slfan1989 commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1416637414 > it does fix 29 of the complaints though, which https://github.com/apache/hadoop/pull/5226 didn't. is it just the lack of a doc comment which broke things then, not the @interfaceAudience tag? if so, yes, let's merge -but use the original JIRA ID I also don't think it's a tag issue, we can modify doc comment to solve this issue. @snmvaughan Thank you for your contribution, but can you modify the title of the pr, as suggested by @steveloughran -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #5346: HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.
hadoop-yetus commented on PR #5346: URL: https://github.com/apache/hadoop/pull/5346#issuecomment-1416636876

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 23s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 33m 29s | | trunk passed |
| +1 :green_heart: | compile | 25m 9s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 21m 34s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 5s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 30s | | trunk passed |
| -1 :x: | javadoc | 1m 6s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 27m 17s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 53s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 12s | | the patch passed |
| +1 :green_heart: | compile | 29m 24s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 29m 24s | | the patch passed |
| +1 :green_heart: | compile | 21m 40s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 21m 40s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 3m 54s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 26s | | the patch passed |
| -1 :x: | javadoc | 0m 59s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 27m 22s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 24s | | hadoop-common in the patch passed. |
| -1 :x: | unit | 34m 4s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 52s | | The patch does not generate ASF License warnings. |
| | | | 289m 56s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5346/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5346 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 078843e37bfa 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git re
[GitHub] [hadoop] slfan1989 commented on pull request #5309: YARN-11419. Remove redundant exception capture in NMClientAsyncImpl and improve readability in ContainerShellWebSocket, etc
slfan1989 commented on PR #5309: URL: https://github.com/apache/hadoop/pull/5309#issuecomment-1416634931 merged trunk, @smallzhongfeng Thanks for your contribution! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 merged pull request #5309: YARN-11419. Remove redundant exception capture in NMClientAsyncImpl and improve readability in ContainerShellWebSocket, etc
slfan1989 merged PR #5309: URL: https://github.com/apache/hadoop/pull/5309
[GitHub] [hadoop] gardenia opened a new pull request, #5347: HDFS-16906. Fixed leak in CryptoOutputStream::close
gardenia opened a new pull request, #5347: URL: https://github.com/apache/hadoop/pull/5347

Fix a bug in the close() method of CryptoOutputStream.

### Description of PR

When closing we need to wrap the flush() in a try .. finally; otherwise, when flush() throws, it prevents us from completing the remainder of the close activities, in particular closing the underlying wrapped stream, resulting in a resource leak.

### How was this patch tested?

Unit test added.
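The shape of the fix described in the PR can be sketched as follows. This is an illustrative stand-in, not the actual CryptoOutputStream code: the class name `LeakFreeCryptoStream` and its fields are hypothetical, and only the try/finally pattern around flush() mirrors the change.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical wrapper stream illustrating the leak fix: flush() may throw,
// but the wrapped stream must still be closed.
class LeakFreeCryptoStream extends OutputStream {
    private final OutputStream wrapped;
    private boolean closed;

    LeakFreeCryptoStream(OutputStream wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void write(int b) throws IOException {
        wrapped.write(b);
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return;
        }
        try {
            flush();             // may throw ...
        } finally {
            wrapped.close();     // ... but the underlying stream is closed regardless
            closed = true;
        }
    }
}
```

Without the finally block, an exception from flush() would skip `wrapped.close()`, which is exactly the leak the PR describes.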
[GitHub] [hadoop] simbadzina commented on pull request #5346: HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.
simbadzina commented on PR #5346: URL: https://github.com/apache/hadoop/pull/5346#issuecomment-1416493987

Before this patch the namenode audit log would show

> ugi= via ,..,callerContext=...

After this patch

> ugi= via ,..,callerContext=...,realUser:
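The audit-log difference above comes down to the router appending the client's real user to the caller context it forwards. A minimal sketch of that string manipulation, with a hypothetical helper name (the real change lives in the router's caller-context handling, not in a standalone method like this):

```java
// Hypothetical helper: append the proxied client's real user to the caller
// context so the namenode audit log retains it. The ",realUser:" key mirrors
// the audit-log lines quoted above.
final class CallerContextSketch {
    private CallerContextSketch() {}

    static String withRealUser(String callerContext, String realUser) {
        // Only append when the call actually came through a proxy user.
        if (realUser == null || realUser.isEmpty()) {
            return callerContext;
        }
        return callerContext + ",realUser:" + realUser;
    }
}
```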
[jira] [Commented] (HADOOP-18617) Make IOStatisticsStore and binding APIs public for use beyond our code
[ https://issues.apache.org/jira/browse/HADOOP-18617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17684040#comment-17684040 ]

Viraj Jasani commented on HADOOP-18617:
---------------------------------------

{quote}Ideally we should actually move the IOStatisticsStore interface into org.apache.hadoop.fs.statistics and the builder to match -but we can't do that without causing trauma elsewhere (google gcs). Strategy there: Add a new interface IOStatisticsCollector in .impl which is then implemented by IOStatisticsStore, and a new builder API which forwards to IOStatisticsStoreBuilder.
{quote}

If we do this
 * Create new interface IOStatisticsCollector in .impl
 * Move interface IOStatisticsStore to org.apache.hadoop.fs.statistics
 * Make interface IOStatisticsStore implement IOStatisticsCollector (which now belongs to .impl)

We would essentially let an interface at *_xyz_* package implement another interface from *_xyz.impl_* package. I wonder if this makes the structure look a bit tricky.

> Make IOStatisticsStore and binding APIs public for use beyond our code
> --
>
> Key: HADOOP-18617
> URL: https://issues.apache.org/jira/browse/HADOOP-18617
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: 3.3.5
> Reporter: Steve Loughran
> Priority: Major
>
> it's really useful to be able to collect iostats in things other than the FS classes -we do it in the S3A and manifest committers.
> But external code -such as the spark committers can't use the methods in {{org.apache.hadoop.fs.statistics.impl}}
> Proposed
> Make some classes/interfaces public
> * IOStatisticsBinding
> * IOStatisticsStore
> * IOStatisticsStoreBuilder
> Ideally we should actually move the IOStatisticsStore interface into org.apache.hadoop.fs.statistics and the builder to match -but we can't do that without causing trauma elsewhere (google gcs).
> Strategy there: Add a new interface IOStatisticsCollector in .impl which is > then implemented by IOStatisticsStore, and a new builder API which forwards > to IOStatisticsStoreBuilder. > Side issue: we don't make any use of the "clever, elegant functional" bit of > DynamicIOStatisticsBuilder/DynamicIOStatistics, where every counter is mapped > to a function which is then invoked to get at the atomic longs. It's used in > IOStatisticsStoreImpl, but only with AtomicLong and MeanStatistic instances. > If we just move to simple maps we will save on lambda-expressions and on > lookup overhead. The original intent was something like coda hale metrics > where we could add dynamic lookup to other bits of instrumentation; in > practise we measure durations and build counts/min/max. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
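The package-layering question in the comment above can be made concrete with a small sketch: a public store interface extending a collector interface that stays in the `.impl` package. The interface and method names below are illustrative stand-ins (suffixed `Sketch`), not the real IOStatistics API.

```java
import java.util.HashMap;
import java.util.Map;

// Would live in org.apache.hadoop.fs.statistics.impl under the proposal.
interface IOStatisticsCollectorSketch {
    void incrementCounter(String key, long value);
}

// Would live in the public org.apache.hadoop.fs.statistics package, yet it
// extends an interface from the .impl package -- the inversion being questioned.
interface IOStatisticsStoreSketch extends IOStatisticsCollectorSketch {
    long lookupCounterValue(String key);
}

// Trivial map-backed implementation, echoing the "just move to simple maps" idea.
class MapBackedStore implements IOStatisticsStoreSketch {
    private final Map<String, Long> counters = new HashMap<>();

    @Override
    public void incrementCounter(String key, long value) {
        counters.merge(key, value, Long::sum);
    }

    @Override
    public long lookupCounterValue(String key) {
        return counters.getOrDefault(key, 0L);
    }
}
```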
[GitHub] [hadoop] simbadzina opened a new pull request, #5346: HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.
simbadzina opened a new pull request, #5346: URL: https://github.com/apache/hadoop/pull/5346

HDFS-16901: RBF: Propagates real user's username via the caller context, when a proxy user is being used.

### Description of PR

If the router receives an operation from a proxyUser, it drops the realUser in the UGI and makes the routerUser the realUser for the operation that goes to the namenode. In the namenode UGI logs, we'd like the ability to know the original realUser. The router should propagate the realUser from the client call as part of the callerContext.

### How was this patch tested?

New test case: **TestRouterRpc#testRealUserPropagationInCallerContext**

### For code changes:

- [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
[jira] [Created] (HADOOP-18617) Make IOStatisticsStore and binding APIs public for use beyond our code
Steve Loughran created HADOOP-18617:
---------------------------------------

Summary: Make IOStatisticsStore and binding APIs public for use beyond our code
Key: HADOOP-18617
URL: https://issues.apache.org/jira/browse/HADOOP-18617
Project: Hadoop Common
Issue Type: Sub-task
Components: fs
Affects Versions: 3.3.5
Reporter: Steve Loughran

it's really useful to be able to collect iostats in things other than the FS classes -we do it in the S3A and manifest committers.

But external code -such as the spark committers can't use the methods in {{org.apache.hadoop.fs.statistics.impl}}

Proposed

Make some classes/interfaces public
 * IOStatisticsBinding
 * IOStatisticsStore
 * IOStatisticsStoreBuilder

Ideally we should actually move the IOStatisticsStore interface into org.apache.hadoop.fs.statistics and the builder to match -but we can't do that without causing trauma elsewhere (google gcs).

Strategy there: Add a new interface IOStatisticsCollector in .impl which is then implemented by IOStatisticsStore, and a new builder API which forwards to IOStatisticsStoreBuilder.

Side issue: we don't make any use of the "clever, elegant functional" bit of DynamicIOStatisticsBuilder/DynamicIOStatistics, where every counter is mapped to a function which is then invoked to get at the atomic longs. It's used in IOStatisticsStoreImpl, but only with AtomicLong and MeanStatistic instances. If we just move to simple maps we will save on lambda-expressions and on lookup overhead. The original intent was something like coda hale metrics where we could add dynamic lookup to other bits of instrumentation; in practise we measure durations and build counts/min/max.
[GitHub] [hadoop] simbadzina commented on pull request #4967: HDFS-16791 WIP - client protocol and Filesystem apis implemented and …
simbadzina commented on PR #4967: URL: https://github.com/apache/hadoop/pull/4967#issuecomment-1416276716

Javadoc issues are being worked on in the following two PRs:
https://github.com/apache/hadoop/pull/5344
https://github.com/apache/hadoop/pull/5226
[GitHub] [hadoop] simbadzina commented on pull request #4967: HDFS-16791 WIP - client protocol and Filesystem apis implemented and …
simbadzina commented on PR #4967: URL: https://github.com/apache/hadoop/pull/4967#issuecomment-1416266135 Could you fix the new checkstyle issues: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4967/11/artifact/out/results-checkstyle-root.txt
[GitHub] [hadoop] simbadzina commented on pull request #5322: HDFS-16896 clear ignoredNodes list when we clear deadnode list on ref…
simbadzina commented on PR #5322: URL: https://github.com/apache/hadoop/pull/5322#issuecomment-1416241192

I ran each of the failing unit test classes in Intellij individually.

> [2023-01-26T21:52:13.365Z] Reason | Tests
> [2023-01-26T21:52:13.365Z] Failed junit tests | hadoop.hdfs.server.datanode.TestDiskError
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.namenode.TestNameNodeReconfigure
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.namenode.TestReencryption
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.datanode.TestDataNodeMetricsLogger
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.namenode.TestNameNodeMXBean
> [2023-01-26T21:52:13.365Z] | hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes

They all passed with your PR. So the failures are just test flakiness.
[GitHub] [hadoop] virajjasani commented on pull request #5330: HDFS-16898. Make write lock fine-grain in method processCommandFromActor
virajjasani commented on PR #5330: URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1416207201

In the meantime, I have two nits if you would like to consider:

1. For `processCommandFromActive` and `processCommandFromStandby`, it would be good to pass only `actor.getNNSocketAddress()` instead of `actor`, because it's the namenode address that is logged for `BlockRecoveryWorker` logs and others in standby.
2. Would be great to change log level to WARN for this:

```
if (processCommandsMs > dnConf.getProcessCommandsThresholdMs()) {
  LOG.info("Took {} ms to process {} commands from NN", processCommandsMs,
      cmds.length);
}
```

With WARN level, it will likely come up front while debugging any slowness issues.
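The second nit above can be sketched as follows. This is an assumption-laden stand-in, not the DataNode code: java.util.logging takes the place of the SLF4J logger used in Hadoop, `thresholdMs` stands in for `dnConf.getProcessCommandsThresholdMs()`, and the class name is hypothetical.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: emit the slow-command message at WARN (WARNING in
// java.util.logging) once the configured threshold is crossed, so it stands
// out when debugging slowness.
final class SlowCommandLoggingSketch {
    private static final Logger LOG = Logger.getLogger("DataNode");

    // Returns true when the slow-command warning was emitted.
    static boolean warnIfSlow(long processCommandsMs, int numCommands, long thresholdMs) {
        if (processCommandsMs > thresholdMs) {
            LOG.log(Level.WARNING, String.format(
                "Took %d ms to process %d commands from NN",
                processCommandsMs, numCommands));
            return true;
        }
        return false;
    }
}
```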
[GitHub] [hadoop] virajjasani commented on pull request #5330: HDFS-16898. Make write lock fine-grain in method processCommandFromActor
virajjasani commented on PR #5330: URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1416204631

> Hi, @virajjasani . thanks for your careful review. Surely, before [HDFS-6788](https://issues.apache.org/jira/browse/HDFS-6788), this part was covered by synchronized lock. but in method `processCommandFromActive` and `processCommandFromStandby`, it just use the parameter actor to print log info. The lock here is just trying to decide actor is whether bpServiceToActive or not and determine to execute either processCommandFromActive or processCommandFromStandby.
>
> when occurs switchover between active namenode and standby namenode, the datanodes would be set to stale status, in stale status, we are not allowed to delete blocks directly, we put those blocks into postponedMisreplicatedBlocks. So, even we execute the DatanodeCommand from the previous active namenode(now standby), it is okay.

Thank you @hfutatzhanghb. I was just going to state that we don't need the write lock to verify whether the current actor is the one connected to the active namenode; a read lock would be sufficient. But it looks like you already made the change. I did a quick glance and we don't hit this log line in our clusters so far, but this PR has an interesting fix. I will check this further for any more resource contention.
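The locking point in the review above can be illustrated with a minimal sketch: checking which actor is active only reads shared state, so a read lock suffices and lets checks proceed concurrently, while the write lock is reserved for actually changing the active actor. Class and field names here are illustrative, not the BPOfferService code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of read-vs-write lock usage for the "is this actor
// the one connected to the active namenode?" check.
final class ActorStateSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private Object activeActor;

    boolean isActiveActor(Object actor) {
        lock.readLock().lock();      // shared: many checks may run concurrently
        try {
            return actor == activeActor;
        } finally {
            lock.readLock().unlock();
        }
    }

    void setActiveActor(Object actor) {
        lock.writeLock().lock();     // exclusive: state actually changes here
        try {
            this.activeActor = actor;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```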
[GitHub] [hadoop] snmvaughan commented on pull request #5343: HDFS-16905. Provide default hadoop.log.dir for tests
snmvaughan commented on PR #5343: URL: https://github.com/apache/hadoop/pull/5343#issuecomment-1416155276 I originally had it set to `./target/logs`, but changed it to `.` to match the other modules. I would think pointing it to `target` would make the most sense. Perhaps we should update all the `hadoop.log.dir` for tests?
[jira] [Commented] (HADOOP-18616) Java 11 JavaDoc fails due to missing package comments
[ https://issues.apache.org/jira/browse/HADOOP-18616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683964#comment-17683964 ]

ASF GitHub Bot commented on HADOOP-18616:
-----------------------------------------

snmvaughan commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1416148548

This is different because it addresses the JavaDoc errors. I'd be happy with any fix that unblocks other pull requests.

> Java 11 JavaDoc fails due to missing package comments
> -
>
> Key: HADOOP-18616
> URL: https://issues.apache.org/jira/browse/HADOOP-18616
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.4.0, 3.3.5, 3.3.9
> Environment: Yetus Java 11 OpenJDK JavaDoc
> Reporter: Steve Vaughan
> Assignee: Steve Vaughan
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> Submissions to `hadoop-common` fail in Yetus due to Java 11 JavaDoc errors:
> ```
> [ERROR] /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:21: error: unknown tag: InterfaceAudience.Private
> [ERROR] @InterfaceAudience.Private
> [ERROR] ^
> [ERROR] /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:22: error: unknown tag: InterfaceStability.Unstable
> [ERROR] @InterfaceStability.Unstable
> [ERROR] ^
> ```
[GitHub] [hadoop] snmvaughan commented on pull request #5344: HADOOP-18616. Java 11 JavaDoc fails due to missing package comments
snmvaughan commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1416148548 This is different because it addresses the JavaDoc errors. I'd be happy with any fix that unblocks other pull requests.
[jira] [Commented] (HADOOP-18612) Avoid mixing canonical and non-canonical when performing comparisons
[ https://issues.apache.org/jira/browse/HADOOP-18612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683962#comment-17683962 ]

ASF GitHub Bot commented on HADOOP-18612:
-----------------------------------------

snmvaughan commented on PR #5339: URL: https://github.com/apache/hadoop/pull/5339#issuecomment-1416139138

HADOOP-18616. Java 11 JavaDoc fails due to missing package comments #5344 would fix the JavaDoc issues. I pulled that out since it didn't feel right to include unrelated fixes.

> Avoid mixing canonical and non-canonical when performing comparisons
>
> Key: HADOOP-18612
> URL: https://issues.apache.org/jira/browse/HADOOP-18612
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 3.4.0, 3.3.5, 3.3.9
> Environment: Tests were run using the Hadoop development environment docker image.
> Reporter: Steve Vaughan
> Assignee: Steve Vaughan
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> The test mixes canonical and non-canonical paths and then perform comparisons. We can avoid unexpected failures by ensuring that comparisons are always made against canonical forms.
[GitHub] [hadoop] snmvaughan commented on pull request #5339: HADOOP-18612. Avoid mixing canonical and non-canonical when performing comparisons
snmvaughan commented on PR #5339: URL: https://github.com/apache/hadoop/pull/5339#issuecomment-1416139138 HADOOP-18616. Java 11 JavaDoc fails due to missing package comments #5344 would fix the JavaDoc issues. I pulled that out since it didn't feel right to include unrelated fixes.
[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base
[ https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683958#comment-17683958 ]

ASF GitHub Bot commented on HADOOP-18206:
-----------------------------------------

hadoop-yetus commented on PR #5315: URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1416133229
[GitHub] [hadoop] hadoop-yetus commented on pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future
hadoop-yetus commented on PR #5315: URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1416133229

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 58s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 28 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 23m 11s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 35m 18s | | trunk passed |
| +1 :green_heart: | compile | 30m 6s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | compile | 24m 21s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 6s | | trunk passed |
| +1 :green_heart: | mvnsite | 32m 48s | | trunk passed |
| -1 :x: | javadoc | 1m 30s | [/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/27/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 8m 26s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +0 :ok: | spotbugs | 0m 16s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| -1 :x: | spotbugs | 4m 59s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/27/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. |
| -1 :x: | spotbugs | 40m 36s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/27/artifact/out/branch-spotbugs-root-warnings.html) | root in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 66m 54s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 67m 13s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 35s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 47m 18s | | the patch passed |
| +1 :green_heart: | compile | 29m 21s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 |
| +1 :green_heart: | javac | 29m 21s | | root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 2814 unchanged - 5 fixed = 2814 total (was 2819) |
| +1 :green_heart: | compile | 24m 23s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 24m 23s | | root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 2612 unchanged - 5 fixed = 2612 total (was 2617) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 4m 23s | | root: The patch generated 0 new + 683 unchanged - 28 fixed = 683 total (was 711) |
| +1 :green_heart: | mvnsite | 24m 30s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| -1 :x: | javadoc | 1m 23s | [/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/27/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. |
| +1 :green_heart: | javadoc | 8m 19s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
| +0 :ok: | spotbugs | 0m 16s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 67m 26s | | patch has no errors when building and testing our client arti
[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base
[ https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683944#comment-17683944 ] ASF GitHub Bot commented on HADOOP-18206: - hadoop-yetus commented on PR #5315: URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1416118617 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 28 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 20m 58s | | Maven dependency ordering for branch | | -1 :x: | mvninstall | 38m 37s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | +1 :green_heart: | compile | 30m 7s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 25m 43s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 29m 48s | | trunk passed | | -1 :x: | javadoc | 1m 34s | [/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. 
| | +1 :green_heart: | javadoc | 8m 19s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 16s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 4m 44s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. | | -1 :x: | spotbugs | 41m 46s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-spotbugs-root-warnings.html) | root in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 67m 32s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 67m 50s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 35s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 47m 45s | | the patch passed | | +1 :green_heart: | compile | 28m 32s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 28m 32s | | root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 2813 unchanged - 5 fixed = 2813 total (was 2818) | | +1 :green_heart: | compile | 25m 34s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 25m 34s | | root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 2612 unchanged - 5 fixed = 2612 total (was 2617) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 51s | | root: The patch generated 0 new + 683 unchanged - 28 fixed = 683 total (was 711) | | +1 :green_heart: | mvnsite | 24m 2s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | -1 :x: | javadoc | 1m 20s | [/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in the pat
[GitHub] [hadoop] hadoop-yetus commented on pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future
hadoop-yetus commented on PR #5315: URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1416118617 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 28 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 20m 58s | | Maven dependency ordering for branch | | -1 :x: | mvninstall | 38m 37s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | +1 :green_heart: | compile | 30m 7s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 25m 43s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 29m 48s | | trunk passed | | -1 :x: | javadoc | 1m 34s | [/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. 
| | +1 :green_heart: | javadoc | 8m 19s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 16s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 4m 44s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. | | -1 :x: | spotbugs | 41m 46s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/branch-spotbugs-root-warnings.html) | root in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 67m 32s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 67m 50s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 35s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 47m 45s | | the patch passed | | +1 :green_heart: | compile | 28m 32s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 28m 32s | | root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 2813 unchanged - 5 fixed = 2813 total (was 2818) | | +1 :green_heart: | compile | 25m 34s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 25m 34s | | root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 2612 unchanged - 5 fixed = 2612 total (was 2617) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 51s | | root: The patch generated 0 new + 683 unchanged - 28 fixed = 683 total (was 711) | | +1 :green_heart: | mvnsite | 24m 2s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | -1 :x: | javadoc | 1m 20s | [/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/26/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 8m 27s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 15s | | hadoop-project has no dat
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5335: YARN-11426. Improve YARN NodeLabel Memory Display.
slfan1989 commented on code in PR #5335: URL: https://github.com/apache/hadoop/pull/5335#discussion_r1095999019

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/api/records/TestResource.java:

```
## @@ -42,4 +42,70 @@ void testCastToIntSafely() {
         "Cast to Integer.MAX_VALUE if the long is greater than " +
             "Integer.MAX_VALUE");
   }
+
+  @Test
+  public void testResourceFormatted() {
+    // We set 10MB
+    String expectedResult1 = "";
+    MockResource capability1 = new MockResource();
+    capability1.setMemory(10);
+    capability1.setVirtualCores(1);
+    assertEquals(capability1.toFormattedString(), expectedResult1);
+
+    // We set 1024 MB = 1GB
+    String expectedResult2 = "";
+    MockResource capability2 = new MockResource();
+    capability2.setMemory(1024);
+    capability2.setVirtualCores(1);
+    assertEquals(capability2.toFormattedString(), expectedResult2);
+
+    // We set 1024 * 1024 MB = 1024 GB = 1TB
+    String expectedResult3 = "";
+    MockResource capability3 = new MockResource();
+    capability3.setMemory(1024 * 1024);
+    capability3.setVirtualCores(1);
+    assertEquals(capability3.toFormattedString(), expectedResult3);
+
+    // We set 1024 * 1024 * 1024 MB = 1024 * 1024 GB = 1 * 1024 TB = 1 PB
+    String expectedResult4 = "";
+    MockResource capability4 = new MockResource();
+    capability4.setMemory(1024 * 1024 * 1024);
+    capability4.setVirtualCores(1);
+    assertEquals(capability4.toFormattedString(), expectedResult4);
+  }
+
+  class MockResource extends Resource {
```

Review Comment: Thanks for reviewing the code. In this unit test we want to verify that the return value of `toFormattedString()` matches the expected result. ![image](https://user-images.githubusercontent.com/55643692/216651973-eae6bbf2-5170-4339-9ce6-248d929cfaf5.png) The implementation classes of `Resource` live in a different package from its definition, so if we try to call them directly, a `ClassNotFoundException` is reported.
Using the class `LightWeightResource`: initializing it requires `org.apache.hadoop.yarn.LocalConfigurationProvider`, which lives in the `hadoop-yarn-common` module and cannot be referenced from here.

```
Resource resource = Resource.newInstance(10, 1);
String expectedResult1 = "";
assertEquals(resource.toFormattedString(), expectedResult1);

Error Msg:
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.LocalConfigurationProvider
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    ...
```

Using the class `ResourcePBImpl`, which is also under `hadoop-yarn-common` and likewise cannot be referenced:

```
Resource resource = spy(Resource.class);
String expectedResult1 = "";
when(resource.getResources()).thenReturn(new ResourceInformation[0]);
when(resource.getMemorySize()).thenReturn(10L);
when(resource.getVirtualCores()).thenReturn(1);
assertEquals(resource.toFormattedString(), expectedResult1);

Error Msg:
org.apache.commons.lang3.NotImplementedException: This method is implemented by ResourcePBImpl
...
```

So I chose to use `MockResource` to complete the unit test. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
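The mock-subclass workaround described above can be sketched outside Hadoop as follows. Note this is a self-contained illustration, not Hadoop's actual `Resource` API: the abstract base class, the `MockResourceSketch` driver, and the power-of-1024 unit-scaling logic in `toFormattedString()` are all assumptions made for the sketch, chosen to match the MB/GB/TB/PB expectations stated in the test comments.

```java
// Self-contained sketch of the MockResource idea: a test-only subclass
// with settable fields stands in for an implementation class that cannot
// be referenced from the test's module.
// NOTE: the formatting logic below is an assumed approximation,
// not Hadoop's real implementation.
abstract class Resource {
    public abstract long getMemorySize();   // memory in MB
    public abstract int getVirtualCores();

    // Hypothetical formatter: scale MB up through GB/TB/PB at powers of 1024.
    public String toFormattedString() {
        String[] units = {"MB", "GB", "TB", "PB"};
        double value = getMemorySize();
        int i = 0;
        while (value >= 1024 && i < units.length - 1) {
            value /= 1024;
            i++;
        }
        return String.format("<memory: %.0f %s, vCores: %d>",
                value, units[i], getVirtualCores());
    }
}

// Test-only subclass, mirroring the MockResource approach from the review.
class MockResource extends Resource {
    private long memoryMb;
    private int vCores;

    public void setMemory(long mb) { this.memoryMb = mb; }
    public void setVirtualCores(int cores) { this.vCores = cores; }

    @Override public long getMemorySize() { return memoryMb; }
    @Override public int getVirtualCores() { return vCores; }
}

public class MockResourceSketch {
    public static void main(String[] args) {
        MockResource r = new MockResource();
        r.setMemory(1024 * 1024);  // 1 TB expressed in MB
        r.setVirtualCores(1);
        System.out.println(r.toFormattedString());  // <memory: 1 TB, vCores: 1>
    }
}
```

Because the mock lives in the same package as the test, no cross-module class loading happens, which is exactly what avoids the `ClassNotFoundException` seen with the real implementation classes.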
[jira] [Updated] (HADOOP-18613) Upgrade ZooKeeper to version 3.8.1
[ https://issues.apache.org/jira/browse/HADOOP-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-18613:
    Labels: pull-request-available  (was: )

> Upgrade ZooKeeper to version 3.8.1
> --
>
>                 Key: HADOOP-18613
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18613
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>    Affects Versions: 3.3.4
>            Reporter: Tamas Penzes
>            Assignee: Tamas Penzes
>            Priority: Major
>              Labels: pull-request-available
>

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HADOOP-18613) Upgrade ZooKeeper to version 3.8.1
[ https://issues.apache.org/jira/browse/HADOOP-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683881#comment-17683881 ] ASF GitHub Bot commented on HADOOP-18613: - hadoop-yetus commented on PR #5345: URL: https://github.com/apache/hadoop/pull/5345#issuecomment-1415919530 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 45m 44s | | trunk passed | | +1 :green_heart: | compile | 0m 16s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 0m 18s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | shadedclient | 72m 52s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 0m 11s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | shadedclient | 27m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 14s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 104m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5345/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5345 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint | | uname | Linux 97bfd0150be1 4.15.0-197-generic #208-Ubuntu SMP Tue Nov 1 17:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7ed7f25f252f90c2e2580e849e44a6e70fa3e242 | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5345/1/testReport/ | | Max. process+thread count | 540 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5345/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > Upgrade ZooKeeper to version 3.8.1 > -- > > Key: HADOOP-18613 > URL: https://issues.apache.org/jira/browse/HADOOP-18613 > Project: Hadoop Co
[GitHub] [hadoop] hadoop-yetus commented on pull request #5345: HADOOP-18613. Upgrade ZooKeeper to version 3.8.1
hadoop-yetus commented on PR #5345: URL: https://github.com/apache/hadoop/pull/5345#issuecomment-1415919530 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 45m 44s | | trunk passed | | +1 :green_heart: | compile | 0m 16s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 0m 18s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | shadedclient | 72m 52s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | compile | 0m 11s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 0m 11s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | shadedclient | 27m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 14s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 104m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5345/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5345 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint | | uname | Linux 97bfd0150be1 4.15.0-197-generic #208-Ubuntu SMP Tue Nov 1 17:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7ed7f25f252f90c2e2580e849e44a6e70fa3e242 | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5345/1/testReport/ | | Max. process+thread count | 540 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5345/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional c
[GitHub] [hadoop] hadoop-yetus commented on pull request #5330: HDFS-16898. Make write lock fine-grain in method processCommandFromActor
hadoop-yetus commented on PR #5330: URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1415901934 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 46m 6s | | trunk passed | | +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 28s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 46s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 30s | | the patch passed | | +1 :green_heart: | compile | 1m 24s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 49s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 37s | | the patch passed | | +1 :green_heart: | shadedclient | 27m 4s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 288m 25s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5330/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 411m 51s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5330/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5330 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 40a8c60b12b3 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3d74257b46e8407b71c91540630e8a790851f2b1 | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5330/3/testReport/ | | Max. process+thread count | 3711 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5330/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache
[jira] [Assigned] (HADOOP-18613) Upgrade ZooKeeper to version 3.8.1
[ https://issues.apache.org/jira/browse/HADOOP-18613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Penzes reassigned HADOOP-18613:
    Assignee: Tamas Penzes

> Upgrade ZooKeeper to version 3.8.1
> --
>
>                 Key: HADOOP-18613
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18613
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>    Affects Versions: 3.3.4
>            Reporter: Tamas Penzes
>            Assignee: Tamas Penzes
>            Priority: Major
>

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base
[ https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683858#comment-17683858 ] ASF GitHub Bot commented on HADOOP-18206: - hadoop-yetus commented on PR #5315: URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1415804500 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 11m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 28 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 24m 59s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 30m 52s | | trunk passed | | +1 :green_heart: | compile | 22m 57s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 20m 32s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 54s | | trunk passed | | +1 :green_heart: | mvnsite | 25m 1s | | trunk passed | | -1 :x: | javadoc | 1m 26s | [/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/28/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. 
[GitHub] [hadoop] hadoop-yetus commented on pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future
hadoop-yetus commented on PR #5315: URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1415804500 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 11m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 28 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 24m 59s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 30m 52s | | trunk passed | | +1 :green_heart: | compile | 22m 57s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 20m 32s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 54s | | trunk passed | | +1 :green_heart: | mvnsite | 25m 1s | | trunk passed | | -1 :x: | javadoc | 1m 26s | [/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/28/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. 
| | +1 :green_heart: | javadoc | 7m 19s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 20s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 3m 52s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/28/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. | | -1 :x: | spotbugs | 32m 4s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/28/artifact/out/branch-spotbugs-root-warnings.html) | root in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 54m 16s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 54m 37s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 41s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 41m 8s | | the patch passed | | +1 :green_heart: | compile | 22m 29s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 22m 29s | | root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 2813 unchanged - 5 fixed = 2813 total (was 2818) | | +1 :green_heart: | compile | 20m 25s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 20m 25s | | root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 2614 unchanged - 5 fixed = 2614 total (was 2619) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 4m 12s | | root: The patch generated 0 new + 685 unchanged - 28 fixed = 685 total (was 713) | | +1 :green_heart: | mvnsite | 19m 52s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | -1 :x: | javadoc | 1m 14s | [/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/28/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt) | root in the patch failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04. | | +1 :green_heart: | javadoc | 7m 6s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 20s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 54m 22s | | patch has no errors when building and testing our client arti
[GitHub] [hadoop] susheel-gupta commented on pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure
susheel-gupta commented on PR #5295: URL: https://github.com/apache/hadoop/pull/5295#issuecomment-1415713461 > couple of belated comments > > 1. I don't see any need anywhere in the codebase to move to jupiter assertions. It makes backporting harder without offering any tangible benefits. >If the jupiter team chose to move classes to new packages, well, that's their choice. But if we are going to update test asserts, assertJ is a far better assert framework. >It's a richer assertion syntax, generates better messages and is already in the 3.3 line -which is why we are using for much of the new tests. >2, new pom imports should be added to hadoop-project and then referenced, so we can stay on top of the changes. > > I'm not going to suggest rolling this back -as it's in, and it was a big piece of work. It's just that in particular the code changes for jupiter assertions wasn't needed and it's potentially counter-productive. > > What I would propose is > > * followup PR to move the pom declarations up > * no new patches to move to jupiter asserts. stay on org.junit or embrace assertJ Hi @steveloughran, Thanks for the comments. I followed this previously merged pr https://github.com/apache/hadoop/pull/4771 by @aajisaka . I will create a followup ticket to move the pom declarations up. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18616) Java 11 JavaDoc fails due to missing package comments
[ https://issues.apache.org/jira/browse/HADOOP-18616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683817#comment-17683817 ] Steve Loughran commented on HADOOP-18616: - I think this is a duplicate of HADOOP-18576, which isn't being looked at > Java 11 JavaDoc fails due to missing package comments > - > > Key: HADOOP-18616 > URL: https://issues.apache.org/jira/browse/HADOOP-18616 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.4.0, 3.3.5, 3.3.9 > Environment: Yetus Java 11 OpenJDK JavaDoc >Reporter: Steve Vaughan >Assignee: Steve Vaughan >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.9 > > > Submissions to `hadoop-common` fail in Yetus due to Java 11 JavaDoc errors: > ``` > [ERROR] > /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:21: > error: unknown tag: InterfaceAudience.Private > [ERROR] @InterfaceAudience.Private > [ERROR] ^ > [ERROR] > /home/builder/src/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java:22: > error: unknown tag: InterfaceStability.Unstable > [ERROR] @InterfaceStability.Unstable > [ERROR] ^ > ``` -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18616) Java 11 JavaDoc fails due to missing package comments
[ https://issues.apache.org/jira/browse/HADOOP-18616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683816#comment-17683816 ] ASF GitHub Bot commented on HADOOP-18616: - steveloughran commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1415651771 it does fix 29 of the complaints though, which #5226 didn't. is it just the lack of a doc comment which broke things then, not the @interfaceAudience tag? if so, yes, let's merge -but use the original JIRA ID
[GitHub] [hadoop] steveloughran commented on pull request #5344: HADOOP-18616. Java 11 JavaDoc fails due to missing package comments
steveloughran commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1415651771 it does fix 29 of the complaints though, which #5226 didn't. is it just the lack of a doc comment which broke things then, not the @interfaceAudience tag? if so, yes, let's merge -but use the original JIRA ID
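If the hunch in the comment above is right — it was the missing package doc comment, not the annotations, that broke the Java 11 doclet — the repaired `package-info.java` would look roughly like the sketch below. The package and annotation names come from the error log quoted in HADOOP-18616; the doc comment text itself is illustrative.

```java
// Hypothetical package-info.java after the fix: a javadoc comment now
// precedes the package annotations, so Java 11 javadoc no longer misreads
// @InterfaceAudience.Private as an unknown block tag.
/**
 * Concurrency utility classes for internal Hadoop use.
 */
@InterfaceAudience.Private
@InterfaceStability.Unstable
package org.apache.hadoop.util.concurrent;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
```

Placing the imports after the package declaration is the standard `package-info.java` pattern; only the leading `/** ... */` comment is new relative to the failing file.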
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5343: HDFS-16905. Provide default hadoop.log.dir for tests
steveloughran commented on code in PR #5343: URL: https://github.com/apache/hadoop/pull/5343#discussion_r1095639304 ## hadoop-hdfs-project/hadoop-hdfs-client/src/test/resources/log4j.properties: ## @@ -16,6 +16,8 @@ # # log4j configuration used during build and unit tests +hadoop.log.dir=. Review Comment: where does this put the logs? in target/ ? should we have a subdir?
[GitHub] [hadoop] steveloughran commented on pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure
steveloughran commented on PR #5295: URL: https://github.com/apache/hadoop/pull/5295#issuecomment-1415624965 couple of belated comments 1. I don't see any need anywhere in the codebase to move to jupiter assertions. It makes backporting harder without offering any tangible benefits. If the jupiter team chose to move classes to new packages, well, that's their choice. But if we are going to update test asserts, assertJ is a far better assert framework. It's a richer assertion syntax, generates better messages and is already in the 3.3 line -which is why we are using it for much of the new tests. 2. new pom imports should be added to hadoop-project and then referenced, so we can stay on top of the changes. I'm not going to suggest rolling this back -as it's in, and it was a big piece of work. It's just that in particular the code changes for jupiter assertions weren't needed and it's potentially counter-productive. What I would propose is * followup PR to move the pom declarations up * no new patches to move to jupiter asserts. stay on org.junit or embrace assertJ
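To illustrate the "richer assertion syntax, generates better messages" point, here is a tiny hand-rolled class that only mimics the AssertJ fluent style (`assertThat(...).as(...).isEqualTo(...)`) — it is a toy for this discussion, not AssertJ itself:

```java
import java.util.Objects;

// Toy fluent-assertion helper mimicking the AssertJ style. The point of
// the fluent form is that a failure carries its own context string,
// instead of the bare "expected X but was Y" of plain assertEquals.
class FluentCheck<T> {
    private final T actual;
    private String description = "";

    private FluentCheck(T actual) { this.actual = actual; }

    static <T> FluentCheck<T> assertThat(T actual) {
        return new FluentCheck<>(actual);
    }

    // Attach human-readable context to the assertion.
    FluentCheck<T> as(String description) {
        this.description = description;
        return this;
    }

    FluentCheck<T> isEqualTo(T expected) {
        if (!Objects.equals(actual, expected)) {
            throw new AssertionError(
                "[" + description + "] expected: <" + expected
                    + "> but was: <" + actual + ">");
        }
        return this;
    }
}
```

With this, `FluentCheck.assertThat(count).as("reader count").isEqualTo(4)` fails with a message naming "reader count", which is the kind of self-describing failure the comment is advocating via AssertJ.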
[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure
szilard-nemeth commented on code in PR #5295: URL: https://github.com/apache/hadoop/pull/5295#discussion_r1095627121 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml: ## @@ -100,6 +100,39 @@ test-jar test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.jupiter + junit-jupiter-params + test + + + org.mockito + mockito-junit-jupiter + 4.11.0 Review Comment: Hi @steveloughran , Fair point. @susheel-gupta Could you please open a follow-up jira for this? Thanks.
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure
steveloughran commented on code in PR #5295: URL: https://github.com/apache/hadoop/pull/5295#discussion_r1095617154 ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml: ## @@ -100,6 +100,39 @@ test-jar test + + org.junit.jupiter + junit-jupiter-api + test + + + org.junit.jupiter + junit-jupiter-engine + test + + + org.junit.jupiter + junit-jupiter-params + test + + + org.mockito + mockito-junit-jupiter + 4.11.0 Review Comment: can we have these versioned imports pulled up into the hadoop-project pom for (a) version maintenance and (b) ease of using an IDE to find where things are used. this is particularly important for mockito as it is so brittle
[jira] [Commented] (HADOOP-18598) maven site generation doesn't include javadocs
[ https://issues.apache.org/jira/browse/HADOOP-18598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683795#comment-17683795 ] Steve Loughran commented on HADOOP-18598: - just to confirm: 3.3.5 site docs are good...this blocker is fixed! > maven site generation doesn't include javadocs > -- > > Key: HADOOP-18598 > URL: https://issues.apache.org/jira/browse/HADOOP-18598 > Project: Hadoop Common > Issue Type: Bug > Components: site >Affects Versions: 3.3.5 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > Labels: pull-request-available > Fix For: 3.4.0, 3.3.5 > > > the rc0 excluded all the site docs. running mvn site on trunk throws up site > plugin issues, which may be related, so start by updating that. > rc validation scripts to include checks for the api/index.html
[GitHub] [hadoop] haiyang1987 commented on a diff in pull request #5301: HDFS-16892. Fix method name of RPC.Builder#setnumReaders
haiyang1987 commented on code in PR #5301: URL: https://github.com/apache/hadoop/pull/5301#discussion_r1095503796 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java: ## @@ -901,7 +901,7 @@ public Builder setNumHandlers(int numHandlers) { * @return Default: -1. * @param numReaders input numReaders. */ -public Builder setnumReaders(int numReaders) { +public Builder setNumReaders(int numReaders) { Review Comment: Hi @tomscut @steveloughran @ayushtkn thanks for helping review it. sorry for the late reply. Considering that it is a public method in a public class, and to avoid compilation failures in downstream use, we can create a new method as @steveloughran said, and keep the old method as deprecated, which may be better. I will update PR later.
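The compatibility pattern being proposed — add the correctly capitalized setter and keep the old misspelled one as a deprecated delegate, so existing callers still compile — can be sketched like this (a toy `Builder`, not the actual `RPC.Builder` code):

```java
// Toy sketch of the deprecation-and-delegate pattern discussed above.
// Field and class names are illustrative only.
class Builder {
    private int numReaders = -1;

    /** Preferred setter with conventional camelCase naming. */
    public Builder setNumReaders(int numReaders) {
        this.numReaders = numReaders;
        return this;
    }

    /**
     * Old misspelled setter, retained so downstream callers keep compiling.
     * @deprecated use {@link #setNumReaders(int)} instead.
     */
    @Deprecated
    public Builder setnumReaders(int numReaders) {
        return setNumReaders(numReaders);
    }

    public int getNumReaders() {
        return numReaders;
    }
}
```

Callers on the old name keep working (with a deprecation warning), and the old method can be removed in a later major release.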
[jira] [Commented] (HADOOP-18616) Java 11 JavaDoc fails due to missing package comments
[ https://issues.apache.org/jira/browse/HADOOP-18616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683741#comment-17683741 ] ASF GitHub Bot commented on HADOOP-18616: - ayushtkn commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1415328592 Dupes #5226 / HADOOP-18576
[GitHub] [hadoop] ayushtkn commented on pull request #5344: HADOOP-18616. Java 11 JavaDoc fails due to missing package comments
ayushtkn commented on PR #5344: URL: https://github.com/apache/hadoop/pull/5344#issuecomment-1415328592 Dupes #5226 / HADOOP-18576
[GitHub] [hadoop] hadoop-yetus commented on pull request #5324: HDFS-16895. NamenodeHeartbeatService should use credentials of logged…
hadoop-yetus commented on PR #5324: URL: https://github.com/apache/hadoop/pull/5324#issuecomment-1415309397 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 6s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 45m 59s | | trunk passed | | +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 26m 52s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 39s | | the patch passed | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javac | 0m 36s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 23s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 20s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 40m 29s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 153m 54s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5324/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5324 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux d89b254fdd75 4.15.0-197-generic #208-Ubuntu SMP Tue Nov 1 17:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d51321b31a07e4b7fa42bc739a8fc1c2a573e417 | | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5324/6/testReport/ | | Max. process+thread count | 2415 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5324/6/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] Hexiaoqiao commented on pull request #5330: HDFS-16898. Make write lock fine-grain in method processCommandFromActor
Hexiaoqiao commented on PR #5330: URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1415302521 > @Hexiaoqiao, thanks for your reply~ I will try to draw some pictures to describe it soon. Great. It will be more helpful to push this improvement forward. cc @zhangshuyan0 would you mind taking another review?
[GitHub] [hadoop] hfutatzhanghb commented on pull request #5330: HDFS-16898. Make write lock fine-grain in method processCommandFromActor
hfutatzhanghb commented on PR #5330: URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1415288788 @Hexiaoqiao, thanks for your reply~ I will try to draw some pictures to describe it soon.
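As a generic illustration of the fine-grained locking idea behind HDFS-16898 — hold the write lock only around the shared-state mutation, not around work that touches no shared state — here is a minimal sketch. The class and method names are invented for this example; this is not the actual DataNode code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: narrowing a write lock's critical section. In the coarse form
// the expensive preparation work runs while the write lock is held; in
// the fine-grained form it runs outside, and only the shared-state
// update is protected, reducing lock contention.
class CommandProcessor {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long processed = 0;

    // Coarse version: everything under the write lock.
    void processCoarse(Runnable expensivePrep) {
        lock.writeLock().lock();
        try {
            expensivePrep.run();   // touches no shared state, yet blocks readers
            processed++;           // the only shared-state mutation
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Fine-grained version: prep runs outside the critical section.
    void processFine(Runnable expensivePrep) {
        expensivePrep.run();       // safe outside the lock: no shared state involved
        lock.writeLock().lock();
        try {
            processed++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    long processed() {
        lock.readLock().lock();
        try {
            return processed;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

Both variants produce the same result; the fine-grained one simply keeps the writer's critical section as short as possible, which is the improvement the PR discussion is weighing.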