[jira] [Work logged] (HDFS-16283) RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16283?focusedWorklogId=671838&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671838
 ]

ASF GitHub Bot logged work on HDFS-16283:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 06:31
Start Date: 29/Oct/21 06:31
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on pull request #3595:
URL: https://github.com/apache/hadoop/pull/3595#issuecomment-954466025


   Thanks for bringing this improvement. Sorry, I did not see the discussion on
the mailing list, so I am replying here directly.
   I am not sure that exposing the namespace id to the client is the best
solution; in my opinion it will confuse the end-user.
   `If we are adding new protocols, maybe we can add "renewLease(String
clientName, String path)" and let Router do the resolve from path to namespace.`
   In my experience, `path` is the better choice here. Strong +1. The Router
can resolve it, which keeps the logic consistent with the other interfaces. The
bad news is that it increases the number of RPC invocations. Still, I think
this solution will not break anything.
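
   For readers following the thread, here is a minimal sketch of what the
path-based variant could look like, assuming the Router-side helpers
getLocationsForPath and invokeSequential behave as they do for other calls.
The signature and the Router-side body below are illustrative only, not the
final patch:
   ```java
   // ClientProtocol (sketch): path-based variant of renewLease().
   @Idempotent
   void renewLease(String clientName, String path) throws IOException;

   // RouterRpcServer (sketch): resolve the path to its namespace and
   // forward the renewal only to the NameNode(s) that own that path,
   // instead of fanning out to every namespace.
   public void renewLease(String clientName, String path) throws IOException {
     final List<RemoteLocation> locations = getLocationsForPath(path, false);
     RemoteMethod method = new RemoteMethod("renewLease",
         new Class<?>[] {String.class}, clientName);
     rpcClient.invokeSequential(locations, method);
   }
   ```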


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671838)
Time Spent: 50m  (was: 40m)

> RBF: improve renewLease() to call only a specific NameNode rather than make 
> fan-out calls
> -
>
> Key: HDFS-16283
> URL: https://issues.apache.org/jira/browse/HDFS-16283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently renewLease() against a router makes fan-out calls to all the
> NameNodes. Since renewLease() is called so frequently, if one of the NameNodes
> is slow, the router queues eventually become blocked by renewLease() calls,
> causing router degradation.
> We will make a change on the client side to keep track of the NameNode Id in
> addition to the current fileId, so routers understand which NameNode the
> client is renewing its lease against.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=671833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671833
 ]

ASF GitHub Bot logged work on HDFS-16269:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 05:50
Start Date: 29/Oct/21 05:50
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#discussion_r738950223



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java
##
@@ -166,4 +166,31 @@ public void testNNThroughputForAppendOp() throws Exception {
      }
    }
  }
+
+  /**
+   * This test runs {@link NNThroughputBenchmark} against a mini DFS cluster
+   * for block report operation.
+   */
+  @Test(timeout = 120000)
+  public void testNNThroughputForBlockReportOp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+    conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
+      cluster.waitActive();
+
+      final Configuration benchConf = new HdfsConfiguration();
+      benchConf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+      benchConf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+      NNThroughputBenchmark.runBenchmark(benchConf,
+          new String[]{"-fs", cluster.getURI().toString(), "-op",
+              "blockReport", "-datanodes", "3", "-reports", "2"});
+    } finally {
+      if (cluster != null) {
+        cluster.shutdown();
+      }
+    }

Review comment:
   Sorry, this was my mistake.
   Thanks @aajisaka for the reminder.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671833)
Time Spent: 2h 50m  (was: 2h 40m)

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> -
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, you
> will get an exception.
> Commands used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>  -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark: 
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)

[jira] [Work logged] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=671817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671817
 ]

ASF GitHub Bot logged work on HDFS-16269:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 05:25
Start Date: 29/Oct/21 05:25
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on a change in pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#discussion_r738940853



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java
##
@@ -166,4 +166,31 @@ public void testNNThroughputForAppendOp() throws Exception {
      }
    }
  }
+
+  /**
+   * This test runs {@link NNThroughputBenchmark} against a mini DFS cluster
+   * for block report operation.
+   */
+  @Test(timeout = 120000)
+  public void testNNThroughputForBlockReportOp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+    conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
+      cluster.waitActive();
+
+      final Configuration benchConf = new HdfsConfiguration();
+      benchConf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+      benchConf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+      NNThroughputBenchmark.runBenchmark(benchConf,
+          new String[]{"-fs", cluster.getURI().toString(), "-op",
+              "blockReport", "-datanodes", "3", "-reports", "2"});
+    } finally {
+      if (cluster != null) {
+        cluster.shutdown();
+      }
+    }

Review comment:
   The update is not correct. The try-with-resources clause will be as follows:
   ```java
    try (MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(3).build()) {
      cluster.waitActive();
      final Configuration benchConf = new HdfsConfiguration();
      benchConf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
      benchConf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
      NNThroughputBenchmark.runBenchmark(benchConf,
          new String[]{"-fs", cluster.getURI().toString(), "-op",
              "blockReport", "-datanodes", "3", "-reports", "2"});
    }
   ```
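
   (A brief note on why this form is preferable: MiniDFSCluster implements
AutoCloseable, so try-with-resources shuts the cluster down automatically even
when the benchmark throws, which is exactly what the explicit try/finally in
the patch does by hand.)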




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671817)
Time Spent: 2h 40m  (was: 2.5h)

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> -
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, you
> will get an exception.
> Commands used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>  -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark: 
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009

[jira] [Work logged] (HDFS-16266) Add remote port information to HDFS audit log

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16266?focusedWorklogId=671784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671784
 ]

ASF GitHub Bot logged work on HDFS-16266:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 03:30
Start Date: 29/Oct/21 03:30
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3538:
URL: https://github.com/apache/hadoop/pull/3538#issuecomment-954401714


   Hi @tasanuma @aajisaka @ferhui @ayushtkn, could you also take a look?
Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671784)
Time Spent: 5h 40m  (was: 5.5h)

> Add remote port information to HDFS audit log
> -
>
> Key: HDFS-16266
> URL: https://issues.apache.org/jira/browse/HDFS-16266
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter a problem where a
> user submits an abnormal computation task that causes a sudden flood of
> requests; the queueTime and processingTime of the NameNode then rise very
> high, creating a large backlog of tasks.
> We usually locate and kill the specific Spark, Flink, or MapReduce tasks
> based on metrics and audit logs. Currently, IP and UGI are recorded in audit
> logs, but there is no port information, so it is sometimes difficult to
> locate the specific process. Therefore, I propose that we add the port
> information to the audit log, so that we can easily track the upstream
> process.
> Currently, some projects, such as HBase and Alluxio, include port information
> in their audit logs. I think it is also necessary to add port information to
> the HDFS audit log.
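
As an illustration of the proposal (the exact field layout is decided by the
patch; the two lines below are hypothetical examples, not output of the final
change), a standard HDFS audit entry records only the caller's IP, while the
proposed change would extend the ip field with the remote port:

```
allowed=true ugi=alice (auth:SIMPLE) ip=/10.1.2.3       cmd=create src=/data/app/f1 dst=null perm=alice:hadoop:rw-r--r-- proto=rpc
allowed=true ugi=alice (auth:SIMPLE) ip=/10.1.2.3:39422 cmd=create src=/data/app/f1 dst=null perm=alice:hadoop:rw-r--r-- proto=rpc
```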



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16290) Make log more standardized when executing verifyAndSetNamespaceInfo()

2021-10-28 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16290 started by JiangHua Zhu.
---
> Make log more standardized when executing verifyAndSetNamespaceInfo()
> -
>
> Key: HDFS-16290
> URL: https://issues.apache.org/jira/browse/HDFS-16290
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When verifyAndSetNamespaceInfo() is executed, the log records some
> information, e.g.:
> '
> 2021-10-27 18:08:36,242 [50867]-INFO 
> [Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
> handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
> 9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
> .xxx.xxx.org/.xxx.xxx.xxx:8021
> '
> Here, 'handshake' and 'Block pool' run together without any separator, so
> readability is poor.
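
A minimal sketch of the kind of one-line fix this implies (the exact statement
in BPOfferService may differ; both lines below are illustrative, assuming the
block pool description comes from this.toString()):

```java
// Before (sketch): the block pool description is concatenated directly,
// producing "...during handshakeBlock pool BP-...".
LOG.info("Acknowledging ACTIVE Namenode during handshake" + this);

// After (sketch): a separator keeps the two parts readable.
LOG.info("Acknowledging ACTIVE Namenode during handshake, " + this);
```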



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16290) Make log more standardized when executing verifyAndSetNamespaceInfo()

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16290:
--
Labels: pull-request-available  (was: )

> Make log more standardized when executing verifyAndSetNamespaceInfo()
> -
>
> Key: HDFS-16290
> URL: https://issues.apache.org/jira/browse/HDFS-16290
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When verifyAndSetNamespaceInfo() is executed, the log records some
> information, e.g.:
> '
> 2021-10-27 18:08:36,242 [50867]-INFO 
> [Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
> handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
> 9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
> .xxx.xxx.org/.xxx.xxx.xxx:8021
> '
> Here, 'handshake' and 'Block pool' run together without any separator, so
> readability is poor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16290) Make log more standardized when executing verifyAndSetNamespaceInfo()

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16290?focusedWorklogId=671782&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671782
 ]

ASF GitHub Bot logged work on HDFS-16290:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 03:29
Start Date: 29/Oct/21 03:29
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #3600:
URL: https://github.com/apache/hadoop/pull/3600


   
   ### Description of PR
   When the DataNode starts, BPOfferService#verifyAndSetNamespaceInfo() is
executed. If the NameNode being connected to is in the Active state, the
following information is logged:
   '
   2021-10-27 18:08:36,242 [50867]-INFO 
[Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
.xxx.xxx.org/.xxx.xxx.xxx:8021
   '
   Here, 'handshake' and 'Block pool' run together without any separator, and
readability is poor.
   
   ### How was this patch tested?
   This change only touches a log message, so very little needs to change and
the testing burden is small.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671782)
Remaining Estimate: 0h
Time Spent: 10m

> Make log more standardized when executing verifyAndSetNamespaceInfo()
> -
>
> Key: HDFS-16290
> URL: https://issues.apache.org/jira/browse/HDFS-16290
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When verifyAndSetNamespaceInfo() is executed, the log records some
> information, e.g.:
> '
> 2021-10-27 18:08:36,242 [50867]-INFO 
> [Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
> handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
> 9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
> .xxx.xxx.org/.xxx.xxx.xxx:8021
> '
> Here, 'handshake' and 'Block pool' run together without any separator, so
> readability is poor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16266) Add remote port information to HDFS audit log

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16266?focusedWorklogId=671780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671780
 ]

ASF GitHub Bot logged work on HDFS-16266:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 03:27
Start Date: 29/Oct/21 03:27
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3538:
URL: https://github.com/apache/hadoop/pull/3538#issuecomment-954401049


   > lgtm +1
   
   Thanks @jojochuang for your comment.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671780)
Time Spent: 5.5h  (was: 5h 20m)

> Add remote port information to HDFS audit log
> -
>
> Key: HDFS-16266
> URL: https://issues.apache.org/jira/browse/HDFS-16266
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter a problem where a
> user submits an abnormal computation task that causes a sudden flood of
> requests; the queueTime and processingTime of the NameNode then rise very
> high, creating a large backlog of tasks.
> We usually locate and kill the specific Spark, Flink, or MapReduce tasks
> based on metrics and audit logs. Currently, IP and UGI are recorded in audit
> logs, but there is no port information, so it is sometimes difficult to
> locate the specific process. Therefore, I propose that we add the port
> information to the audit log, so that we can easily track the upstream
> process.
> Currently, some projects, such as HBase and Alluxio, include port information
> in their audit logs. I think it is also necessary to add port information to
> the HDFS audit log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16290) Make log more standardized when executing verifyAndSetNamespaceInfo()

2021-10-28 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-16290:

Component/s: datanode

> Make log more standardized when executing verifyAndSetNamespaceInfo()
> -
>
> Key: HDFS-16290
> URL: https://issues.apache.org/jira/browse/HDFS-16290
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: JiangHua Zhu
>Priority: Minor
>
> When verifyAndSetNamespaceInfo() is executed, the log records some
> information, e.g.:
> '
> 2021-10-27 18:08:36,242 [50867]-INFO 
> [Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
> handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
> 9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
> .xxx.xxx.org/.xxx.xxx.xxx:8021
> '
> Here, 'handshake' and 'Block pool' run together without any separator, so
> readability is poor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16290) Make log more standardized when executing verifyAndSetNamespaceInfo()

2021-10-28 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu reassigned HDFS-16290:
---

Assignee: JiangHua Zhu

> Make log more standardized when executing verifyAndSetNamespaceInfo()
> -
>
> Key: HDFS-16290
> URL: https://issues.apache.org/jira/browse/HDFS-16290
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>
> When verifyAndSetNamespaceInfo() is executed, the log records some
> information, e.g.:
> '
> 2021-10-27 18:08:36,242 [50867]-INFO 
> [Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
> handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
> 9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
> .xxx.xxx.org/.xxx.xxx.xxx:8021
> '
> Here, 'handshake' and 'Block pool' run together without any separator, so
> readability is poor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16290) Make log more standardized when executing verifyAndSetNamespaceInfo()

2021-10-28 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16290:
---

 Summary: Make log more standardized when executing 
verifyAndSetNamespaceInfo()
 Key: HDFS-16290
 URL: https://issues.apache.org/jira/browse/HDFS-16290
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: JiangHua Zhu


When verifyAndSetNamespaceInfo() is executed, the log records some
information, e.g.:
'
2021-10-27 18:08:36,242 [50867]-INFO 
[Thread-33:BPOfferService@376]-Acknowledging ACTIVE Namenode during 
handshakeBlock pool BP-597961518-xxx.xxx.xxx.xxx-1534758275943 (Datanode Uuid 
9b2aedc9-f8b2 -4ee2-b99f-877bc6e42c87) service to 
.xxx.xxx.org/.xxx.xxx.xxx:8021
'
Here, 'handshake' and 'Block pool' run together without any separator, so
readability is poor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16287) Support to make dfs.namenode.avoid.read.slow.datanode and dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16287?focusedWorklogId=671773&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671773
 ]

ASF GitHub Bot logged work on HDFS-16287:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 02:58
Start Date: 29/Oct/21 02:58
Worklog Time Spent: 10m 
  Work Description: haiyang1987 commented on pull request #3596:
URL: https://github.com/apache/hadoop/pull/3596#issuecomment-954368614


   I will fix the problems mentioned above and update the PR later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671773)
Time Spent: 0.5h  (was: 20m)

> Support to make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
> ---
>
> Key: HDFS-16287
> URL: https://issues.apache.org/jira/browse/HDFS-16287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> 1. Make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable, 
> so that the features from 
> [HDFS-16076|https://issues.apache.org/jira/browse/HDFS-16076] and 
> [HDFS-15879|https://issues.apache.org/jira/browse/HDFS-15879] can be rolled 
> back quickly if anything unexpected happens in a production environment.
> 2. In DatanodeManager, when choosing targets for blocks, consider the logic 
> that filters out slow nodes.
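
For context, NameNode properties become reconfigurable through the framework
inherited from ReconfigurableBase, which lets operators apply them at runtime
via "hdfs dfsadmin -reconfig namenode <host:port> start". A hedged sketch of
what wiring such a key in might look like (the constant, default, and setter
names below are assumptions for illustration, not the actual patch):

```java
// Sketch: handling a newly reconfigurable key inside NameNode.
// reconfigurePropertyImpl() is the hook the reconfiguration framework
// invokes for each property being changed.
@Override
protected String reconfigurePropertyImpl(String property, String newVal)
    throws ReconfigurationException {
  // Hypothetical constant for "dfs.namenode.avoid.read.slow.datanode".
  if (DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_KEY.equals(property)) {
    boolean enable = (newVal == null)
        ? DFS_NAMENODE_AVOID_SLOW_DATANODE_FOR_READ_DEFAULT
        : Boolean.parseBoolean(newVal);
    // Hypothetical setter pushing the new value into DatanodeManager.
    namesystem.getBlockManager().getDatanodeManager()
        .setAvoidSlowDataNodesForRead(enable);
    return Boolean.toString(enable);
  }
  throw new ReconfigurationException(property, newVal,
      getConf().get(property));
}
```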



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16283) RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16283?focusedWorklogId=671756&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671756
 ]

ASF GitHub Bot logged work on HDFS-16283:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 02:10
Start Date: 29/Oct/21 02:10
Worklog Time Spent: 10m 
  Work Description: symious commented on a change in pull request #3595:
URL: https://github.com/apache/hadoop/pull/3595#discussion_r738886825



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
##
@@ -765,6 +765,14 @@ BatchedDirectoryListing getBatchedListing(
   @Idempotent
   void renewLease(String clientName) throws IOException;
 
+  /**
+   * The functionality is the same as renewLease(clientName). This is to
+   * support a router-based FileSystem renewing the lease against a specific
+   * target FileSystem instead of all the target FileSystems in each call.
+   */
+  @Idempotent
+  void renewLease(String clientName, String nsId) throws IOException;

Review comment:
   In an RBF cluster, the user shouldn't need to be aware of the namespace
they are operating on.
   If we are adding new protocols, maybe we can add "renewLease(String 
clientName, String path)" and let Router do the resolve from path to namespace.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671756)
Time Spent: 40m  (was: 0.5h)

> RBF: improve renewLease() to call only a specific NameNode rather than make 
> fan-out calls
> -
>
> Key: HDFS-16283
> URL: https://issues.apache.org/jira/browse/HDFS-16283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently renewLease() against a router makes fan-out calls to all the
> NameNodes. Since renewLease() is called so frequently, if one of the NameNodes
> is slow, the router queues eventually become blocked by renewLease() calls,
> causing router degradation.
> We will make a change on the client side to keep track of the NameNode Id in
> addition to the current fileId, so routers understand which NameNode the
> client is renewing its lease against.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=671754&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671754
 ]

ASF GitHub Bot logged work on HDFS-16269:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 02:05
Start Date: 29/Oct/21 02:05
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#issuecomment-954350998


   @virajjasani @ferhui, would you be willing to spend some time helping
review this PR?
   Thank you very much.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671754)
Time Spent: 2.5h  (was: 2h 20m)

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> -
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, you
> will get an exception.
> Commands used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>  -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark: 
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> I checked the code and found that the problem appears here:
> private ExtendedBlock addBlocks(String fileName, String clientName)
>     throws IOException {
>   for (DatanodeInfo dnInfo : loc.getLocations()) {
>     int dnIdx = dnInfo.getXferPort() - 1;
>     datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
>   }
> }
> From this we can see that dnInfo.getXferPort() returns a port number, which
> should not be used as an array index.
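
A hedged sketch of the direction of the fix (the committed patch may differ;
the getXferAddr() lookup below is an assumption for illustration): map each
reported location back to its datanode slot instead of treating the raw port
(e.g. 50009) as an index.

```java
// Sketch: find the simulated datanode whose transfer address matches
// the reported location, rather than indexing by the raw port value.
for (DatanodeInfo dnInfo : loc.getLocations()) {
  for (int i = 0; i < datanodes.length; i++) {
    if (datanodes[i].getXferAddr().equals(dnInfo.getXferAddr())) {
      datanodes[i].addBlock(loc.getBlock().getLocalBlock());
      break;
    }
  }
}
```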



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=671753&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671753
 ]

ASF GitHub Bot logged work on HDFS-16269:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 02:01
Start Date: 29/Oct/21 02:01
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#issuecomment-954349472


   Thanks for your comments and reviews, @aajisaka @jojochuang.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671753)
Time Spent: 2h 20m  (was: 2h 10m)

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> -
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, you
> will get an exception.
> Commands used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>  -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark: 
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> I checked the code and found that the problem appears here:
> private ExtendedBlock addBlocks(String fileName, String clientName)
>     throws IOException {
>   for (DatanodeInfo dnInfo : loc.getLocations()) {
>     int dnIdx = dnInfo.getXferPort() - 1;
>     datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
>   }
> }
> From this we can see that dnInfo.getXferPort() returns a port number, which
> should not be used as an array index.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16279) Print detail datanode info when process first storage report

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16279?focusedWorklogId=671743&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671743
 ]

ASF GitHub Bot logged work on HDFS-16279:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 01:31
Start Date: 29/Oct/21 01:31
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3564:
URL: https://github.com/apache/hadoop/pull/3564#issuecomment-954339436


   Thanks @tasanuma @aajisaka @ferhui for your review and kind suggestions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671743)
Time Spent: 3h 40m  (was: 3.5h)

> Print detail datanode info when process first storage report
> 
>
> Key: HDFS-16279
> URL: https://issues.apache.org/jira/browse/HDFS-16279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
> Attachments: image-2021-10-19-20-37-55-850.png
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Print detailed datanode info when processing the block report.
> !image-2021-10-19-20-37-55-850.png|width=547,height=98!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16279) Print detail datanode info when process first storage report

2021-10-28 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-16279.
-
Fix Version/s: 3.2.4
   3.3.2
   3.4.0
   Resolution: Fixed

> Print detail datanode info when process first storage report
> 
>
> Key: HDFS-16279
> URL: https://issues.apache.org/jira/browse/HDFS-16279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
> Attachments: image-2021-10-19-20-37-55-850.png
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Print detailed datanode info when processing the block report.
> !image-2021-10-19-20-37-55-850.png|width=547,height=98!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16279) Print detail datanode info when process first storage report

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16279?focusedWorklogId=671740&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671740
 ]

ASF GitHub Bot logged work on HDFS-16279:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 01:28
Start Date: 29/Oct/21 01:28
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #3564:
URL: https://github.com/apache/hadoop/pull/3564#issuecomment-954338444


   Merged. Thanks for your contribution, @tomscut. Thanks for your review and 
discussion, @aajisaka and @ferhui.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671740)
Time Spent: 3.5h  (was: 3h 20m)

> Print detail datanode info when process first storage report
> 
>
> Key: HDFS-16279
> URL: https://issues.apache.org/jira/browse/HDFS-16279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2021-10-19-20-37-55-850.png
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Print detailed datanode info when processing the block report.
> !image-2021-10-19-20-37-55-850.png|width=547,height=98!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16279) Print detail datanode info when process first storage report

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16279?focusedWorklogId=671739&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671739
 ]

ASF GitHub Bot logged work on HDFS-16279:
-

Author: ASF GitHub Bot
Created on: 29/Oct/21 01:27
Start Date: 29/Oct/21 01:27
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #3564:
URL: https://github.com/apache/hadoop/pull/3564


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671739)
Time Spent: 3h 20m  (was: 3h 10m)

> Print detail datanode info when process first storage report
> 
>
> Key: HDFS-16279
> URL: https://issues.apache.org/jira/browse/HDFS-16279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2021-10-19-20-37-55-850.png
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Print detailed datanode info when processing the block report.
> !image-2021-10-19-20-37-55-850.png|width=547,height=98!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16285) Make HDFS ownership tools cross platform

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16285?focusedWorklogId=671619&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671619
 ]

ASF GitHub Bot logged work on HDFS-16285:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 20:07
Start Date: 28/Oct/21 20:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3588:
URL: https://github.com/apache/hadoop/pull/3588#issuecomment-954162614


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  60m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  63m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 152m 44s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed CTEST tests | test_libhdfs_threaded_hdfspp_test_shim_static |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3588 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux fc47c36be751 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / af54877310df528a209a06e3cb413ba35e4342d0 |
   | Default Java | Red Hat, Inc.-1.8.0_302-b08 |
   | CTEST | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/4/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/4/testReport/ |
   | Max. process+thread count | 591 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/4/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671619)
Time Spent: 40m  (was: 0.5h)

> Make HDFS ownership tools cross platform
> 
>
> Key: HDFS-16285
> URL: https://issues.apache.org/jira/browse/HDFS-16285
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> 

[jira] [Work logged] (HDFS-16283) RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16283?focusedWorklogId=671590&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671590
 ]

ASF GitHub Bot logged work on HDFS-16283:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 19:09
Start Date: 28/Oct/21 19:09
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3595:
URL: https://github.com/apache/hadoop/pull/3595#discussion_r738674897



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
##
@@ -765,6 +765,14 @@ BatchedDirectoryListing getBatchedListing(
   @Idempotent
   void renewLease(String clientName) throws IOException;
 
+  /**
+   * The functionality is the same as renewLease(clientName). This is to
+   * support a router-based FileSystem renewing the lease against a specific
+   * target FileSystem instead of all the target FileSystems in each call.
+   */
+  @Idempotent
+  void renewLease(String clientName, String nsId) throws IOException;

Review comment:
   Changing the client protocol requires more than just an RBF PR.
   We should bring this topic up on the mailing list.
   The good news is that this is an additional call and does not modify the
existing one.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671590)
Time Spent: 0.5h  (was: 20m)

> RBF: improve renewLease() to call only a specific NameNode rather than make 
> fan-out calls
> -
>
> Key: HDFS-16283
> URL: https://issues.apache.org/jira/browse/HDFS-16283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently renewLease() against a router makes fan-out calls to all the
> NameNodes. Since renewLease() is called so frequently, if one of the NameNodes
> is slow, the router queues eventually become blocked by renewLease() calls,
> causing router degradation.
> We will make a change on the client side to keep track of the NameNode Id in
> addition to the current fileId, so routers understand which NameNode the
> client is renewing its lease against.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16289) Hadoop HA checkpointer issue

2021-10-28 Thread Boris Bondarenko (Jira)
Boris Bondarenko created HDFS-16289:
---

 Summary: Hadoop HA checkpointer issue 
 Key: HDFS-16289
 URL: https://issues.apache.org/jira/browse/HDFS-16289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfs
Affects Versions: 3.2.2
Reporter: Boris Bondarenko


In an HA setup, the active NameNode will always reject the fsimage sync from
one of the two standby NameNodes. This may be an edge case; in our environment
it primarily affects the standby cluster. What we experienced was a memory
problem on the standby NameNodes when a standby node was unable to complete a
sync cycle for a long time.

It is my understanding that the loop only breaks out when the doCheckpoint
call succeeds; otherwise it throws an exception and continues.

I can provide more details on my findings, with code references, if necessary.
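
A hedged paraphrase of the loop behavior being described (simplified
pseudocode in Java, not the actual StandbyCheckpointer source):

```java
// Simplified sketch of the reported behavior: the checkpoint attempt is
// retried until doCheckpoint() succeeds. If the active NameNode keeps
// rejecting the fsimage sync, the standby never completes a cycle and
// keeps accumulating state, matching the memory problem described above.
while (shouldRun) {
  try {
    doCheckpoint();  // only a successful call completes the cycle
    break;
  } catch (IOException e) {
    LOG.warn("Checkpoint failed, will retry", e);
    // loop continues; a persistently rejected sync means no progress
  }
}
```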



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16287) Support to make dfs.namenode.avoid.read.slow.datanode and dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16287?focusedWorklogId=671520&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671520
 ]

ASF GitHub Bot logged work on HDFS-16287:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 16:05
Start Date: 28/Oct/21 16:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3596:
URL: https://github.com/apache/hadoop/pull/3596#issuecomment-953988764


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   3m 21s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3596/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  24m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 311m 20s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3596/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 416m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Write to static field 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.excludeSlowNodesEnabled
 from instance method 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.setExcludeSlowNodesForWriteEnabled(boolean)
  At DatanodeManager.java:from instance method 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.setExcludeSlowNodesForWriteEnabled(boolean)
  At DatanodeManager.java:[line 529] |
   |  |  Write to static field 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.excludeSlowNodesEnabled
 from instance method new 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager(BlockManager, 
Namesystem, Configuration)  At DatanodeManager.java:from instance method new 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager(BlockManager, 
Namesystem, Configuration)  At DatanodeManager.java:[line 263] |
   | Failed junit tests | hadoop.hdfs.tools
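
For context, the flagged SpotBugs pattern has this general shape (illustrative
names only, not the actual DatanodeManager code):

    // SpotBugs' "write to static field from instance method" warning fires
    // on code like this: instance methods mutating static state can race
    // when several instances exist; an instance field is the usual fix.
    class DatanodeManagerLike {
      private static boolean excludeSlowNodesEnabled;

      void setExcludeSlowNodesForWriteEnabled(boolean enable) {
        excludeSlowNodesEnabled = enable;  // the flagged write
      }
    }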

[jira] [Work logged] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=671518&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671518
 ]

ASF GitHub Bot logged work on HDFS-16269:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 16:03
Start Date: 28/Oct/21 16:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#issuecomment-953986655


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 320m 19s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 425m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3544/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3544 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 5bc5577d7cca 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75c4e6dd4eb801d5918361c95320fd66d632213b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3544/3/testReport/ |
   | Max. process+thread count | 1877 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3544/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HDFS-16266) Add remote port information to HDFS audit log

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16266?focusedWorklogId=671420&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671420
 ]

ASF GitHub Bot logged work on HDFS-16266:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 13:09
Start Date: 28/Oct/21 13:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3538:
URL: https://github.com/apache/hadoop/pull/3538#issuecomment-953828582


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  root: The patch generated 
0 new + 606 unchanged - 2 fixed = 606 total (was 608)  |
   | +1 :green_heart: |  mvnsite  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 58s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 388m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 612m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3538/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3538 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 63dfc831ddaf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cc82c7c89fa3be59996b5c809dbcaf136f3ccbd3 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3538/10/te

[jira] [Work logged] (HDFS-16285) Make HDFS ownership tools cross platform

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16285?focusedWorklogId=671380&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671380
 ]

ASF GitHub Bot logged work on HDFS-16285:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 11:29
Start Date: 28/Oct/21 11:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3588:
URL: https://github.com/apache/hadoop/pull/3588#issuecomment-953756696


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  52m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 100m 11s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 209m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3588 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 3c3c4796a0ce 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d62f528fdce266f5aa40a46b5d622f2a9a9a4fdc |
   | Default Java | Red Hat, Inc.-1.8.0_312-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/3/testReport/ |
   | Max. process+thread count | 544 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/3/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671380)
Time Spent: 0.5h  (was: 20m)

> Make HDFS ownership tools cross platform
> 
>
> Key: HDFS-16285
> URL: https://issues.apache.org/jira/browse/HDFS-16285
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_chown*, *hdfs_chmod* and *hdfs_chgrp* use getopt 
> for parsing the command line arguments. getopt is available only on Linux 
> and thus isn't cross platform. We need to replace getopt with 
> 

[jira] [Commented] (HDFS-16259) Catch and re-throw sub-classes of AccessControlException thrown by any permission provider plugins (eg Ranger)

2021-10-28 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17435315#comment-17435315
 ] 

Stephen O'Donnell commented on HDFS-16259:
--

[~ayushtkn] Thanks for the discussion on this. In the next day or two I will 
create a PR that catches the enforcer's ACE subclass exceptions and re-throws 
a plain ACE, which will solve the immediate problem. Then we can consider 
making incompatible changes on trunk later. 

> Catch and re-throw sub-classes of AccessControlException thrown by any 
> permission provider plugins (eg Ranger)
> --
>
> Key: HDFS-16259
> URL: https://issues.apache.org/jira/browse/HDFS-16259
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> When a permission provider plugin is enabled (e.g. Ranger) there are some 
> scenarios where it can throw a sub-class of AccessControlException (e.g. 
> RangerAccessControlException). If this exception is allowed to propagate up 
> the stack, it can cause problems in the HDFS client when it unwraps the 
> remote exception containing the AccessControlException sub-class.
> Ideally, we should make AccessControlException final so it cannot be 
> sub-classed, but that would be a breaking change at this point. Therefore I 
> believe the safest thing to do is to catch any AccessControlException that 
> comes out of the permission enforcer plugin and re-throw an 
> AccessControlException instead.
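
A minimal sketch of that catch-and-re-throw (the enforcer interface and call
site are stand-ins, not the final patch):

    import org.apache.hadoop.security.AccessControlException;

    interface EnforcerLike {   // stand-in for the real enforcer interface
      void checkPermission() throws AccessControlException;
    }

    class PermissionCheckSketch {
      // Any enforcer subclass (e.g. Ranger's) is re-thrown as the base
      // AccessControlException so the client never has to deserialize a
      // class that may be missing from its classpath.
      static void checkWithEnforcer(EnforcerLike enforcer)
          throws AccessControlException {
        try {
          enforcer.checkPermission();  // may throw RangerAccessControlException
        } catch (AccessControlException ace) {
          throw new AccessControlException(ace.getMessage());
        }
      }
    }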



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16287) Support to make dfs.namenode.avoid.read.slow.datanode and dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable

2021-10-28 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17435271#comment-17435271
 ] 

Haiyang Hu commented on HDFS-16287:
---

[~tomscut] Thank you for the reply! The PR has been submitted.

> Support to make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
> ---
>
> Key: HDFS-16287
> URL: https://issues.apache.org/jira/browse/HDFS-16287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. Make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable, 
> allowing rapid rollback in case the features from 
> [HDFS-16076|https://issues.apache.org/jira/browse/HDFS-16076] and 
> [HDFS-15879|https://issues.apache.org/jira/browse/HDFS-15879] cause unexpected 
> problems in a production environment.
> 2. Consider the logic in DatanodeManager that filters out slow nodes when 
> choosing targets for blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16287) Support to make dfs.namenode.avoid.read.slow.datanode and dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16287?focusedWorklogId=671305&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671305
 ]

ASF GitHub Bot logged work on HDFS-16287:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 09:07
Start Date: 28/Oct/21 09:07
Worklog Time Spent: 10m 
  Work Description: haiyang1987 opened a new pull request #3596:
URL: https://github.com/apache/hadoop/pull/3596


   ### Description of PR
   
   Support making dfs.namenode.avoid.read.slow.datanode and 
dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable.
   Details: HDFS-16287
   
   ### For code changes:
   
   - [ ] Make dfs.namenode.avoid.read.slow.datanode and 
dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable, 
allowing rapid rollback in case the features from HDFS-16076 and HDFS-15879 
cause unexpected problems in a production environment (see the sketch below)
   - [ ] Consider the logic in DatanodeManager that filters out slow nodes 
when choosing targets for blocks
   - [ ] Control whether DatanodeManager#startSlowPeerCollector is launched 
via the parameter 'dfs.datanode.peer.stats.enabled'
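
For illustration, a reconfigurable key in the NameNode is usually wired up
roughly like this (a sketch assuming the patch follows the existing
reconfigurePropertyImpl convention; the delegation target is the setter named
in the SpotBugs output earlier in this digest, and the default handling is an
assumption):

    @Override
    protected String reconfigurePropertyImpl(String property, String newValue)
        throws ReconfigurationException {
      if ("dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled"
          .equals(property)) {
        // Assume the feature defaults to off when the property is unset.
        boolean enable = newValue != null && Boolean.parseBoolean(newValue);
        // Delegate to DatanodeManager so block placement sees the change.
        namesystem.getBlockManager().getDatanodeManager()
            .setExcludeSlowNodesForWriteEnabled(enable);
        return Boolean.toString(enable);
      }
      throw new ReconfigurationException(property, newValue,
          getConf().get(property));
    }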
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671305)
Remaining Estimate: 0h
Time Spent: 10m

> Support to make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
> ---
>
> Key: HDFS-16287
> URL: https://issues.apache.org/jira/browse/HDFS-16287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. Make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable, 
> allowing rapid rollback in case the features from 
> [HDFS-16076|https://issues.apache.org/jira/browse/HDFS-16076] and 
> [HDFS-15879|https://issues.apache.org/jira/browse/HDFS-15879] cause unexpected 
> problems in a production environment.
> 2. Consider the logic in DatanodeManager that filters out slow nodes when 
> choosing targets for blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16287) Support to make dfs.namenode.avoid.read.slow.datanode and dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16287:
--
Labels: pull-request-available  (was: )

> Support to make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable
> ---
>
> Key: HDFS-16287
> URL: https://issues.apache.org/jira/browse/HDFS-16287
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 1. Make dfs.namenode.avoid.read.slow.datanode and 
> dfs.namenode.block-placement-policy.exclude-slow-nodes.enabled reconfigurable, 
> allowing rapid rollback in case the features from 
> [HDFS-16076|https://issues.apache.org/jira/browse/HDFS-16076] and 
> [HDFS-15879|https://issues.apache.org/jira/browse/HDFS-15879] cause unexpected 
> problems in a production environment.
> 2. Consider the logic in DatanodeManager that filters out slow nodes when 
> choosing targets for blocks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16269) [Fix] Improve NNThroughputBenchmark#blockReport operation

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16269?focusedWorklogId=671282&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671282
 ]

ASF GitHub Bot logged work on HDFS-16269:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 08:08
Start Date: 28/Oct/21 08:08
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #3544:
URL: https://github.com/apache/hadoop/pull/3544#discussion_r738126745



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java
##
@@ -166,4 +166,31 @@ public void testNNThroughputForAppendOp() throws Exception {
       }
     }
   }
+
+  /**
+   * This test runs {@link NNThroughputBenchmark} against a mini DFS cluster
+   * for block report operation.
+   */
+  @Test(timeout = 120000)
+  public void testNNThroughputForBlockReportOp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+    conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
+      cluster.waitActive();
+
+      final Configuration benchConf = new HdfsConfiguration();
+      benchConf.setInt(DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY, 16);
+      benchConf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+      NNThroughputBenchmark.runBenchmark(benchConf,
+          new String[]{"-fs", cluster.getURI().toString(), "-op",
+              "blockReport", "-datanodes", "3", "-reports", "2"});
+    } finally {
+      if (cluster != null) {
+        cluster.shutdown();
+      }
+    }

Review comment:
   Thanks @aajisaka  for the comment and review.
   I will update it later.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671282)
Time Spent: 2h  (was: 1h 50m)

> [Fix] Improve NNThroughputBenchmark#blockReport operation
> -
>
> Key: HDFS-16269
> URL: https://issues.apache.org/jira/browse/HDFS-16269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> When using NNThroughputBenchmark to verify the blockReport operation, an 
> exception is thrown.
> Command used:
> ./bin/hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -fs 
>  -op blockReport -datanodes 3 -reports 1
> The exception information:
> 21/10/12 14:35:18 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> blockReport
> 21/10/12 14:35:19 INFO namenode.NNThroughputBenchmark: Creating 10 files with 
> 10 blocks each.
> 21/10/12 14:35:19 ERROR namenode.NNThroughputBenchmark: 
> java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1430)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1550)
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 50009
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.addBlocks(NNThroughputBenchmark.java:1161)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$BlockReportStats.generateInputs(NNThroughputBenchmark.java:1143)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:257)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1528)
> at org.apache.hadoop.util.ToolRun

[jira] [Work logged] (HDFS-16285) Make HDFS ownership tools cross platform

2021-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16285?focusedWorklogId=671265&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-671265
 ]

ASF GitHub Bot logged work on HDFS-16285:
-

Author: ASF GitHub Bot
Created on: 28/Oct/21 07:59
Start Date: 28/Oct/21 07:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3588:
URL: https://github.com/apache/hadoop/pull/3588#issuecomment-953595241


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  60m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 107m 32s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 196m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3588 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux b81ba6360eda 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d62f528fdce266f5aa40a46b5d622f2a9a9a4fdc |
   | Default Java | Red Hat, Inc.-1.8.0_302-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/3/testReport/ |
   | Max. process+thread count | 570 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3588/3/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 671265)
Time Spent: 20m  (was: 10m)

> Make HDFS ownership tools cross platform
> 
>
> Key: HDFS-16285
> URL: https://issues.apache.org/jira/browse/HDFS-16285
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The source files for *hdfs_chown*, *hdfs_chmod* and *hdfs_chgrp* use getopt 
> for parsing the command line arguments. getopt is available only on Linux 
> and thus isn't cross platform. We need to replace getopt with 
> boo