[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745413#comment-17745413
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

xinglin commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1270233735


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -194,6 +202,18 @@ public class RouterClientProtocol implements 
ClientProtocol {
 this.routerCacheAdmin = new RouterCacheAdmin(rpcServer);
 this.securityManager = rpcServer.getRouterSecurityManager();
 this.rbfRename = new RouterFederationRename(rpcServer, conf);
+
+this.crsNameservicesCache = CacheBuilder.newBuilder()

Review Comment:
   I misread the implementation: I thought the key was a nameservice and the 
value was whether that nameservice has an observer node or not. 
   
   There is some inconsistency in what we are trying to achieve here. On one 
hand, we rely on static config to determine whether a nameservice has an 
observer node or not. On the other hand, we use a dynamic cache with a size of 
1 here, assuming the set of namespaces can actually change. It would be better 
to be consistent, either based on fixed static info or fully dynamic. 
   
   If we rely on static config to determine observer nodes, we can assume the 
set of nameservices is static and fixed as well. We could just do the check 
once and then assume the nameservices won't change. Or is this proposal too 
naive? 
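   For readers following the thread, a minimal sketch of the static alternative 
suggested above: compute the eligible set once from config and reuse it. The 
class and field names are illustrative only, not the PR's code; the eligibility 
rule mirrors isNamespaceObserverReadEligible from the diff.

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Sketch only: precompute the observer-read-eligible nameservices once,
// assuming both the nameservice list and the overrides are static config.
public class ObserverEligibleNameservices {
  private final Set<String> eligible;

  public ObserverEligibleNameservices(Set<String> allNameservices,
      boolean observerReadEnabledDefault,
      Set<String> observerReadEnabledOverrides) {
    Set<String> result = new HashSet<>();
    for (String nsId : allNameservices) {
      // Same rule as isNamespaceObserverReadEligible: the override set flips the default.
      if (observerReadEnabledDefault != observerReadEnabledOverrides.contains(nsId)) {
        result.add(nsId);
      }
    }
    this.eligible = Collections.unmodifiableSet(result);
  }

  public boolean isEligible(String nsId) {
    return eligible.contains(nsId);
  }
}
{code}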





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17060) BlockPlacementPolicyDefault#chooseReplicaToDelete should consider datanode load

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745359#comment-17745359
 ] 

ASF GitHub Bot commented on HDFS-17060:
---

hfutatzhanghb commented on PR #5778:
URL: https://github.com/apache/hadoop/pull/5778#issuecomment-1644940335

   @ayushtkn Sir, sorry for disturbing you and involving you here. Please also 
take a look at this PR when you have free time. Thanks a lot.




> BlockPlacementPolicyDefault#chooseReplicaToDelete should consider datanode 
> load
> ---
>
> Key: HDFS-17060
> URL: https://issues.apache.org/jira/browse/HDFS-17060
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Minor
>  Labels: pull-request-available
>
> When choosing extra replicas for deletion, we should consider datanode load 
> as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745355#comment-17745355
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

zhangshuyan0 commented on PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#issuecomment-1644927014

   @Hexiaoqiao Thanks for your review. I'll add more page info and update this 
PR later.




> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful 
> information when decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745354#comment-17745354
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

zhangshuyan0 commented on code in PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#discussion_r1270185680


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -6664,6 +6664,8 @@ public String getDecomNodes() {
   node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
   .put("underReplicateInOpenFiles",
   node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
+  .put("decommissionDuration",
+  monotonicNow() - node.getLeavingServiceStatus().getStartTime())

Review Comment:
   They're not related; all of them pass in my local environment.





> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful 
> information when decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16922) The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745351#comment-17745351
 ] 

ASF GitHub Bot commented on HDFS-16922:
---

hubble-insight commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1644922899

   "This  fix seems to only alleviate the probability of this issue occurring. 
If the incremental report of this chunk has already been reported to nn and is 
not cached in pendingIBRs, then subsequent smaller GS reports will be reported 
to nn again."
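
   A rough illustration of the concern being discussed, not the actual 
IncrementalBlockReportManager code: if pending incremental reports are keyed 
per block and an entry is only replaced when the incoming report's generation 
stamp is at least as new, a stale (smaller-GS) report cannot overwrite fresher 
state. All names below are hypothetical.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not the IncrementalBlockReportManager implementation:
// keep at most one pending entry per block, preferring the newest generation stamp.
class PendingIbrSketch {
  static final class PendingReport {
    final long blockId;
    final long generationStamp;

    PendingReport(long blockId, long generationStamp) {
      this.blockId = blockId;
      this.generationStamp = generationStamp;
    }
  }

  private final Map<Long, PendingReport> pending = new HashMap<>();

  synchronized void add(PendingReport incoming) {
    PendingReport existing = pending.get(incoming.blockId);
    // Only replace the cached entry if the incoming report is at least as new,
    // so a stale (smaller-GS) report cannot clobber fresher state.
    if (existing == null || existing.generationStamp <= incoming.generationStamp) {
      pending.put(incoming.blockId, incoming);
    }
  }
}
{code}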




> The logic of IncrementalBlockReportManager#addRDBI method may cause missing 
> blocks when cluster is busy.
> 
>
> Key: HDFS-16922
> URL: https://issues.apache.org/jira/browse/HDFS-16922
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> The current logic of the IncrementalBlockReportManager#addRDBI method could 
> lead to missing blocks when datanodes in the pipeline are I/O busy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745350#comment-17745350
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

Hexiaoqiao commented on code in PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#discussion_r1270178517


##
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html:
##
@@ -414,6 +415,7 @@
 {underReplicatedBlocks}
 {decommissionOnlyReplicas}
 {underReplicateInOpenFiles}
+{decommissionDuration}

Review Comment:
   Would giving the time unit here be better for the end user? For example, 
`{decommissionDuration} ms`.





> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful 
> information when decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745344#comment-17745344
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

hfutatzhanghb commented on code in PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#discussion_r1270166295


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -6664,6 +6664,8 @@ public String getDecomNodes() {
   node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
   .put("underReplicateInOpenFiles",
   node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
+  .put("decommissionDuration",
+  monotonicNow() - node.getLeavingServiceStatus().getStartTime())

Review Comment:
   Please also check whether the failed unit tests are related to this PR, 
thanks a lot.





> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful 
> information when decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745343#comment-17745343
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

hfutatzhanghb commented on code in PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#discussion_r1270163504


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -6664,6 +6664,8 @@ public String getDecomNodes() {
   node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
   .put("underReplicateInOpenFiles",
   node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
+  .put("decommissionDuration",
+  monotonicNow() - node.getLeavingServiceStatus().getStartTime())

Review Comment:
   Thanks a lot, @zhangshuyan0.





> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful 
> information when decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745341#comment-17745341
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

zhangshuyan0 commented on code in PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#discussion_r1270162405


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -6664,6 +6664,8 @@ public String getDecomNodes() {
   node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
   .put("underReplicateInOpenFiles",
   node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
+  .put("decommissionDuration",
+  monotonicNow() - node.getLeavingServiceStatus().getStartTime())

Review Comment:
   The `node` object here is from `decomNodeList`, which means it is in the 
decommissioning state.
   
https://github.com/apache/hadoop/blob/c35f31640ec4d74e85379d838e05c9c923d0cc77/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L6654-L6656
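
   For context, the duration in the diff is computed from a monotonic clock 
(monotonicNow()) rather than wall-clock time, so it is unaffected by system 
clock adjustments. A tiny standalone sketch of the same pattern, not the 
FSNamesystem code:

{code:java}
// Minimal sketch of the monotonic-duration pattern used in the diff above.
class DecommissionTimer {
  // Recorded once, when decommissioning starts.
  private final long startMonotonicNanos = System.nanoTime();

  // Elapsed decommission time in milliseconds, unaffected by wall-clock changes.
  long durationMs() {
    return (System.nanoTime() - startMonotonicNanos) / 1_000_000L;
  }
}
{code}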





> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful 
> information when decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745339#comment-17745339
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

simbadzina commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1270153376


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -194,6 +202,18 @@ public class RouterClientProtocol implements 
ClientProtocol {
 this.routerCacheAdmin = new RouterCacheAdmin(rpcServer);
 this.securityManager = rpcServer.getRouterSecurityManager();
 this.rbfRename = new RouterFederationRename(rpcServer, conf);
+
+this.crsNameservicesCache = CacheBuilder.newBuilder()

Review Comment:
   This is to avoid recomputing the set of eligible nameservices again and 
again. It may be a premature optimization, though.
   I'm also not sure how the equality check in the cache compares to the 
previous code, performance-wise.
   @mkuchenbecker thoughts?
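
   A rough sketch of the size-1 Guava cache idea under discussion, with 
placeholder names and a placeholder eligibility predicate (not the PR's actual 
fields): the key is the full set of nameservice IDs and the value is the 
filtered, observer-read-eligible subset, so the filtering only reruns when the 
key set changes.

{code:java}
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Sketch only: cache the filtered set keyed by the full set of nameservice IDs.
// A maximum size of 1 means the value is recomputed only when the key set changes.
class EligibleNameservicesCache {
  private final LoadingCache<Set<String>, Set<String>> cache;

  EligibleNameservicesCache(Predicate<String> isObserverReadEligible) {
    this.cache = CacheBuilder.newBuilder()
        .maximumSize(1)
        .build(new CacheLoader<Set<String>, Set<String>>() {
          @Override
          public Set<String> load(Set<String> allNameservices) {
            // Filter down to the nameservices that have observer reads enabled.
            return allNameservices.stream()
                .filter(isObserverReadEligible)
                .collect(Collectors.toSet());
          }
        });
  }

  Set<String> eligible(Set<String> allNameservices) {
    // Hits when an equal set (by equals/hashCode) is passed again.
    return cache.getUnchecked(allNameservices);
  }
}
{code}

   A lookup costs roughly one hashCode/equals pass over the key set, which is 
the set-equality-check-vs-refilter trade-off mentioned above.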





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745331#comment-17745331
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

xinglin commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1270146327


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -194,6 +202,18 @@ public class RouterClientProtocol implements 
ClientProtocol {
 this.routerCacheAdmin = new RouterCacheAdmin(rpcServer);
 this.securityManager = rpcServer.getRouterSecurityManager();
 this.rbfRename = new RouterFederationRename(rpcServer, conf);
+
+this.crsNameservicesCache = CacheBuilder.newBuilder()

Review Comment:
   Why did we decide to introduce a cache here? 
`isNamespaceObserverReadEligible()` does not even need to contact the NN, 
right? It just checks local configs, which should be a very cheap operation. 





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745330#comment-17745330
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

xinglin commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1270146327


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -194,6 +202,18 @@ public class RouterClientProtocol implements 
ClientProtocol {
 this.routerCacheAdmin = new RouterCacheAdmin(rpcServer);
 this.securityManager = rpcServer.getRouterSecurityManager();
 this.rbfRename = new RouterFederationRename(rpcServer, conf);
+
+this.crsNameservicesCache = CacheBuilder.newBuilder()

Review Comment:
   Why did we decide to introduce a cache here? 
isNamespaceObserverReadEligible() does not even need to contact the NN, right? 
It just checks local configs. 





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745324#comment-17745324
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

hadoop-yetus commented on PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#issuecomment-1644855434

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 0 new + 0 unchanged - 
2 fixed = 0 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 52s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5860 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5328f9c1f91c 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 06fc39ea3b4da81f942192812c3980866601e4a1 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/11/testReport/ |
   | Max. process+thread count | 2785 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745315#comment-17745315
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

hadoop-yetus commented on PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#issuecomment-1644796572

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  52m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 
unchanged - 2 fixed = 1 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  21m 58s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 167m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5860 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 546a9d44f59f 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02ba0aa3a823da4c59cb3759a47cd7aa9b391f25 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 

[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745303#comment-17745303
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

hadoop-yetus commented on PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#issuecomment-1644769849

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   9m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/9/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 
unchanged - 2 fixed = 1 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 19s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 122m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5860 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dcd82db52392 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02ba0aa3a823da4c59cb3759a47cd7aa9b391f25 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/9/testReport/ |
   | Max. process+thread count | 3353 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   

[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745301#comment-17745301
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

hadoop-yetus commented on PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#issuecomment-1644767505

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/10/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 
unchanged - 2 fixed = 1 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 22s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 118m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5860 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 46e2641d6dc8 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 02ba0aa3a823da4c59cb3759a47cd7aa9b391f25 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/10/testReport/ |
   | Max. process+thread count | 3246 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |

[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745291#comment-17745291
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

hadoop-yetus commented on PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#issuecomment-1644701615

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  
hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 0 new + 0 unchanged - 
2 fixed = 0 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 47s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 168m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5860 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7f6b94bb4083 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2f748b70a146c6e5b81a61c0b66b671c8c4ba593 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/7/testReport/ |
   | Max. process+thread count | 2512 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5860/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745273#comment-17745273
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

simbadzina commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269958411


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1928,9 +1928,13 @@ public BatchedEntries listOpenFiles(long 
prevId,
   @Override
   public void msync() throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
-Set nss = namenodeResolver.getNamespaces();
+Set allNamespaces = 
namenodeResolver.getNamespaces();
 RemoteMethod method = new RemoteMethod("msync");
-rpcClient.invokeConcurrent(nss, method);
+Set namespacesEligibleForObserverReads = 
allNamespaces

Review Comment:
   Done.





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745274#comment-17745274
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

simbadzina commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269958645


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1928,9 +1929,17 @@ public BatchedEntries listOpenFiles(long 
prevId,
   @Override
   public void msync() throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
-Set nss = namenodeResolver.getNamespaces();
+Set allNamespaces = 
namenodeResolver.getNamespaces();
 RemoteMethod method = new RemoteMethod("msync");
-rpcClient.invokeConcurrent(nss, method);
+Set namespacesEligibleForObserverReads = new 
HashSet<>();

Review Comment:
   Added a cache.





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745263#comment-17745263
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

simbadzina commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269919579


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1928,9 +1929,17 @@ public BatchedEntries listOpenFiles(long 
prevId,
   @Override
   public void msync() throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
-Set nss = namenodeResolver.getNamespaces();
+Set allNamespaces = 
namenodeResolver.getNamespaces();
 RemoteMethod method = new RemoteMethod("msync");
-rpcClient.invokeConcurrent(nss, method);
+Set namespacesEligibleForObserverReads = new 
HashSet<>();

Review Comment:
   allNamespaces does not change often, only when an entire nameservice becomes 
inactive. The set of nonEligibleNamenodes is only refreshed on startup. A cache 
may help. I'm wondering whether checking the cache is just as expensive as 
doing the calculation each time: a set equality check vs. filtering the set 
each time.





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745252#comment-17745252
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

xinglin commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269897343


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1928,9 +1928,13 @@ public BatchedEntries listOpenFiles(long 
prevId,
   @Override
   public void msync() throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
-Set nss = namenodeResolver.getNamespaces();
+Set allNamespaces = 
namenodeResolver.getNamespaces();
 RemoteMethod method = new RemoteMethod("msync");
-rpcClient.invokeConcurrent(nss, method);
+Set namespacesEligibleForObserverReads = 
allNamespaces

Review Comment:
   Can we add a comment here, specifically mentioning why this change is 
introduced?
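
   As one possible answer to the request above, the method body with an 
explanatory comment might read roughly as follows. This is a sketch based on 
the quoted diff, not the exact PR code; the stream filter and the use of 
`getNameserviceId()` are assumptions, and imports (e.g. 
`java.util.stream.Collectors`) are omitted.

{code:java}
@Override
public void msync() throws IOException {
  rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
  Set<FederationNamespaceInfo> allNamespaces = namenodeResolver.getNamespaces();
  RemoteMethod method = new RemoteMethod("msync");
  // Only proxy msync to nameservices that have observer reads enabled: for the
  // remaining nameservices, reads already go to the active NameNode, so an
  // msync there is unnecessary fan-out.
  Set<FederationNamespaceInfo> namespacesEligibleForObserverReads = allNamespaces
      .stream()
      .filter(ns -> rpcClient.isNamespaceObserverReadEligible(ns.getNameserviceId()))
      .collect(Collectors.toSet());
  rpcClient.invokeConcurrent(namespacesEligibleForObserverReads, method);
}
{code}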





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16991) Fix testMkdirsRaceWithObserverRead

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745249#comment-17745249
 ] 

ASF GitHub Bot commented on HDFS-16991:
---

hadoop-yetus commented on PR #5591:
URL: https://github.com/apache/hadoop/pull/5591#issuecomment-1644503356

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   0m  4s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-27}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/5591 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5591/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix testMkdirsRaceWithObserverRead
> --
>
> Key: HDFS-16991
> URL: https://issues.apache.org/jira/browse/HDFS-16991
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.4
>Reporter: fanluo
>Assignee: fanluo
>Priority: Minor
>  Labels: pull-request-available
>
> The test case testMkdirsRaceWithObserverRead in TestObserverNode sometimes 
> fails like this:
> {code:java}
> java.lang.AssertionError: Client #1 lastSeenStateId=-9223372036854775808 
> activStateId=5
> null    at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.assertTrue(Assert.java:42)
>     at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode.testMkdirsRaceWithObserverRead(TestObserverNode.java:607)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  {code}
> I think the Thread.sleep() should be moved into the sub-threads, like this:
> {code:java}
> public void run() {
>   try {
>     fs.mkdirs(DIR_PATH);
>     Thread.sleep(150); // wait until mkdir is logged
>     clientState.lastSeenStateId = HATestUtil.getLastSeenStateId(fs);
>     assertSentTo(fs, 0);
>     FileStatus stat = fs.getFileStatus(DIR_PATH);
>     assertSentTo(fs, 2);
>     assertTrue("Should be a directory", stat.isDirectory());
>   } catch (FileNotFoundException ioe) {
>     clientState.fnfe = ioe;
>   } catch (Exception e) {
>     fail("Unexpected exception: " + e);
>   }
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745246#comment-17745246
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

simbadzina commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269883206


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -1783,16 +1783,26 @@ && isNamespaceStateIdFresh(nsId)
   }
 
   private boolean isObserverReadEligible(String nsId, Method method) {
-    boolean isReadEnabledForNamespace =
-        observerReadEnabledDefault != observerReadEnabledOverrides.contains(nsId);
-    return isReadEnabledForNamespace && isReadCall(method);
+    return isReadCall(method) && isNamespaceObserverReadEligible(nsId);
+  }
+
+  /**
+   * Check if a namespace is eligible for observer reads.
+   * @param nsId namespaceID
+   * @return whether the 'namespace' has observer reads enabled.
+   */
+  boolean isNamespaceObserverReadEligible(String nsId) {
+    return observerReadEnabledDefault != observerReadEnabledOverrides.contains(nsId);
   }
 
   /**
    * Check if a method is read-only.
    * @return whether the 'method' is a read-only operation.
    */
   private static boolean isReadCall(Method method) {
+    if (method == null) {

Review Comment:
   This is to fix a unit test here: 
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouter.java#L200
   
   The ideal fix would be to mock Method.class there so that 
`RemoteMethod.getMethod()` does not return null, and to mock 
`Method.isAnnotationPresent()`. However, that requires updating the version of 
Mockito in order to mock a final class.
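   (For illustration only, a rough sketch of that ideal fix. It assumes the 
mockito-inline MockMaker is on the test classpath so the final 
java.lang.reflect.Method class can be mocked, and "ReadOnly" stands in for whatever 
annotation isReadCall() actually checks; neither detail is taken from this PR.)

{code:java}
// Sketch only; needs the inline MockMaker because java.lang.reflect.Method is final.
Method readMethod = org.mockito.Mockito.mock(Method.class);
// Pretend the mocked method carries the read-only annotation the router checks for.
org.mockito.Mockito.when(readMethod.isAnnotationPresent(ReadOnly.class))
    .thenReturn(true);
// Then stub RemoteMethod.getMethod() to return readMethod instead of null.
{code}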





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16991) Fix testMkdirsRaceWithObserverRead

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745244#comment-17745244
 ] 

ASF GitHub Bot commented on HDFS-16991:
---

hadoop-yetus commented on PR #5591:
URL: https://github.com/apache/hadoop/pull/5591#issuecomment-1644478437

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   3m 58s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-29615}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/5591 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5591/5/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix testMkdirsRaceWithObserverRead
> --
>
> Key: HDFS-16991
> URL: https://issues.apache.org/jira/browse/HDFS-16991
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.4
>Reporter: fanluo
>Assignee: fanluo
>Priority: Minor
>  Labels: pull-request-available
>
> The test case testMkdirsRaceWithObserverRead, which is in TestObserverNode, 
> sometimes fails like this:
> {code:java}
> java.lang.AssertionError: Client #1 lastSeenStateId=-9223372036854775808 
> activStateId=5
> null    at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.assertTrue(Assert.java:42)
>     at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestObserverNode.testMkdirsRaceWithObserverRead(TestObserverNode.java:607)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  {code}
> I think the Thread.sleep() should be moved into the sub-threads, like this:
> {code:java}
> public void run() {
>   try {
>     fs.mkdirs(DIR_PATH);
>     Thread.sleep(150); // wait until mkdir is logged
>     clientState.lastSeenStateId = HATestUtil.getLastSeenStateId(fs);
>     assertSentTo(fs, 0);
>     FileStatus stat = fs.getFileStatus(DIR_PATH);
>     assertSentTo(fs, 2);
>     assertTrue("Should be a directory", stat.isDirectory());
>   } catch (FileNotFoundException ioe) {
>     clientState.fnfe = ioe;
>   } catch (Exception e) {
>     fail("Unexpected exception: " + e);
>   }
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17074) Remove incorrect comment in TestRedudantBlocks#setup

2023-07-20 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-17074.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Remove incorrect comment in TestRedudantBlocks#setup
> 
>
> Key: HDFS-17074
> URL: https://issues.apache.org/jira/browse/HDFS-17074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In TestRedudantBlocks#setup(), the comment below is incorrect.
> {code:java}
> // disable block recovery 
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);{code}
> We should delete this comment.
> The correct usage is in TestAddOverReplicatedStripedBlocks#setup()
> {code:java}
> // disable block recovery
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY, 0);
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17074) Remove incorrect comment in TestRedudantBlocks#setup

2023-07-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745241#comment-17745241
 ] 

Ayush Saxena commented on HDFS-17074:
-

Committed to trunk.

Thanx [~zhanghaobo] for the contribution & [~zhangshuyan] for the review!!!

> Remove incorrect comment in TestRedudantBlocks#setup
> 
>
> Key: HDFS-17074
> URL: https://issues.apache.org/jira/browse/HDFS-17074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Trivial
>  Labels: pull-request-available
>
> In TestRedudantBlocks#setup(), the comment below is incorrect.
> {code:java}
> // disable block recovery 
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);{code}
> We should delete this comment.
> The correct usage is in TestAddOverReplicatedStripedBlocks#setup()
> {code:java}
> // disable block recovery
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY, 0);
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17074) Remove incorrect comment in TestRedudantBlocks#setup

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745240#comment-17745240
 ] 

ASF GitHub Bot commented on HDFS-17074:
---

ayushtkn merged PR #5822:
URL: https://github.com/apache/hadoop/pull/5822




> Remove incorrect comment in TestRedudantBlocks#setup
> 
>
> Key: HDFS-17074
> URL: https://issues.apache.org/jira/browse/HDFS-17074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Trivial
>  Labels: pull-request-available
>
> In TestRedudantBlocks#setup(), the comment below is incorrect.
> {code:java}
> // disable block recovery 
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);{code}
> We should delete this comment.
> The correct usage is in TestAddOverReplicatedStripedBlocks#setup()
> {code:java}
> // disable block recovery
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY, 0);
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17067) Use BlockingThreadPoolExecutorService for nnProbingThreadPool in ObserverReadProxy

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745214#comment-17745214
 ] 

ASF GitHub Bot commented on HDFS-17067:
---

xinglin commented on PR #5803:
URL: https://github.com/apache/hadoop/pull/5803#issuecomment-1644420686

   thanks @goiri for committing this PR to trunk.




> Use BlockingThreadPoolExecutorService for nnProbingThreadPool in 
> ObserverReadProxy
> --
>
> Key: HDFS-17067
> URL: https://issues.apache.org/jira/browse/HDFS-17067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In HDFS-17030, we introduced an ExecutorService, to submit 
> getHAServiceState() requests. We constructed the ExecutorService directly 
> from a basic ThreadPoolExecutor, without setting _allowCoreThreadTimeOut_ to 
> true. Then, the core thread will be kept up and running even when the main 
> thread exits. To fix it, one could set _allowCoreThreadTimeOut_ to true. 
> However, in this PR, we decide to directly use an existing executorService 
> implementation (_BlockingThreadPoolExecutorService_) in hadoop instead. It 
> takes care of setting _allowCoreThreadTimeOut_ and also allows setting the 
> prefix for thread names.
> {code:java}
>   private final ExecutorService nnProbingThreadPool =
>   new ThreadPoolExecutor(1, 4, 1L, TimeUnit.MINUTES,
>   new ArrayBlockingQueue(1024));
> {code}
> A second minor issue is we did not shutdown the executorService in close(). 
> It is a minor issue as close() will only be called when the garbage collector 
> starts to reclaim an ObserverReadProxyProvider object, not when there is no 
> reference to the ObserverReadProxyProvider object. The time between when an 
> ObserverReadProxyProvider becomes dereferenced and when the garbage collector 
> actually starts to reclaim that object is out of control/under-defined 
> (unless the program is shutdown with an explicit System.exit(1)).
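(Illustrative only: the first option mentioned in the description above, sketched 
with plain java.util.concurrent so idle core threads can time out. The 
BlockingThreadPoolExecutorService wiring actually adopted by the PR is not 
reproduced here.)

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Keep the original pool shape, but let the single core thread die after the
// keep-alive period so it no longer pins the JVM once the main thread exits.
ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 4, 1L, TimeUnit.MINUTES,
    new ArrayBlockingQueue<Runnable>(1024));
pool.allowCoreThreadTimeOut(true);
ExecutorService nnProbingThreadPool = pool;
{code}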



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16993) Datanode supports configure TopN DatanodeNetworkCounts

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745202#comment-17745202
 ] 

ASF GitHub Bot commented on HDFS-16993:
---

hadoop-yetus commented on PR #5597:
URL: https://github.com/apache/hadoop/pull/5597#issuecomment-1644388189

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/6/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/6/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 310 unchanged 
- 0 fixed = 313 total (was 310)  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 214m 13s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 362m 27s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5597 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux f23a176d2263 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 

[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745197#comment-17745197
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

hadoop-yetus commented on PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#issuecomment-1644370875

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 166m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5866/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 35s |  |  ASF License check generated no 
output?  |
   |  |   | 312m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
   |   | hadoop.hdfs.server.datanode.TestBlockRecovery2 |
   |   | hadoop.hdfs.server.datanode.TestBatchIbr |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeMetrics |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
   |   | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5866/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5866 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 626e86102f06 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool 

[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745189#comment-17745189
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

mkuchenbecker commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269784340


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -1783,16 +1783,26 @@ && isNamespaceStateIdFresh(nsId)
   }
 
   private boolean isObserverReadEligible(String nsId, Method method) {
-    boolean isReadEnabledForNamespace =
-        observerReadEnabledDefault != observerReadEnabledOverrides.contains(nsId);
-    return isReadEnabledForNamespace && isReadCall(method);
+    return isReadCall(method) && isNamespaceObserverReadEligible(nsId);
+  }
+
+  /**
+   * Check if a namespace is eligible for observer reads.
+   * @param nsId namespaceID
+   * @return whether the 'namespace' has observer reads enabled.
+   */
+  boolean isNamespaceObserverReadEligible(String nsId) {
+    return observerReadEnabledDefault != observerReadEnabledOverrides.contains(nsId);
   }
 
   /**
    * Check if a method is read-only.
    * @return whether the 'method' is a read-only operation.
    */
   private static boolean isReadCall(Method method) {
+    if (method == null) {

Review Comment:
   Is it necessary to support null input on a private method? 





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745190#comment-17745190
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

mkuchenbecker commented on code in PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#discussion_r1269785090


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##
@@ -1928,9 +1929,17 @@ public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
   @Override
   public void msync() throws IOException {
     rpcServer.checkOperation(NameNode.OperationCategory.READ, true);
-    Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
+    Set<FederationNamespaceInfo> allNamespaces = namenodeResolver.getNamespaces();
     RemoteMethod method = new RemoteMethod("msync");
-    rpcClient.invokeConcurrent(nss, method);
+    Set<FederationNamespaceInfo> namespacesEligibleForObserverReads = new HashSet<>();

Review Comment:
   How often does this change? Seems like the namenodeResolver could do this 
calculation once and cache. 
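   (A rough sketch of the kind of caching being suggested here, using Guava's 
CacheBuilder under a single key. The one-minute expiry, the key name, and the 
filter body are assumptions for illustration, not the PR's implementation.)

{code:java}
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;

// Compute the observer-read-eligible namespaces once and reuse the result,
// refreshing it after a short TTL instead of recomputing on every msync().
LoadingCache<String, Set<FederationNamespaceInfo>> eligibleNamespaces =
    CacheBuilder.newBuilder()
        .expireAfterWrite(1, TimeUnit.MINUTES)  // assumed TTL
        .build(new CacheLoader<String, Set<FederationNamespaceInfo>>() {
          @Override
          public Set<FederationNamespaceInfo> load(String ignoredKey) throws Exception {
            return namenodeResolver.getNamespaces().stream()
                .filter(ns -> rpcClient.isNamespaceObserverReadEligible(
                    ns.getNameserviceId()))
                .collect(Collectors.toSet());
          }
        });
{code}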





> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17067) Use BlockingThreadPoolExecutorService for nnProbingThreadPool in ObserverReadProxy

2023-07-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-17067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-17067.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Use BlockingThreadPoolExecutorService for nnProbingThreadPool in 
> ObserverReadProxy
> --
>
> Key: HDFS-17067
> URL: https://issues.apache.org/jira/browse/HDFS-17067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In HDFS-17030, we introduced an ExecutorService, to submit 
> getHAServiceState() requests. We constructed the ExecutorService directly 
> from a basic ThreadPoolExecutor, without setting _allowCoreThreadTimeOut_ to 
> true. Then, the core thread will be kept up and running even when the main 
> thread exits. To fix it, one could set _allowCoreThreadTimeOut_ to true. 
> However, in this PR, we decide to directly use an existing executorService 
> implementation (_BlockingThreadPoolExecutorService_) in hadoop instead. It 
> takes care of setting _allowCoreThreadTimeOut_ and also allows setting the 
> prefix for thread names.
> {code:java}
>   private final ExecutorService nnProbingThreadPool =
>   new ThreadPoolExecutor(1, 4, 1L, TimeUnit.MINUTES,
>   new ArrayBlockingQueue(1024));
> {code}
> A second minor issue is we did not shutdown the executorService in close(). 
> It is a minor issue as close() will only be called when the garbage collector 
> starts to reclaim an ObserverReadProxyProvider object, not when there is no 
> reference to the ObserverReadProxyProvider object. The time between when an 
> ObserverReadProxyProvider becomes dereferenced and when the garbage collector 
> actually starts to reclaim that object is out of control/under-defined 
> (unless the program is shutdown with an explicit System.exit(1)).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17067) Use BlockingThreadPoolExecutorService for nnProbingThreadPool in ObserverReadProxy

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745185#comment-17745185
 ] 

ASF GitHub Bot commented on HDFS-17067:
---

goiri merged PR #5803:
URL: https://github.com/apache/hadoop/pull/5803




> Use BlockingThreadPoolExecutorService for nnProbingThreadPool in 
> ObserverReadProxy
> --
>
> Key: HDFS-17067
> URL: https://issues.apache.org/jira/browse/HDFS-17067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-17030, we introduced an ExecutorService, to submit 
> getHAServiceState() requests. We constructed the ExecutorService directly 
> from a basic ThreadPoolExecutor, without setting _allowCoreThreadTimeOut_ to 
> true. Then, the core thread will be kept up and running even when the main 
> thread exits. To fix it, one could set _allowCoreThreadTimeOut_ to true. 
> However, in this PR, we decide to directly use an existing executorService 
> implementation (_BlockingThreadPoolExecutorService_) in hadoop instead. It 
> takes care of setting _allowCoreThreadTimeOut_ and also allows setting the 
> prefix for thread names.
> {code:java}
>   private final ExecutorService nnProbingThreadPool =
>   new ThreadPoolExecutor(1, 4, 1L, TimeUnit.MINUTES,
>   new ArrayBlockingQueue(1024));
> {code}
> A second minor issue is we did not shutdown the executorService in close(). 
> It is a minor issue as close() will only be called when the garbage collector 
> starts to reclaim an ObserverReadProxyProvider object, not when there is no 
> reference to the ObserverReadProxyProvider object. The time between when an 
> ObserverReadProxyProvider becomes dereferenced and when the garbage collector 
> actually starts to reclaim that object is out of control/under-defined 
> (unless the program is shutdown with an explicit System.exit(1)).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16993) Datanode supports configure TopN DatanodeNetworkCounts

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745177#comment-17745177
 ] 

ASF GitHub Bot commented on HDFS-16993:
---

hadoop-yetus commented on PR #5597:
URL: https://github.com/apache/hadoop/pull/5597#issuecomment-1644290834

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/7/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 36s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 310 unchanged 
- 0 fixed = 313 total (was 310)  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 193m 26s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 292m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5597/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5597 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 1f9b211b9b13 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   

[jira] [Updated] (HDFS-13916) Distcp SnapshotDiff to support WebHDFS

2023-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-13916:
--
Fix Version/s: 3.3.9

> Distcp SnapshotDiff to support WebHDFS
> --
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch, pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.006.patch, 
> HDFS-13916.007.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA: HDFS-13052 to provide the possibility to 
> make DistCp with SnapshotDiff work over WebHdfsFileSystem. However, the patch 
> does not modify the actual Java class that is used when launching the command 
> "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
> file system is DFS. 
> So I propose to change the class DistCpSync in order to take into 
> consideration what was committed by Lokesh Jain.
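(Purely illustrative: the shape of the check that needs relaxing, accepting 
WebHdfsFileSystem alongside DistributedFileSystem. This is a sketch under those 
assumptions, not the committed DistCpSync change.)

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

// Instead of requiring DistributedFileSystem, allow any filesystem that can
// serve snapshot diff reports, which includes WebHDFS.
static void checkSnapshotDiffSupport(FileSystem fs) throws IOException {
  if (!(fs instanceof DistributedFileSystem || fs instanceof WebHdfsFileSystem)) {
    throw new IOException("Snapshot-diff-based sync requires HDFS or WebHDFS, got "
        + fs.getClass().getName());
  }
}
{code}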



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff to support WebHDFS

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745164#comment-17745164
 ] 

ASF GitHub Bot commented on HDFS-13916:
---

steveloughran merged PR #5839:
URL: https://github.com/apache/hadoop/pull/5839




> Distcp SnapshotDiff to support WebHDFS
> --
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch, pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.006.patch, 
> HDFS-13916.007.patch, HDFS-13916.patch
>
>
> [~ljain] has worked on the JIRA: HDFS-13052 to provide the possibility to 
> make DistCP of SnapshotDiff with WebHDFSFileSystem. However, in the patch, 
> there is no modification for the real java class which is used by launching 
> the command "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check if the 
> file system is DFS. 
> So I propose to change the class DistCpSync in order to take into 
> consideration what was committed by Lokesh Jain.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17111) RBF: Optimize msync to only call nameservices that have observer reads enabled.

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745125#comment-17745125
 ] 

ASF GitHub Bot commented on HDFS-17111:
---

simbadzina commented on PR #5860:
URL: https://github.com/apache/hadoop/pull/5860#issuecomment-1644104524

   @virajjasani @goiri @hchaverri could you please help review this when you 
have bandwidth?




> RBF: Optimize msync to only call nameservices that have observer reads 
> enabled.
> ---
>
> Key: HDFS-17111
> URL: https://issues.apache.org/jira/browse/HDFS-17111
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Simbarashe Dzinamarira
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
>
> Right now when a client MSYNCs to the router, the call is fanned out to all 
> nameservices. We only need to proxy the msync to nameservices that have 
> observer reads configured.
> We can do this either by adding a new config for the admin to specify which 
> nameservices have CRS configured, or we can try to automatically detect these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17067) Use BlockingThreadPoolExecutorService for nnProbingThreadPool in ObserverReadProxy

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745116#comment-17745116
 ] 

ASF GitHub Bot commented on HDFS-17067:
---

xinglin commented on PR #5803:
URL: https://github.com/apache/hadoop/pull/5803#issuecomment-1644060205

   Thanks @mccormickt12 for reviewing and approving the PR!
   
   @goiri, could you take a look? thanks,




> Use BlockingThreadPoolExecutorService for nnProbingThreadPool in 
> ObserverReadProxy
> --
>
> Key: HDFS-17067
> URL: https://issues.apache.org/jira/browse/HDFS-17067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-17030, we introduced an ExecutorService, to submit 
> getHAServiceState() requests. We constructed the ExecutorService directly 
> from a basic ThreadPoolExecutor, without setting _allowCoreThreadTimeOut_ to 
> true. Then, the core thread will be kept up and running even when the main 
> thread exits. To fix it, one could set _allowCoreThreadTimeOut_ to true. 
> However, in this PR, we decide to directly use an existing executorService 
> implementation (_BlockingThreadPoolExecutorService_) in hadoop instead. It 
> takes care of setting _allowCoreThreadTimeOut_ and also allows setting the 
> prefix for thread names.
> {code:java}
>   private final ExecutorService nnProbingThreadPool =
>   new ThreadPoolExecutor(1, 4, 1L, TimeUnit.MINUTES,
>   new ArrayBlockingQueue(1024));
> {code}
> A second minor issue is we did not shutdown the executorService in close(). 
> It is a minor issue as close() will only be called when the garbage collector 
> starts to reclaim an ObserverReadProxyProvider object, not when there is no 
> reference to the ObserverReadProxyProvider object. The time between when an 
> ObserverReadProxyProvider becomes dereferenced and when the garbage collector 
> actually starts to reclaim that object is out of control/under-defined 
> (unless the program is shutdown with an explicit System.exit(1)).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745083#comment-17745083
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

hfutatzhanghb commented on code in PR #5866:
URL: https://github.com/apache/hadoop/pull/5866#discussion_r1269433239


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -6664,6 +6664,8 @@ public String getDecomNodes() {
   node.getLeavingServiceStatus().getOutOfServiceOnlyReplicas())
   .put("underReplicateInOpenFiles",
   node.getLeavingServiceStatus().getUnderReplicatedInOpenFiles())
+  .put("decommissionDuration",
+  monotonicNow() - node.getLeavingServiceStatus().getStartTime())

Review Comment:
   Hi @zhangshuyan0, very nice feature. But I have a question here: how do we 
distinguish the DECOMMISSIONED and IN_MAINTENANCE states? Looking forward to your 
reply.
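   (One possible way to answer that, sketched against the DatanodeInfo admin-state 
helpers; whether the PR ends up branching this way is entirely up to the author.)

{code:java}
// Sketch: only report a running duration for nodes still leaving service,
// and label the phase so DECOMMISSIONED / IN_MAINTENANCE are not conflated.
String phase;
if (node.isDecommissionInProgress()) {
  phase = "decommissioning";
} else if (node.isEnteringMaintenance()) {
  phase = "enteringMaintenance";
} else {
  // Already DECOMMISSIONED or IN_MAINTENANCE: a duration-so-far is not meaningful.
  phase = "completed";
}
{code}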





> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful info when 
> decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread Shuyan Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyan Zhang reassigned HDFS-17112:
---

Assignee: Shuyan Zhang

> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful info when 
> decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17112:
--
Labels: pull-request-available  (was: )

> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> Expose the decommission duration in the JMX page. It's very useful info when 
> decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745078#comment-17745078
 ] 

ASF GitHub Bot commented on HDFS-17112:
---

zhangshuyan0 opened a new pull request, #5866:
URL: https://github.com/apache/hadoop/pull/5866

   ### Description of PR
   
   Expose the decommission duration in the JMX page. It's very useful info when 
decommissioning a batch of datanodes in a cluster.
   
   




> Show decommission duration in JMX and HTML
> --
>
> Key: HDFS-17112
> URL: https://issues.apache.org/jira/browse/HDFS-17112
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shuyan Zhang
>Priority: Major
>
> Expose the decommission duration in the JMX page. It's very useful info when 
> decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17112) Show decommission duration in JMX and HTML

2023-07-20 Thread Shuyan Zhang (Jira)
Shuyan Zhang created HDFS-17112:
---

 Summary: Show decommission duration in JMX and HTML
 Key: HDFS-17112
 URL: https://issues.apache.org/jira/browse/HDFS-17112
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Shuyan Zhang


Expose the decommission duration in the JMX page. It's very useful info when 
decommissioning a batch of datanodes in a cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17074) Remove incorrect comment in TestRedudantBlocks#setup

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745067#comment-17745067
 ] 

ASF GitHub Bot commented on HDFS-17074:
---

hfutatzhanghb commented on PR #5822:
URL: https://github.com/apache/hadoop/pull/5822#issuecomment-1643836470

   @ayushtkn Sir, could you please also take a look at this PR when you have 
free time?




> Remove incorrect comment in TestRedudantBlocks#setup
> 
>
> Key: HDFS-17074
> URL: https://issues.apache.org/jira/browse/HDFS-17074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Trivial
>  Labels: pull-request-available
>
> In TestRedudantBlocks#setup(), the comment below is incorrect.
> {code:java}
> // disable block recovery 
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);{code}
> We should delete this comment.
> The correct usage is in TestAddOverReplicatedStripedBlocks#setup()
> {code:java}
> // disable block recovery
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY, 0);
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16993) Datanode supports configure TopN DatanodeNetworkCounts

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17745064#comment-17745064
 ] 

ASF GitHub Bot commented on HDFS-16993:
---

hfutatzhanghb commented on PR #5597:
URL: https://github.com/apache/hadoop/pull/5597#issuecomment-1643823739

   @ayushtkn Hi, sir. I have updated two unit tests; please take a look when 
you have free time. Thanks a lot.




> Datanode supports configure TopN DatanodeNetworkCounts
> --
>
> Key: HDFS-16993
> URL: https://issues.apache.org/jira/browse/HDFS-16993
> Project: Hadoop HDFS
>  Issue Type: Wish
>Affects Versions: 3.3.5
>Reporter: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> In our prod environment, we try to collect datanode metrics every 15s through 
> jmx_exporter. We found that the datanodenetworkerror metric generates a lot of data.
> For example, if we have a cluster with 1000 datanodes, every datanode may 
> generate 999 datanodenetworkerror metrics, and overall the datanodes will 
> generate 1000 * 999 = 999000 metrics. This is a very expensive 
> operation. In most scenarios, we only need the topN of it.
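(For illustration, a small sketch of the top-N selection being asked for, computed 
over a host-to-error-count map; the method name and map shape are placeholders, not 
the datanode's actual metrics code.)

{code:java}
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Keep only the N peers with the highest network error counts before
// exposing them through JMX, instead of publishing all ~999 per datanode.
static Map<String, Long> topNNetworkErrors(Map<String, Long> errorsByHost, int n) {
  return errorsByHost.entrySet().stream()
      .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
      .limit(n)
      .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
          (a, b) -> a, LinkedHashMap::new));
}
{code}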



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17093) In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incomplete block reporting

2023-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17744942#comment-17744942
 ] 

ASF GitHub Bot commented on HDFS-17093:
---

hadoop-yetus commented on PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#issuecomment-1643504733

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 35s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5855/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 198m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5855/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 303m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5855/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5855 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1c5d788046f5 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / db14c0abd3144c55923de3a3a47042bfa5d6beae |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private